Modern web development loves “hydration”: the server sends skeleton HTML, and JavaScript “hydrates” it with interactivity and data. For AI agents, this is a nightmare.
The Cost of Rendering
Running a headless browser (like Puppeteer) to execute JavaScript and wait for hydration is computationally expensive: a crawler might manage one page fetch per second. Fetching raw HTML allows 100+ page fetches per second.
AI agents are optimized for speed and token efficiency. If your content requires 5 seconds of JS execution to appear, the agent will likely time out or skip you.
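To make the gap concrete, here is a minimal sketch of the two fetch paths, assuming Node 18+ with puppeteer installed; the URL is a placeholder:

```typescript
import puppeteer from "puppeteer";

const url = "https://example.com/post"; // placeholder

// Path 1: raw HTML -- one HTTP round trip, no JS execution.
console.time("raw fetch");
const raw = await (await fetch(url)).text();
console.timeEnd("raw fetch"); // typically tens of milliseconds
console.log("raw bytes:", raw.length);

// Path 2: full render -- boot Chromium, execute JS, wait for the network to settle.
console.time("headless render");
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto(url, { waitUntil: "networkidle0" });
const rendered = await page.content();
await browser.close();
console.timeEnd("headless render"); // typically several seconds
console.log("rendered bytes:", rendered.length);
```

The second path also ties up a full Chromium process per page, which is why crawlers at scale overwhelmingly take the first.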
The “Skeleton” Problem
If an agent crawls your unhydrated page, it often sees:
<div>Loading content...</div>
This empty skeleton is what gets embedded in the vector database. When a user asks a question relevant to your content, your vector encodes “Loading…”, which matches nothing. You are effectively invisible to latent-space search.
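As a sketch of what a fast crawler actually ingests (naive regex tag-stripping with a placeholder URL; real pipelines use a proper HTML parser):

```typescript
// Fetch the raw, unhydrated HTML -- no JS is ever executed.
const html = await (await fetch("https://example.com/post")).text();

const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop JS bundles and payloads
  .replace(/<[^>]+>/g, " ")                    // drop all remaining tags
  .replace(/\s+/g, " ")
  .trim();

// On an unhydrated CSR page this prints something like: "Loading content..."
// That string, not your article, is what the embedding model receives.
console.log(text);
```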
Token Limits and DOM Bloat
Even if the agent renders your JS, Client-Side Rendering (CSR) often produces bloated DOMs full of data-reactid attributes, hydration markers, and shadow DOM roots.
This “code noise” fills up the context window (a rough accounting sketch follows the list):
- Context Window: 32k tokens.
- Your HTML: 50k tokens (mostly JS noise).
- Result: Truncation. The agent cuts off the bottom half of your page—often where the conclusion, comments, or key takeaway is located.
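A rough accounting sketch, using the common ~4-characters-per-token heuristic and regex-based cleanup; a real pipeline would use an HTML parser, and the URL is a placeholder:

```typescript
// Rough heuristic: ~4 characters per token for English text and markup.
const estimateTokens = (s: string): number => Math.ceil(s.length / 4);

// Strip the noise CSR leaves behind before it eats the context window.
function stripDomNoise(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "")             // hydration payloads
    .replace(/<!--[\s\S]*?-->/g, "")                        // framework comment markers
    .replace(/\s(?:data-[\w-]+|class|style)="[^"]*"/g, "")  // framework attributes
    .replace(/\s+/g, " ");
}

const html = await (await fetch("https://example.com/post")).text();
console.log("raw:", estimateTokens(html), "tokens");
console.log("stripped:", estimateTokens(stripDomNoise(html)), "tokens");
```

If the stripped version fits the window and the raw version does not, the noise is the only thing standing between your conclusion and the model.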
The Snapshot Strategy
Use Dynamic Rendering or Server-Side Rendering (SSR) to serve a clean, static HTML snapshot to bots (a middleware sketch follows this checklist):
- Strip <script> tags.
- Inline critical CSS.
- Pre-render all text.
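A minimal sketch of the bot-detection half, assuming Express and a ./snapshots directory of pre-rendered pages built at deploy time; the paths are illustrative:

```typescript
import express from "express";
import { readFile } from "node:fs/promises";

const app = express();

// UA fragments of common AI crawlers -- extend as new bots appear.
const AI_BOT_UA = /GPTBot|ClaudeBot/i;

app.use(async (req, res, next) => {
  // Humans keep the normal client-side app.
  if (!AI_BOT_UA.test(req.get("user-agent") ?? "")) return next();

  try {
    // Serve the pre-rendered snapshot: full text, no <script> tags.
    const file = `./snapshots${req.path === "/" ? "/index" : req.path}.html`;
    res.type("html").send(await readFile(file, "utf8"));
  } catch {
    next(); // no snapshot yet; fall back to the regular response
  }
});

app.listen(3000);
```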
Ensure that the snapshot contains the full text content. Verify it by requesting your pages with the User-Agent string of a common AI bot (e.g., GPTBot, ClaudeBot), or by disabling JavaScript in your browser’s DevTools. What you see is what the model knows.
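A quick spot-check script; the URL, UA string, and marker phrase are placeholders for your own:

```typescript
// Request the page the way a bot would, with no JS execution.
const res = await fetch("https://example.com/post", {
  headers: { "User-Agent": "GPTBot" }, // substitute the full UA you target
});
const html = await res.text();

// Pick a phrase that only appears in the fully rendered article body.
const marker = "your key takeaway sentence";
console.log(html.includes(marker) ? "full text served" : "bots see a skeleton");
```

Run it against a handful of representative URLs after every deploy, not just once.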
If you rely on client-side JS for your main content, you are optimizing for humans in 2015, not agents in 2026.