For the last decade, the mantra of content marketing has been “Long-Form Content.” Creating 3,000-word “Ultimate Guides” was the surest way to rank. But as the consumers of content shift from bored humans to efficient AI agents, this strategy is hitting a wall. The new metric of success is Information Density.

The Context Window Constraint

While context windows are growing (128k, 1M tokens), they are not infinite, and more importantly, "reasoning" over long contexts is expensive and prone to the "Lost in the Middle" effect, where models recall information at the beginning and end of a prompt far more reliably than information buried in the middle.

If you write a 3,000-word article where the core answer is buried in paragraph 45, an agent might:

  1. Truncate the text before reaching the answer (see the sketch after this list).
  2. Lose the specific detail amidst the noise.
  3. Rank the content lower for “helpfulness.”
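
To make the risk concrete, here is a minimal sketch of that first failure mode, assuming a hypothetical agent with a fixed token budget and a crude words-as-tokens approximation (real tokenizers and retrieval pipelines differ):

```python
# Minimal sketch: a hypothetical agent with a fixed token budget naively
# truncates a long article. Tokens are approximated as whitespace-separated
# words, which is a rough stand-in for a real tokenizer.

FILLER_PARAGRAPH = "Here is some warm, wandering introduction. " * 10
ANSWER = "The core answer: preheat the oven to 220C for exactly 12 minutes."

# ~50 paragraphs of fluff, with the actual answer buried at paragraph 45.
paragraphs = [FILLER_PARAGRAPH] * 44 + [ANSWER] + [FILLER_PARAGRAPH] * 5
article = "\n\n".join(paragraphs)

TOKEN_BUDGET = 2000  # hypothetical slice of context the agent allots per page

def truncate(text: str, budget: int) -> str:
    """Keep only the first `budget` whitespace-delimited tokens."""
    return " ".join(text.split()[:budget])

retrieved = truncate(article, TOKEN_BUDGET)
print("Answer survived truncation:", ANSWER in retrieved)  # prints False
```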

The Fluff Penalty

AI models are trained to predict the next token. When they encounter "fluff" (repetitive introductions, rhetorical questions, filler adjectives), their perplexity drops because the text is highly predictable, but the informational value per token plummets.

High-quality content for AI should feel like a dense academic abstract or a technical manual.

Writing for the Agent

  • Front-load the Value: Adhere to the BLUF (Bottom Line Up Front) principle. Put the answer in the first 50 words.
  • Structure over Narrative: Use bullet points, data tables, and numbered lists. These are easier for an attention head to parse than long, winding paragraphs.
  • Remove Transitional Phrasing: Agents don’t need “Let’s dive in,” “Furthermore,” or “Check this out.” These are wasted tokens (a rough filter sketch follows this list).
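
As a rough illustration of that last point, here is a pre-publication pass that flags transitional filler. The phrase list and the words-as-tokens estimate are illustrative assumptions, not an established standard:

```python
# Rough sketch: count tokens spent on known transitional filler before
# publishing. Phrase list and token estimate are illustrative assumptions.
import re

FILLER_PHRASES = [
    "let's dive in",
    "furthermore",
    "check this out",
    "without further ado",
]

def wasted_tokens(text: str) -> int:
    """Approximate tokens spent on known filler phrases."""
    lowered = text.lower()
    wasted = 0
    for phrase in FILLER_PHRASES:
        hits = len(re.findall(re.escape(phrase), lowered))
        wasted += hits * len(phrase.split())
    return wasted

draft = "Let's dive in! Furthermore, caching cuts load time by 40%. Check this out."
print(wasted_tokens(draft), "filler tokens out of", len(draft.split()))
```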

The “Token-to-Fact” Ratio

We propose a new audit metric, the Token-to-Fact Ratio: how many tokens does it take to convey one unique piece of information? (A rough scoring sketch follows the examples below.)

  • Bad: 500 tokens / 1 fact. (Typical recipe blog)
  • Good: 50 tokens / 1 fact. (Wikipedia)
  • Agent Ideal: 20 tokens / 1 fact. (JSON-LD or specialized documentation)
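
A back-of-the-envelope way to run this audit: count tokens, divide by the number of unique facts. The fact counts here still have to come from a human or model annotator, and the word-level token count is only an approximation:

```python
# Back-of-the-envelope Token-to-Fact audit. Fact counts are supplied by a
# human or model annotator; tokens are approximated as words.
import json

def token_to_fact_ratio(text: str, fact_count: int) -> float:
    """Tokens spent per unique fact (lower is denser)."""
    tokens = len(text.split())
    return tokens / max(fact_count, 1)

# A JSON-LD-style snippet packs several facts into very few tokens.
dense = json.dumps({
    "@type": "Recipe",
    "name": "Focaccia",
    "prepTime": "PT20M",
    "cookTime": "PT25M",
    "recipeYield": "8 servings",
})

fluffy = ("Growing up, summers at my grandmother's house always smelled of "
          "rosemary, and every afternoon we would wander the garden before "
          "she finally let us near the oven to bake her famous focaccia.")

print(round(token_to_fact_ratio(dense, 5), 1))   # a few tokens per fact
print(round(token_to_fact_ratio(fluffy, 1), 1))  # dozens of tokens per fact
```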

Future SEOs will be editors who ruthlessly cut words to save the robot’s time.