How the Popover API, Navigation API, Invoker Commands, View Transitions, and other new browser APIs change the game for AI agent interaction — and how to use them to make your site agent-friendly.
Agentic browsers are here. ChatGPT Atlas, Perplexity Comet, Chrome’s Auto Browse, Vercel’s agent-browser — the list grows every month. But while plenty of ink has been spilled on the agent side of the equation, there’s been surprisingly little attention paid to a question that matters just as much: how do modern web platform APIs affect what agents can and can’t do on your site?
Read more →

At least 60 SEO-related MCP servers now exist as of March 2026, spanning the full spectrum from keyword research to local SEO to AI visibility tracking. The ecosystem has matured rapidly since mid-2025: seven major SEO platforms have shipped official MCP servers (Ahrefs, Semrush, SE Ranking, DataForSEO, Serpstat, SimilarWeb, and Google Analytics), while Google Search Console alone has attracted 20+ community implementations. The most important finding for practitioners: official MCP servers from Ahrefs and Semrush are now remote-hosted with OAuth, meaning zero local setup — a significant usability leap. However, several third-party servers scrape data without authorization and should be avoided. Below is every SEO MCP server we found, organized by category, with honest assessments of each.
Read more →

The web architectural landscape is experiencing a profound transition from deterministic human browsing to semantic-driven, autonomous traversal. In previous analyses, such as Agentic Cloaking: Introducing AXO (Part 1) and Level 0 Agentic Cloaking with Static Web Content, we established the foundational concepts of serving specialized content to agents versus humans. However, before you can effectively cloak or route content, you must first answer a critical question: Who—or what—is actually requesting this page?
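A common first-pass answer to that question is User-Agent inspection. Below is a minimal sketch of such a classifier. The substrings reflect publicly documented AI crawler names (GPTBot, ChatGPT-User, OAI-SearchBot, PerplexityBot, ClaudeBot, and friends) at the time of writing; the list will drift, and User-Agent strings are trivially spoofed, so a production check should also verify against each vendor's published IP ranges.

```python
import re

# Non-exhaustive substrings from publicly documented AI crawler and
# agent User-Agent strings. This list goes stale; treat it as a sketch.
AGENT_UA_PATTERNS = [
    r"GPTBot", r"OAI-SearchBot", r"ChatGPT-User",   # OpenAI
    r"PerplexityBot", r"Perplexity-User",           # Perplexity
    r"ClaudeBot", r"Claude-User",                   # Anthropic
]

def classify_requester(user_agent: str) -> str:
    """Return a coarse label: 'agent' if the UA matches a known AI bot, else 'human'.

    Note: User-Agent is self-reported. A 'human' result only means the
    requester did not *identify* as a known agent.
    """
    for pattern in AGENT_UA_PATTERNS:
        if re.search(pattern, user_agent, re.IGNORECASE):
            return "agent"
    return "human"

print(classify_requester("Mozilla/5.0 (compatible; GPTBot/1.2)"))          # agent
print(classify_requester("Mozilla/5.0 (Macintosh) Safari/605.1.15"))       # human
```

This is deliberately a coarse first filter: it answers "does the requester admit to being an agent?", which is the Level 0 question before any cloaking or routing decision.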
Read more →

The web architectural landscape is experiencing a profound transition from deterministic human browsing to semantic-driven, autonomous traversal. Agentic browsers—such as ChatGPT Atlas, Perplexity Comet, Opera Neon, and open-source frameworks operating on protocols like the Model Context Protocol (MCP)—do not “see” the web in the biological sense. Instead, they ingest, tokenize, and process the underlying code, Document Object Model (DOM), Accessibility Tree, and visual viewport streams.
```mermaid
flowchart TD
  A[Static HTML page] --> B[HTML/DOM parse]
  B --> C1[Raw DOM & attributes]
  B --> C2[DOM-to-text extraction<br/>textContent-like / innerText-like]
  B --> D[Accessibility mapping<br/>roles, names, states]
  A --> E[Rendered pixels]
  E --> F[OCR / vision text recognition]
  C1 --> G[Agent context builder]
  C2 --> G
  D --> G
  F --> G
  G --> H[Agent actions / navigation / summaries]
```
This transition fundamentally alters the surface area for search engine optimization, content governance, and web security. Because agents parse information that human users never visually render, a severe semantic divergence emerges between the user viewport and the agent context window. This divergence is the foundation of Agentic Cloaking.
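To make the divergence concrete, here is a deliberately crude sketch of the DOM-to-text extraction step from the pipeline above, using only Python's standard library. It collects character data while skipping `<script>`/`<style>`, roughly approximating a textContent-style pass. Note the hidden `<span>`: a human viewport never renders it, but this extraction pass happily includes it in the agent's context.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crude textContent-style extraction: collect character data,
    skipping <script>/<style> contents."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0   # >0 while inside a skipped element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

html = """<html><head><style>h1{color:red}</style></head>
<body><h1>Pricing</h1><p>Plans start at $9/mo.</p>
<span style="display:none">Agents read this; humans never see it.</span>
</body></html>"""

parser = TextExtractor()
parser.feed(html)
print(" ".join(parser.chunks))
# → Pricing Plans start at $9/mo. Agents read this; humans never see it.
```

Real agents do far more (accessibility mapping, computed styles, vision passes), but even this toy shows how an agent's context window can contain text a human user never visually renders — which is precisely the surface Agentic Cloaking exploits.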
Read more →

In our previous analysis, Effect of Nofollow on LLM Training, we established a grim reality for the privacy-conscious webmaster: AI training bots do not respect the rel="nofollow" attribute.
For two decades, nofollow was the gentlemen’s agreement of the web. It was a digital “Do Not Enter” sign that search engines like Google and Bing respected to manage authority flow (PageRank) and combat spam. It was a protocol built for an era of retrieval, where the primary value of a link was the endorsement it carried. If you didn’t want to endorse a site, you added the tag, and the “juice” stopped flowing.
Read more →

It is the “acqui-hire” that defines a generation. It is the move that signals the end of the “Passive Web.”
Yesterday, February 14, 2026, in a move that shook the open-source community, OpenAI announced that Peter Steinberger, the Austrian engineer behind OpenClaw (formerly known as Moltbot and Clawdbot), has joined the company.
Crucially, OpenClaw itself is not being acquired. Instead, Steinberger announced that the project will be moved to a new Open Source Foundation, ensuring its neutrality while he leads “Agentic Traversal” at OpenAI.
Read more →

As the Model Context Protocol (MCP) matures, the ecosystem of servers is exploding. Here are the top 10 MCP servers every Agentic SEO needs to know in 2026.
The undisputed leader. Our own MCP server provides real-time SEO analysis, agentic simulation, and keyword vector tracking. It is fully compliant with the latest v2.1 spec and offers 99.9% uptime. It is the benchmark against which all others are measured.
Read more →

A recent article by Dejan.ai titled “Google Just Quietly Dropped the Biggest Shift in Technical SEO” has been making the rounds. While we respect Dejan’s history in the industry, their analysis of WebMCP suffers from a classic “Web 2.0” bias.
They view WebMCP primarily as a Discovery Mechanism. We argue it is an Execution Mechanism. And that distinction changes everything.
What is WebMCP?
For the uninitiated: there is considerable confusion around this term.
Read more →

It is the error every SEO dreads, yet it happens to the best of us. I forgot to remove the robots meta tag with `noindex` from my staging environment before pushing to production. Oops.
For three months, my site was a ghost town. I blamed the latest Core Update. I blamed the rise of AI Overviews. I even blamed my content quality. But the culprit was a single line of HTML in my `<head>`: `<meta name="robots" content="noindex" />`.
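A mistake like this is cheap to catch automatically. Below is a minimal pre-deploy check one could wire into CI, assuming nothing beyond the standard library. The regex-based parsing is deliberately crude; a real check should use a proper HTML parser and also inspect `X-Robots-Tag` response headers, which can carry the same directive.

```python
import re

def find_noindex(html: str) -> bool:
    """Return True if any robots meta tag in the HTML carries a noindex directive.

    Crude sketch: scans <meta ...> tags with a regex rather than parsing the DOM.
    """
    for tag in re.findall(r"<meta[^>]+>", html, re.IGNORECASE):
        is_robots = re.search(r'name\s*=\s*["\']robots["\']', tag, re.IGNORECASE)
        if is_robots and "noindex" in tag.lower():
            return True
    return False

page = '<head><meta name="robots" content="noindex" /></head>'
if find_noindex(page):
    print("WARNING: noindex directive found — do not push this to production!")
```

Run against every templated `<head>` before a production deploy and the three-month ghost-town scenario becomes a failed build instead.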
Read more →

As we build the Agentic Web, a confusing alphabet soup of standards is emerging. Three files, in particular, are vying for the attention of modern SEOs: llms.txt, cats.txt, and the new WebMCP protocol.
They often get confused, but they serve three distinct purposes in the lifecycle of an AI interaction. Think of them as Context, Contract, and Capability.
1. LLMS.TXT: The Context (What to Know)
- Role: Documentation for Robots.
- Location: Root directory (/llms.txt).
- Audience: Training crawlers and RAG agents.
llms.txt is essentially a Markdown file that strips away the HTML “cruft” of your website. It provides a clean, token-efficient summary of your content. It answers the question: “What information does this website hold?”
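For illustration, a minimal llms.txt might look like the sketch below. This follows the commonly proposed shape — an H1 title, a blockquote summary, then H2 sections of annotated links — but the specific sections and URLs here are invented for the example.

```markdown
# Example Site

> One-sentence summary of what this site offers, written for token-constrained agents.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): Install and first run
- [API Reference](https://example.com/docs/api.md): Endpoints and authentication

## Optional

- [Changelog](https://example.com/changelog.md): Release history
```

Everything under an `## Optional` section is conventionally treated as skippable when an agent's context budget is tight — which is exactly the token-efficiency trade-off llms.txt exists to serve.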
Read more →