A recent article by Dejan.ai titled “Google Just Quietly Dropped the Biggest Shift in Technical SEO” has been making the rounds. While we respect Dejan’s history in the industry, their analysis of WebMCP suffers from a classic “Web 2.0” bias.

They view WebMCP primarily as a Discovery Mechanism. We argue it is an Execution Mechanism. And that distinction changes everything.

What is WebMCP?

For the uninitiated: vast confusion surrounds this term, so let’s separate the two pieces.

MCP (Model Context Protocol) is the open standard (maintained by Anthropic and others) that defines how AI models interact with data and tools. It is a specification for the JSON-RPC messages exchanged between a host (like Claude Desktop) and a server.

WebMCP is the transport layer that puts this on the open web. Instead of running a local server process (stdio), WebMCP uses HTTP and Server-Sent Events (SSE) to expose these tools over the internet.

  • MCP: The grammar of the conversation.
  • WebMCP: The telephone line.

You can read the full specification at spec.modelcontextprotocol.io. Dejan.ai treats them interchangeably, which leads to the first major error.
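
To make the division of labor concrete, here is roughly what the wire traffic looks like, sketched as TypeScript object literals. The tools/list method and the result shape follow the public MCP specification; the tool shown is our own illustrative example.

// The agent (host) asks the server what it can do:
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// The server answers with the tools it exposes:
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "check_product_price",
        description: "Retrieves current price and stock status for a SKU.",
        inputSchema: {
          type: "object",
          properties: { sku: { type: "string" } },
          required: ["sku"],
        },
      },
    ],
  },
};

MCP defines the shapes of these messages (the grammar); WebMCP is merely how the bytes travel between agent and site (the telephone line).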

The “Sitemap Fallacy”

Dejan.ai compares WebMCP to “Structured Data” and “Sitemaps.” They write:

“Tool discoverability is the new indexing problem… You’ll want your tools found, understood, and preferred over competitors’.”

This assumes the primary goal of an agent is to find a tool. But in an Agentic world, discovery is trivial. The bottleneck is Trust and Safety.

The article completely glosses over the security implications. It encourages SEOs to “markup your forms” to make them agent-ready. This is dangerous advice without a corresponding discussion of Tool Auth.

The Gatekeepers: cats.txt and Auth

Opening your database to any agent on the web is suicide. The ecosystem relies on two gates that Dejan.ai ignores:

  1. CATS.TXT (Common Agent Transport Standard): A discovery manifest (likely living at /.well-known/cats.txt) that points to your WebMCP endpoints and defines which agents are allowed to “handshake.” (A hypothetical sketch follows this list.)
  2. Tool Authentication: You cannot just expose an endpoint. You need an OAuth flow or API Key exchange (part of the MCP connection initialization) to verify the agent’s identity.
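
As far as we know there is no frozen spec for cats.txt yet, so treat the following as a purely hypothetical illustration of what such a manifest could contain. Every field name here is invented.

// Hypothetical /.well-known/cats.txt contents, sketched as a TypeScript
// object literal. Field names are invented for illustration only.
const catsManifest = {
  version: "0.1",
  mcp_endpoints: ["https://example.com/mcp"], // where agents handshake
  allowed_agents: ["claude", "chatgpt"],      // who may handshake
  auth: "oauth2",                             // what gate #2 requires
};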

If you expose your “Add to Cart” form as a WebMCP tool without rate limiting or agent-verification, you are inviting a DDoS attack, not a customer. Bots can get stuck in loops, retrying failed tool calls thousands of times a minute. Who pays for that compute? You do.
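
The minimum defense is to identify the caller and throttle it. Below is a minimal sketch of per-agent rate limiting using Express; the header name, window, and limits are all illustrative, and in production the agent identity would come from your OAuth or API key verification step rather than a raw header.

import express from "express";

const app = express();
const WINDOW_MS = 60_000; // one-minute window (illustrative)
const MAX_CALLS = 30;     // calls per agent per window (illustrative)
const counters = new Map<string, { count: number; windowStart: number }>();

// Per-agent throttle. "x-agent-id" is a placeholder; real identity
// comes from the auth handshake described above.
function rateLimit(req: express.Request, res: express.Response, next: express.NextFunction) {
  const agentId = req.header("x-agent-id") ?? "anonymous";
  const now = Date.now();
  const entry = counters.get(agentId) ?? { count: 0, windowStart: now };
  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }
  entry.count += 1;
  counters.set(agentId, entry);
  if (entry.count > MAX_CALLS) {
    // A 429 tells a well-behaved agent to back off instead of looping.
    res.status(429).json({ error: "Rate limit exceeded; retry later" });
    return;
  }
  next();
}

app.post("/mcp", express.json(), rateLimit, (req, res) => {
  res.json({ ok: true }); // your actual WebMCP handler goes here
});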

Missing the “Agentic Loop”

Dejan’s analysis focuses on the start of the interaction (discovery). It ignores the loop.

WebMCP isn’t just about telling an agent “I have a tool.” It’s about the return value.

  • Dejan’s View: Optimization = Better Descriptions (to get the click).
  • Our View: Optimization = Better Return Types (to complete the task).

If your tool returns a string “Success”, the agent is confused. If it returns a JSON object { status: "confirmed", order_id: "123" }, the agent can proceed to the next step. Dejan’s guide treats the output of the tool as an afterthought.
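
In MCP terms, a tool result is a content array. Here is a minimal sketch of a handler that closes the loop properly; the order-placement logic is a placeholder.

// Return machine-readable state the agent can act on, not prose.
function addToCartTool(args: { sku: string; quantity: number }) {
  const orderId = placeOrder(args.sku, args.quantity);
  return {
    content: [
      {
        type: "text" as const,
        // The agent parses this and carries order_id into its next step.
        text: JSON.stringify({ status: "confirmed", order_id: orderId }),
      },
    ],
  };
}

function placeOrder(sku: string, quantity: number): string {
  return "123"; // stand-in for your real order system
}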

“Agentic CRO” is a Red Herring

The article coins the term “Agentic CRO” (Conversion Rate Optimization). While catchy, it misapplies human psychology to mathematical models. Agents don’t have “psychology.” They have Objective Functions.

You don’t “persuade” an agent with “positive descriptions” (as Dejan suggests). You constrain an agent with precise TypeScript definitions.
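
What that constraint looks like in practice, as a sketch (names are illustrative): the type is the persuasion.

type ProductStatus = "active" | "inactive"; // an enum, not a vibe

interface CheckPriceInput {
  // Must match ^[A-Z]{3}-\d{3}$; the agent either satisfies this
  // or cannot call the tool at all. No copywriting required.
  sku: string;
  status?: ProductStatus; // optional filter with no room to hallucinate
}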

The Anatomy of a Tool Description

This is not a theoretical debate. Research from Anthropic and OpenAI on Tool Grounding shows how agents select tools: they rely on semantic similarity between the user’s prompt and the tool’s description field to pick a tool, but on the inputSchema to decide whether they can call it.

The “Marketing” Approach (Bad)

{
  "name": "check_price",
  "description": "Get the best amazing prices for our incredible products! You will love them."
}

Result: The agent ignores the fluff. It doesn’t know what product to check.

The “Agentic” Approach (Good)

{
  "name": "check_product_price",
  "description": "Retrieves current price and stock status for a specific SKU. Returns 404 if SKU not found.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "sku": { "type": "string", "pattern": "^[A-Z]{3}-\\d{3}$" }
    },
    "required": ["sku"]
  }
}

Result: The agent knows exactly what to supply (a SKU in the required format) and what to expect back.

The “copywriting” advice in the article is actively harmful. You shouldn’t write “Click here for amazing deals!” in a tool description. You should write “Returns list of products sorted by price ascending.”

The Hierarchy of Control

To understand why “SEO copywriting” fails here, you must understand how an agent prioritizes information. When an LLM evaluates a tool, it follows a strict hierarchy of control:

  1. Input Schema (Hard Constraint): “I must provide a string matching this regex.” If the model cannot generate arguments that satisfy the schema, it will not call the tool.
  2. Tool Description (Semantic Grounding): “This tool does X.” The model uses this to calculate the semantic similarity between its current goal and the tool’s capability.
  3. Function Name (Structural Hint): get_price vs calculate_shipping. A weak signal, but useful for disambiguation.

Dejan.ai focuses entirely on layer #2. But if you fail layer #1 (by using loose schemas), layer #2 doesn’t matter. Optimization must start at the Schema.
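
The hierarchy, annotated on the tool definition from earlier:

// The same SKU tool, annotated with the hierarchy of control.
const checkProductPrice = {
  // Layer 3 (weak signal): a structural hint for disambiguation.
  name: "check_product_price",
  // Layer 2 (semantic grounding): matched against the user's goal.
  description:
    "Retrieves current price and stock status for a specific SKU. Returns 404 if SKU not found.",
  // Layer 1 (hard constraint): fail here and layers 2-3 never matter.
  inputSchema: {
    type: "object",
    properties: {
      sku: { type: "string", pattern: "^[A-Z]{3}-\\d{3}$" },
    },
    required: ["sku"],
  },
};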

Constructive Guide: Optimizing for the Machine

If you want to do “WebMCP SEO,” forget keywords. Focus on Interface Design (the three rules below are combined into one sketch after the list).

  1. Typed Inputs: Never use string when you can use an enum. If a tool takes a status, define ["active", "inactive"]. This prevents the agent from hallucinating invalid parameters.
  2. Deterministic Outputs: Agents hate ambiguity. Return structured JSON, not natural language sentences.
  3. Error Handling: When a tool call fails, return a precise error message (“SKU not found”) rather than a generic 500, so the agent can self-correct (“I should ask the user for the SKU”).
    • Bad: { "error": "Failed" }
    • Good: { "error": "SKU not found: SKU-123 does not exist in catalog" }
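
All three rules in one sketch (the catalog, names, and error shapes are illustrative):

// 1. Typed input: an enum the agent cannot hallucinate around.
const listProductsSchema = {
  type: "object",
  properties: {
    status: { type: "string", enum: ["active", "inactive"] },
  },
  required: ["status"],
};

const catalog = [
  { sku: "ABC-123", status: "active", price: 19.99 },
  { sku: "DEF-456", status: "inactive", price: 9.99 },
];

function checkPrice(args: { sku: string }) {
  const product = catalog.find((p) => p.sku === args.sku);
  if (!product) {
    // 3. Precise error: names the failure so the agent can self-correct.
    return { error: `SKU not found: ${args.sku} does not exist in catalog` };
  }
  // 2. Deterministic output: structured JSON, not "The price is $19.99."
  return { sku: product.sku, price: product.price, status: product.status };
}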

Common WebMCP Mistakes

Warning: Do Not Do This

  • Using string when you mean enum: Don’t ask for “status”. Ask for "pending" | "shipped".
  • Pagination without Navigation: Returning a list of 10 items without a next_token or page_number leaves the agent stranded.
  • Silent Failures: Returning an empty list [] when an ID was invalid. Throw an error so the agent knows it made a mistake.
  • Timeout Expectations: Agents will not wait on a hanging connection. If your API takes 30 seconds to run a report, use an async pattern (return a job_id and a check_status tool, as sketched below) rather than keeping the WebMCP connection open.
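
The async pattern from that last bullet, sketched with an in-memory job store (tool names and the store are illustrative; requires Node 18+ for the global crypto):

type Job = { status: "running" | "done"; result?: unknown };
const jobs = new Map<string, Job>();

// Tool 1: start the slow work and hand back a handle immediately.
function runReportTool(): { job_id: string } {
  const jobId = crypto.randomUUID();
  jobs.set(jobId, { status: "running" });
  generateReport().then((result) => jobs.set(jobId, { status: "done", result }));
  return { job_id: jobId };
}

// Tool 2: let the agent poll instead of holding the connection open.
function checkStatusTool(args: { job_id: string }): Job | { error: string } {
  return jobs.get(args.job_id) ?? { error: `Unknown job_id: ${args.job_id}` };
}

async function generateReport(): Promise<unknown> {
  return { rows: 42 }; // stand-in for the 30-second report run
}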

Testing Your Tools

How do you verify your WebMCP tool is actually “Agent Ready”? You cannot just rely on curl. You need to simulate the cognitive loop; a scripted version of the “Find” and “Use” tests follows the list below.

  1. Prompt Simulation: Open Claude (or your agent of choice). Paste your raw Tool JSON into the chat context.
  2. The “Find” Test: Ask user-centric questions (“How much is the Sony camera?”). Does it select your tool?
  3. The “Use” Test: Does it construct valid JSON inputs based on your schema?
  4. The “Error” Test: Force a failure (give it a fake SKU). Does the agent recover and ask for a correction, or does it hallucinate a success?
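
To script the “Find” and “Use” tests, something like the following works against Anthropic’s Messages API via the official TypeScript SDK. The model name is illustrative, and you would substitute your own tool definition.

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const tool = {
  name: "check_product_price",
  description:
    "Retrieves current price and stock status for a specific SKU. Returns 404 if SKU not found.",
  input_schema: {
    type: "object" as const,
    properties: { sku: { type: "string", pattern: "^[A-Z]{3}-\\d{3}$" } },
    required: ["sku"],
  },
};

async function runFindAndUseTests() {
  const msg = await client.messages.create({
    model: "claude-sonnet-4-20250514", // illustrative model name
    max_tokens: 1024,
    tools: [tool],
    messages: [{ role: "user", content: "How much is the Sony camera?" }],
  });
  // "Find" test: did the model select the tool at all?
  const call = msg.content.find((block) => block.type === "tool_use");
  if (!call || call.type !== "tool_use") {
    console.log("Find test failed: tool was not selected");
    return;
  }
  // "Use" test: inspect the constructed arguments against your schema.
  console.log("Tool selected with input:", call.input);
}

runFindAndUseTests();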

Conclusion

Dejan.ai identifies the trend correctly (WebMCP is huge), but misdiagnoses the nature of the beast. They see it as another tag to implement for Google rankings. In reality, it is a fundamental shift in software architecture. Treat it like code, not content.