Measuring Share of Model (SOM) via PR Campaigns

How do you measure Public Relations success in an AI world? Impressions are irrelevant. Clicks are vanishing. We introduce Share of Model (SOM).

What is SOM?

Share of Model measures how often an LLM surfaces your brand, relative to competitors, in its generated output for relevant queries. It is the probabilistic likelihood of your brand being the “answer.”

The SOM Formula

SOM = P(Brand | Intent) / Σᵢ P(Competitorᵢ | Intent)
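
In practice, the conditional probabilities have to be estimated empirically, for example by sampling many generations for the same intent and counting mentions. A minimal sketch, using hypothetical mention counts:

```python
# Estimate Share of Model from brand-mention counts across sampled generations.
# The counts below are hypothetical illustration data, not real measurements.
mentions = {"YourBrand": 42, "CompetitorA": 31, "CompetitorB": 17}

def share_of_model(brand: str, counts: dict[str, int]) -> float:
    """P(brand | intent) divided by the summed probability of all competitors."""
    total = sum(counts.values())
    p_brand = counts[brand] / total
    p_competitors = sum(c for name, c in counts.items() if name != brand) / total
    return p_brand / p_competitors

print(round(share_of_model("YourBrand", mentions), 3))  # 42 / (31 + 17) = 0.875
```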

Read more →

The 'Quality' Lie: Why 'Crawled - Currently Not Indexed' is an Economic Decision

There is a comforting lie that SEOs tell themselves when they see the dreaded “Crawled - currently not indexed” status in Google Search Console (GSC). The lie is: “My content just needs to be better.”

We audit the page. We add more H2s. We add a video. We “optimize” the meta description. And then we wait. And it stays not indexed.

The uncomfortable truth of 2025 is that indexing is no longer a meritocracy of quality; it is a calculation of marginal utility. Google is not rejecting your page because it is “bad.” Google is rejecting your page because indexing it costs more in electricity and storage than it will ever generate in ad revenue.

Read more →

Structuring Data for Zero-Shot Answers

In the world of Generative AI, “Zero-Shot” means the model can answer a question without needing examples or further prompting. Content marketing that structures data effectively wins the “answer engine” game because it facilitates this Zero-Shot retrieval.

The Zero-Shot Goal

You want the AI to read your content once and be able to answer any question about it correctly forever.

  • Poorly Structured: “We usually think about offering good prices, maybe around $10.” (Ambiguous).
  • Zero-Shot Ready: “The price is $10.” (Definitive).

Key Tactics for Zero-Shot Optimization

  1. Q&A Schema: Explicitly mark up questions and answers using FAQSchema. This puts the Q and the A in strict proximity.
  2. Definitive Statements: Avoid hedging. Use “X is Y” rather than “X might be considered Y.” Agents are trained to output the most probable token. If your text is probabilistic (“maybe”), the agent’s confidence score drops.
  3. Data Tables: Comparative data in table format is highly retrievable. Markdown tables are token-efficient and maintain the row/column relationship that vectors respect.
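
Tactic 1 can be made concrete with schema.org’s FAQPage markup, which keeps each question and its answer in one structure. A minimal sketch that emits the JSON-LD (the question and answer strings are hypothetical examples):

```python
import json

# Build schema.org FAQPage JSON-LD, keeping each Q and its A in strict proximity.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the price?",
            "acceptedAnswer": {"@type": "Answer", "text": "The price is $10."},
        }
    ],
}

# Embed this output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq, indent=2))
```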

The “Ground Truth” Strategy

Your content should aspire to be the “Ground Truth” for your niche. This means whenever there is a conflict in the training data (e.g., one site says “blue,” another says “red”), your site is the one the model defaults to. You achieve this by:

Read more →

The Bedrock of Strategy: Geological Entities in Knowledge Graphs

Geological features are named entities. “Mount Everest” is an entity. “The San Andreas Fault” is an entity. “The Pierre Shale Formation” is an entity.

For researchers in the geospatial domain, linking your content to these distinct entities is the bedrock of MCP-SEO.

Disambiguation via Wikidata

“Paris” is a city in France. “Paris” is also a city in Texas. “Paris” is also a rock formation (hypothetically). To ensure an AI understands you are talking about the rock formation, you must link to its Wikidata ID (e.g., Q12345).
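
The usual mechanism for this link is a schema.org `sameAs` property pointing at the Wikidata URL. A minimal sketch, reusing the placeholder ID Q12345 from the text (not a real rock formation):

```python
import json

# JSON-LD "sameAs" link pinning this page's "Paris" to one specific Wikidata
# entity, so the model cannot confuse it with the city in France or Texas.
entity = {
    "@context": "https://schema.org",
    "@type": "Place",
    "name": "Paris",
    "sameAs": "https://www.wikidata.org/wiki/Q12345",  # placeholder ID from the text
}
print(json.dumps(entity))
```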

Read more →

Measuring 'Compute per Query' for Your Content

A new metric is emerging in the AI optimization space: Inference Cost. How much compute (FLOPs) does it take for a model to process, understand, and answer a question using your content?

This sounds abstract, but it translates directly to money for the AI provider.

  • High Entropy Content: Convoluted sentences, ambiguous grammar, poor structure. Requires more “attention heads” and potentially multiple passes (Chain-of-Thought) to parse. Cost: High.
  • Low Entropy Content: Simple, declarative sentences. Subject-Verb-Object. Cost: Low.
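
The contrast above can be approximated with a crude proxy: average sentence length. This is a hypothetical stand-in for parse cost, not a real FLOP measurement, but it captures the intuition that long, clause-heavy sentences demand more work per token:

```python
import re

# Crude proxy for "parse cost": longer, clause-heavy sentences tend to demand
# more attention per token. A rough heuristic, not an actual compute metric.
def avg_sentence_length(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words_per_sentence = [len(s.split()) for s in sentences]
    return sum(words_per_sentence) / len(words_per_sentence)

high_entropy = ("Pricing, which we usually think about offering, considering "
                "market factors and various conditions, may be around $10.")
low_entropy = "The price is $10. Shipping is free."

print(avg_sentence_length(high_entropy))  # 17.0 — one long, winding sentence
print(avg_sentence_length(low_entropy))   # 3.5 — short declaratives
```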

The Economic Bias

Models are optimized for efficiency. We hypothesize that retrieval systems will deprioritize sources that consistently require high inference compute. If your content is “hard to read” for the machine, it is expensive to serve.

Read more →

Optimizing Frontmatter for Retrieval

The metadata block at the top of a Markdown file, known as Frontmatter, is the most valuable real estate for MCP-SEO. It is structured data that sits before the content, framing the model’s understanding.

Beyond Title and Date

Most Hugo or Jekyll sites just use title and date. To optimize for retrieval, you should inject semantic richness here.

  1. summary: A dense 50-word abstract. Agents often read this first to decide if the full document is worth processing.
  2. keywords: Explicit vector keywords. “Neuroscience, synaptic, plasticity.”
  3. entities: A list of named entities. ["Elon Musk", "Tesla", "SpaceX"].
  4. complexity: e.g., “Beginner” | “Advanced” | “PhD”. Helps the agent match the user’s expertise level.

Example Frontmatter

---
title: "The Physics of Black Holes"
summary: "A technical overview of event horizons and Hawking radiation."
complexity: "PhD"
entities:
  - Stephen Hawking
  - Albert Einstein
tags: ["Astrophysics", "Gravity"]
---

The Retriever’s Shortcut

Many RAG systems index the Frontmatter separately or weight it more heavily. By putting your core concepts in key-value pairs, you are essentially hand-feeding the indexer. You are saying, “This is exactly what this file is about.”
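
The separation itself is mechanically simple. A sketch of the extraction step, using naive line parsing for flat keys only (a real indexer would use a proper YAML parser for nested fields like `entities`):

```python
# Minimal frontmatter extraction: split the metadata block from the body so a
# retriever can index (and up-weight) it separately. Naive flat-key parsing,
# not a full YAML parser.
def split_frontmatter(doc: str) -> tuple[dict[str, str], str]:
    head, _, body = doc.lstrip().removeprefix("---").partition("\n---")
    meta = {}
    for line in head.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().strip('"')
    return meta, body.strip()

doc = '''---
title: "The Physics of Black Holes"
summary: "A technical overview of event horizons and Hawking radiation."
---
Body text here.'''

meta, body = split_frontmatter(doc)
print(meta["title"])  # The Physics of Black Holes
```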

Read more →

Citation Flow in the Age of LLMs

In the era of PageRank, “Link Juice” or Citation Flow flowed through hyperlinks (<a> tags). It was a directed graph where node A voted for node B. In the era of Large Language Models (LLMs), the graph is semantic, and the “juice” flows through Co-occurrence and Attribution.

LLMs do not navigate the web by clicking links. They “read” the web during training. If your brand name appears frequently alongside authoritative terms (“reliable,” “expert,” “secure”) in high-quality text, the model learns these associations.

Read more →

Defining the New Standard for Machine-Readable Content

The World Wide Web was built on HTML (HyperText Markup Language). The “HyperText” part was designed for non-linear human reading—clicking from link to link. The “Markup” was designed for browser rendering—painting pixels on a screen. Neither of these design goals is ideal for Artificial Intelligence.

When an LLM “reads” the web, HTML is noise. It is full of <div>, <span>, class="flex-col-12", and tracking scripts. To get to the actual information, the model must perform “DOM Distillation,” a messy and error-prone process. We are witnessing the birth of a new standard for Machine-Readable Content.
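
A toy version of that distillation step can be built on the standard library’s HTML parser: drop tags, scripts, and styles, and keep only the text. This is a sketch; production pipelines use far more robust readability extraction:

```python
from html.parser import HTMLParser

# Minimal "DOM distillation": strip markup, scripts, and styles to recover
# the text an LLM actually needs from an HTML page.
class TextDistiller(HTMLParser):
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self.skipping = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skipping += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skipping:
            self.skipping -= 1

    def handle_data(self, data):
        if not self.skipping and data.strip():
            self.parts.append(data.strip())

raw_html = '<div class="flex-col-12"><script>track()</script><p>The price is $10.</p></div>'
distiller = TextDistiller()
distiller.feed(raw_html)
print(" ".join(distiller.parts))  # The price is $10.
```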

Read more →

Labeling Synthetic Media: C2PA and Beyond

As the internet floods with AI-generated content, the premium on human authenticity skyrockets. But how do you prove you are human? Or, conversely, how do you ethically label your AI content to maintain trust? Enter C2PA (Coalition for Content Provenance and Authenticity).

The Digital Watermark

C2PA is an open technical standard that allows publishers to embed tamper-evident metadata into media files (images, video, and soon text logs). This “digital watermark” proves:

Read more →

Tools for Measuring Generative Visibility

You cannot improve what you cannot measure. But how do you measure visibility in a chat box? Traditional rank trackers (SEMrush, Ahrefs) track positions on a SERP. They do not track mentions in a generated paragraph.

The New Tool Stack

We are building tools to probe LLMs with thousands of permutations of a query to calculate Generated Share of Voice (GSV).

The Methodology

  1. Define a Query Set: “Best CRM,” “CRM software,” “Sales tools.”
  2. Permutation: Use an LLM to generate 100 variations of these questions (“What CRM should I use if I am a startup?”).
  3. Probe: Run these 100 queries across GPT-4, Claude 3.5, and Gemini via API.
  4. Extraction: Parse the text output. Extract Named Entities (NER).
  5. Frequency Analysis: Calculate the frequency of your brand’s appearance vs. competitors.
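
Step 5 of the methodology can be sketched offline. The responses below are hypothetical; a real pipeline would collect them from the model APIs in step 3 and use proper NER rather than substring matching:

```python
from collections import Counter

# Frequency-analysis step of the GSV methodology: count how often each tracked
# brand appears across probe responses. Substring matching stands in for NER.
def brand_frequency(responses: list[str], brands: list[str]) -> Counter:
    counts = Counter()
    for text in responses:
        for brand in brands:
            if brand.lower() in text.lower():
                counts[brand] += 1
    return counts

responses = [  # hypothetical generations collected in the probe step
    "For a startup, HubSpot or Pipedrive are common picks.",
    "Salesforce is the enterprise default; HubSpot suits smaller teams.",
]
counts = brand_frequency(responses, ["HubSpot", "Salesforce", "Pipedrive"])
print(counts.most_common())  # [('HubSpot', 2), ('Salesforce', 1), ('Pipedrive', 1)]
```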

The “Share of Sentiment”

It is not just about frequency. It is about sentiment.

Read more →