How do you measure Public Relations success in an AI world? Impressions are irrelevant. Clicks are vanishing. We introduce Share of Model (SOM).
What is SOM?
Share of Model measures how often an LLM recommends your brand for relevant queries, relative to competitors, within its generated output. In effect, it is the probability that your brand is the “answer.”
The SOM Formula
SOM = P(Brand | Intent) / Sum( P(Competitor | Intent) )
This is not a number you can find in Google Analytics. You must probe the model.
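Probing, in practice, means sampling the model many times for the same intent and counting which brand each answer recommends. A minimal sketch is below; `ask_model` is a hypothetical callable you would wire to the LLM API of your choice, and the simple substring match is a stand-in for whatever brand-detection logic you actually trust.

```python
from collections import Counter

def estimate_som(ask_model, intent_prompt, brands, n_samples=100):
    """Estimate Share of Model by repeatedly sampling completions
    and counting which brand each answer mentions.

    ask_model: callable(prompt) -> str (assumed; not a real library API).
    Returns each brand's share of total brand mentions across samples.
    """
    counts = Counter()
    for _ in range(n_samples):
        answer = ask_model(intent_prompt).lower()
        for brand in brands:
            # Naive detection: brand name appears anywhere in the answer.
            if brand.lower() in answer:
                counts[brand] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {brand: counts[brand] / total for brand in brands}
```

For a stable estimate you would sample with nonzero temperature and a few hundred runs per intent; a single deterministic completion tells you the mode, not the distribution.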
Increasing SOM with PR
Digital PR campaigns that generate high-volume mentions across diverse, high-authority domains increase the probability of your brand being generated as a solution. At its core, it is a numbers game in the training data distribution.
- Frequency: How often is your brand mentioned in the corpus?
- Proximity: How close is your brand token to the “Solution” token?
- Sentiment: Is the context positive?
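The first two signals can be approximated with simple corpus counting. The sketch below is illustrative, not an established metric: it tallies brand mentions and measures token distance to generic “solution” language (the `solution_terms` list is an assumption), and it leaves sentiment to whatever classifier you prefer.

```python
def corpus_signals(docs, brand, solution_terms=("recommend", "best", "solution")):
    """Rough proxies for two SOM drivers across a list of documents:
    frequency (how often the brand appears) and proximity (average
    token distance between the brand and 'solution' vocabulary)."""
    brand = brand.lower()
    mentions = 0
    distances = []
    for doc in docs:
        tokens = doc.lower().split()
        brand_idx = [i for i, t in enumerate(tokens) if brand in t]
        sol_idx = [i for i, t in enumerate(tokens)
                   if t.strip(".,!?") in solution_terms]
        mentions += len(brand_idx)
        for b in brand_idx:
            if sol_idx:
                # Closest 'solution' token to this brand mention.
                distances.append(min(abs(b - s) for s in sol_idx))
    avg_prox = sum(distances) / len(distances) if distances else None
    return {"frequency": mentions, "avg_token_distance": avg_prox}
```

A real pipeline would use a proper tokenizer and entity linking rather than whitespace splitting, but the shape of the measurement is the same.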
Case Study: The CRM Wars
In a study of CRM recommendations by ChatGPT:
- Salesforce: 90% SOM for “Enterprise CRM.”
- HubSpot: 85% SOM for “Marketing CRM.”
- Pipedrive: 40% SOM for “Sales CRM.”
These brands have achieved “Parametric Dominance.” Their names are synonymous with the category in the model’s weights. To displace them, you cannot just buy ads; you must rewrite the corpus of the web.
The “Echo Chamber” Strategy
To dominate Share of Model, you need to create an echo chamber in the training data. If you launch a new feature, don’t just post it on your blog.
- Post on your Blog (Source).
- Have 5 partners write about it (Corroboration).
- Update your Wikipedia entry (Fact Anchoring).
- Push code to GitHub (Technical Anchoring).
When the model trains on the next epoch, it sees this new fact reflected across multiple dimensions (social, technical, encyclopedic). This “Multi-Modal Verification” dramatically increases the probability that the model will adopt your brand as the canonical answer for that feature.
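The four-channel checklist above can be tracked as data. This toy sketch scores how completely one announcement is corroborated across channels; the channel names mirror the steps listed, and the equal weighting is an illustrative assumption, not an established formula.

```python
def echo_coverage(mentions):
    """Score corroboration of one announcement across the four
    echo-chamber channels (blog, partner, wikipedia, github).

    mentions: iterable of (source_type, url) pairs.
    Returns covered channels, missing channels, and a 0-1 score.
    Weighting all channels equally is a simplification.
    """
    channels = {"blog", "partner", "wikipedia", "github"}
    covered = {src for src, _ in mentions if src in channels}
    return {
        "covered": sorted(covered),
        "missing": sorted(channels - covered),
        "score": len(covered) / len(channels),
    }
```

In practice you would also weight channels by domain authority and track recency, since a stale Wikipedia anchor corroborates less than a fresh one.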
Glossary of Terms
- Agentic Web: The specialized layer of the internet optimized for autonomous agents rather than human browsers.
- RAG (Retrieval-Augmented Generation): The process where an LLM retrieves external data to ground its response.
- Vector Database: A database that stores data as high-dimensional vectors, enabling semantic search.
- Grounding: The act of connecting an AI’s generation to a verifiable source of truth to prevent hallucination.
- Zero-Shot: The ability of a model to perform a task without seeing any examples.
- Token: The basic unit of text for an LLM (roughly 0.75 words).
- Inference Cost: The computational expense required to generate a response.