Share of Model: The Only AI Visibility Metric That Actually Means Something
Share of Model measures the percentage of relevant queries for which an LLM recommends your brand. Here's why it replaces rank position as the definitive AI search success metric.
What is Share of Model?
Share of Model (SoM) is the percentage of queries, across a defined query set, for which an LLM recommends your brand. If an AI model answers 300 relevant queries about your category and your brand appears in responses to 180 of them, your Share of Model is 180/300 = 60%.
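The base calculation is simple. A minimal sketch, using hypothetical query and brand data (`share_of_model` and the sample `responses` dict are illustrative, not part of any tool's API):

```python
# Minimal sketch of the basic Share of Model calculation.
# `responses` maps each query to the list of brands the LLM mentioned;
# the data here is hypothetical.

def share_of_model(responses: dict[str, list[str]], brand: str) -> float:
    """Percentage of queries whose response mentions `brand`."""
    if not responses:
        return 0.0
    hits = sum(1 for brands in responses.values() if brand in brands)
    return 100 * hits / len(responses)

responses = {
    "best project management software": ["BrandX", "BrandY"],
    "top project management tools for remote teams": ["BrandY"],
    "what project management app should I use": ["BrandX"],
}
print(round(share_of_model(responses, "BrandX"), 1))  # 2 of 3 queries -> 66.7
```

The complications come later: what counts as a "relevant query," what counts as a "recommendation," and how many runs you need before the percentage is stable.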
Traditional SEO had rank position. AI search has Share of Model. The conceptual shift matters: ranking implies a single, stable position on a list. Share of Model implies a probability distribution — how often does your brand appear when users ask the questions you care about?
Why rank position fails as an AI search metric
The “rank #1 in ChatGPT” obsession is leading marketing teams to misallocate budget at scale. A brand can appear as the primary recommendation in ChatGPT's response to one specific prompt phrasing — and appear zero times across the hundreds of slight reformulations of that same query that real users actually type.
Consider a brand that ranks first for “best project management software” in a ChatGPT response. That same brand may be absent from responses to “top project management tools for remote teams,” “project management software comparison 2026,” and “what project management app should I use.” A rank-1 snapshot hides a near-zero Share of Model.
The four dimensions of a proper Share of Model score
A raw mention rate is the foundation, but a complete Share of Model score has four dimensions. Two brands with identical mention rates can have radically different business outcomes depending on how and where they are mentioned.
Dimension 1: Mention frequency
The base metric. Across a statistically valid sample of clean-room prompt runs (minimum 50, ideally 100+), what percentage of responses include your brand? This is measured across prompt variations — not just one prompt phrasing, but the full semantic neighborhood of queries relevant to your category.
A brand appearing 60% of the time across 300 prompt variations is in dramatically better shape than one hitting 100% on one specific prompt and 0% on slight reformulations. The 100%/0% pattern is a brittle dependency on exact phrasing. The 60% consistent rate is a genuine content authority signal.
Dimension 2: Citation prominence
Not all mentions are equal. A primary recommendation (“Brand X is the leading platform for...”) has completely different business impact from a passing reference (“some tools like Brand X also...”). Share of Model scores must weight prominence to be meaningful.
The prominence breakdown for an accurate SoM measurement: primary recommendation (highest weight), shortlisted option in a curated list, comparative entity in a vs. context, passing reference with no endorsement, and negative citation with caveats. A brand predominantly appearing in passing references has a low effective Share of Model despite high raw mention frequency.
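One way to make the distinction concrete is to weight each mention by its prominence class before averaging. The weight values below are illustrative assumptions, not a published standard:

```python
# Hedged sketch: prominence-weighted Share of Model.
# The weight values are illustrative assumptions.

PROMINENCE_WEIGHTS = {
    "primary": 1.0,      # "Brand X is the leading platform for..."
    "shortlist": 0.7,    # named option in a curated list
    "comparative": 0.5,  # appears in a "vs." context
    "passing": 0.2,      # "some tools like Brand X also..."
    "negative": 0.0,     # mentioned only with caveats
}

def weighted_som(mentions: list) -> float:
    """`mentions` holds one prominence label per query response,
    or None when the brand did not appear at all."""
    if not mentions:
        return 0.0
    total = sum(PROMINENCE_WEIGHTS.get(m, 0.0) for m in mentions if m)
    return 100 * total / len(mentions)

# Same 60% raw mention rate, very different effective scores:
passing_heavy = ["passing"] * 6 + [None] * 4
primary_heavy = ["primary"] * 6 + [None] * 4
print(round(weighted_som(passing_heavy), 1))  # 12.0
print(round(weighted_som(primary_heavy), 1))  # 60.0
```

Both brands appear in 60% of responses, but the passing-reference brand's effective score is a fifth of the primary-recommendation brand's.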
Dimension 3: Intent coverage
The same brand can dominate informational queries (“what is X”) while being completely absent from decision-stage queries (“which X should I buy”). Intent coverage measures how evenly your brand appears across the five intent types that matter for business outcomes: informational, comparison, decision, local authority, and branded.
A brand with high informational coverage but low decision-stage coverage is losing the queries that drive conversions. Intent coverage gaps are the most actionable dimension of Share of Model — each gap points to a specific content type that needs to be created or strengthened.
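Measuring coverage per intent is a straightforward grouping exercise. A sketch, with hypothetical per-intent results showing the conversion-stage gap described above:

```python
# Sketch of intent-coverage measurement across the five intent types
# named above; the per-intent results here are hypothetical.
from collections import defaultdict

INTENTS = ["informational", "comparison", "decision", "local", "branded"]

def intent_coverage(results: list) -> dict:
    """`results` holds (intent_label, brand_mentioned) pairs, one per
    query run. Returns the mention rate per intent, which exposes
    decision-stage gaps that a single blended rate would hide."""
    hits, runs = defaultdict(int), defaultdict(int)
    for intent, mentioned in results:
        runs[intent] += 1
        hits[intent] += mentioned
    return {i: 100 * hits[i] / runs[i] for i in INTENTS if runs[i]}

results = [("informational", True)] * 8 + [("informational", False)] * 2 \
        + [("decision", True)] * 1 + [("decision", False)] * 9
coverage = intent_coverage(results)
print(coverage)  # informational 80%, decision 10%
```

A brand with this profile would look healthy on a blended mention rate (45%) while losing nine out of ten decision-stage queries.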
Dimension 4: Cross-engine consistency
A brand that earns strong Share of Model on Perplexity but is absent from Gemini has a fragile position. Cross-engine consistency measures how evenly your brand is recommended across ChatGPT, Perplexity, Gemini, and Claude for the same query set.
Inconsistency across engines usually traces to training data distribution. Gemini has deeper access to Google's knowledge graph; Claude weights publisher content differently; Perplexity has stronger real-time search integration. The brands with the highest cross-engine consistency have invested in entity disambiguation across all citation source types — structured data, community content, and editorial coverage.
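Cross-engine consistency can be summarized in a single score. A minimal sketch using a min/max ratio (the per-engine figures and the scoring choice are both assumptions; other dispersion measures, such as standard deviation, work equally well):

```python
# Illustrative sketch: per-engine Share of Model plus a simple
# consistency score (min/max ratio). The engine figures are invented.
ENGINE_SOM = {"ChatGPT": 55.0, "Perplexity": 60.0, "Gemini": 10.0, "Claude": 48.0}

def consistency(som_by_engine: dict) -> float:
    """Score in [0, 1]: 1.0 means identical SoM on every engine; low
    values flag the fragile single-engine positions described above."""
    values = list(som_by_engine.values())
    return min(values) / max(values) if max(values) else 0.0

print(round(consistency(ENGINE_SOM), 2))  # 10/60 -> 0.17
```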
How to measure Share of Model at scale
Manual SoM measurement — running queries yourself in different AI tools and logging results — only produces dimension 1 data, imprecisely, and at an unsustainable time cost. Accurate SoM measurement at scale requires: a query library covering the full semantic neighborhood of your category, clean-room session management, statistical repetition per query, and structured output parsing to classify mention prominence automatically.
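The measurement loop itself looks roughly like the sketch below. `run_clean_room_query` is a hypothetical stand-in for an API call made in a fresh session with no conversation history; everything here is an assumption about how such a pipeline could be wired, not any real tool's API:

```python
# Sketch of the measurement loop described above: iterate the query
# library, repeat each query for statistical validity, and log raw
# responses for downstream prominence classification.
import random

def run_clean_room_query(engine: str, prompt: str) -> str:
    # Placeholder: a real implementation would call the engine's API
    # in a fresh session. Here we fake a response for illustration.
    return random.choice(["BrandX is the leading option.", "Try BrandY."])

def measure(queries: list, engine: str, runs_per_query: int = 50) -> list:
    """Repeat each query `runs_per_query` times and log (prompt,
    response) pairs for a downstream prominence classifier."""
    log = []
    for prompt in queries:                  # query library
        for _ in range(runs_per_query):     # repetition per query
            log.append((prompt, run_clean_room_query(engine, prompt)))
    return log

log = measure(["best project management software"], "chatgpt", runs_per_query=5)
print(len(log))  # 1 query x 5 runs = 5 logged responses
```

The downstream step, classifying each logged response into the prominence categories above, is where structured output parsing earns its keep.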
RankAsAnswer calculates Share of Model across all four dimensions using automated clean-room testing combined with structural signal analysis. The structural signal layer is particularly important — it gives you a leading indicator of where your SoM is heading before the LLM data catches up, enabling proactive improvement rather than reactive monitoring.
What good looks like
Based on data across thousands of brands measured in RankAsAnswer: a Share of Model above 40% across a brand's core query set represents strong AI visibility. Below 15% is effectively invisible. The median brand we measure starts around 18-22% and reaches 50%+ after 3-6 months of systematic AEO work.
The compounding effect