How to measure share of voice in AI search (formula + setup)
Share of voice in AI search is a different number than Google SOV. The math, the surfaces, and a defensible weekly measurement workflow.
Share of voice (SOV) is a meaningful metric in AI search — but it's calculated differently than in SEO, paid media, or social. This post is the framework we use.
What share of voice means here
In traditional SEO, share of voice usually means: of all the organic search clicks happening in your category, what fraction go to you?
In AI search, there are no clicks. There are mentions. So share of voice becomes:
SOV (you) = mentions of your brand ÷ total competing-brand mentions, across a defined prompt set, over a defined window.
Three things to define carefully: the prompt set, the brand set, and the window.
Defining the prompt set
Don't compute SOV across "all queries about your category" — that's not measurable. Define a specific prompt universe.
A good prompt universe for B2B SaaS has roughly:
- 10–20 high-intent comparison prompts ("X vs Y", "best Z for [vertical]")
- 10–20 buyer-shortlisting prompts ("what's the best tool for [job]")
- 10–20 troubleshooting / decision prompts ("how do I evaluate [category]")
- 10–20 use-case prompts ("[category] for [specific persona]")
So roughly 40–80 prompts is enough to produce a stable SOV number. Fewer than 20 and the metric is noisy. More than 100 and you're paying for runs that don't move the score.
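The sizing rule above is easy to encode as a sanity check. A minimal sketch — the prompt records and bucket labels are illustrative, not a Tracemetry format:

```python
# Pin down the prompt universe up front, as a list of labeled prompts.
# The prompts and bucket names here are made-up examples.
PROMPT_UNIVERSE = [
    {"text": "best crm for nonprofits", "bucket": "shortlisting"},
    {"text": "salesforce vs hubspot for smb", "bucket": "comparison"},
    {"text": "how do i evaluate a crm vendor", "bucket": "decision"},
    {"text": "crm for solo consultants", "bucket": "use_case"},
]

def check_universe(prompts):
    """Apply the post's sizing guidance: <20 is noisy, >100 is wasteful."""
    n = len(prompts)
    if n < 20:
        return "too few: metric will be noisy"
    if n > 100:
        return "too many: extra runs won't move the score"
    return "ok"

print(check_universe(PROMPT_UNIVERSE))
```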
Tracemetry's Pro plan tracks 250 prompts per workspace, which is well into the stable-metric zone for most categories.
Defining the brand set
You need to decide who counts as a competitor for SOV math.
Include:
- Your top 3–5 direct competitors (same category, similar size, similar buyer)
- The category leader, even if you don't see them as a direct rival
- Any clear substitute (different mechanism, same problem solved)
Don't include:
- Every brand that ever appeared in any AI answer (you'd be diluted)
- Adjacent categories that aren't competing for the same decision
- Brands that appear in your answers because of unrelated context
Actively hide noisy mentions. In Tracemetry, every workspace has a "Competitor" list with approve/hide actions. Approving a competitor adds them to your SOV denominator; hiding them removes them.
Defining the window
AI answers change week to week — model updates, browsing-layer freshness, competitor moves. Compute SOV on a fixed weekly window:
- Run all tracked prompts once a week
- Aggregate mentions across the week's runs
- Report SOV as a 7-day rolling metric
Daily is too noisy. Monthly is too slow.
The math
For a defined prompt set P, a brand set B containing your brand plus k competitors, and window W:
mentions(brand) = count of prompt runs in W where brand appears in the answer
sov(brand) = mentions(brand) ÷ sum(mentions(b) for b in B)
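The two lines above translate directly into code. A minimal sketch — the run-data shape (one set of mentioned brands per prompt run) is a hypothetical representation, not a Tracemetry export format:

```python
from collections import Counter

def sov(runs, brand_set):
    """runs: one set of brand names mentioned per prompt run in window W.
    Returns each brand's share of total approved-brand mentions."""
    mentions = Counter()
    for mentioned in runs:
        # Only brands in the approved brand set count toward the denominator.
        for brand in mentioned & brand_set:
            mentions[brand] += 1
    total = sum(mentions.values()) or 1  # guard against an empty window
    return {b: mentions[b] / total for b in brand_set}

# Four prompt runs in one weekly window.
runs = [{"us", "rival_a"}, {"rival_a"}, {"us", "rival_b"}, {"rival_a"}]
print(sov(runs, {"us", "rival_a", "rival_b"}))
```

Note that shares always sum to 1 across the brand set — SOV is zero-sum by construction, which is why the denominator definition matters so much.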
In practice you also want to weight by prompt importance. A high-intent shortlisting prompt (e.g., "best CRM for nonprofits with under $500/mo budget") matters more than an informational one (e.g., "what does CRM stand for").
Tracemetry assigns intent weights automatically — but you can do this manually by labeling each prompt as top-funnel, mid-funnel, or bottom-funnel and weighting accordingly:
- Top-funnel: 1x
- Mid-funnel: 2x
- Bottom-funnel: 4x
Bottom-funnel prompts are where SOV affects revenue most directly.
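The 1x/2x/4x weighting above is a one-line change to the raw math: each mention contributes its prompt's weight instead of 1. A sketch, with illustrative run data:

```python
# Funnel weights from the post: bottom-funnel mentions count 4x.
WEIGHTS = {"top": 1, "mid": 2, "bottom": 4}

def weighted_sov(runs, brand_set):
    """runs: (funnel_stage, set_of_brands_mentioned) per prompt run."""
    score = {b: 0.0 for b in brand_set}
    for stage, mentioned in runs:
        w = WEIGHTS[stage]
        for brand in mentioned & brand_set:
            score[brand] += w
    total = sum(score.values()) or 1.0
    return {b: s / total for b, s in score.items()}

runs = [
    ("bottom", {"us", "rival_a"}),  # shortlisting prompt, weight 4
    ("top", {"rival_a"}),           # informational prompt, weight 1
    ("mid", {"us"}),                # evaluation prompt, weight 2
]
print(weighted_sov(runs, {"us", "rival_a"}))
```

In this toy data the raw mention counts are tied 2–2, but the weighted score favors the brand winning the higher-intent prompts.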
Per-surface SOV
You'll often want to compute SOV separately per assistant:
- SOV on ChatGPT
- SOV on Claude
- SOV on Perplexity
- SOV on Gemini
The numbers can diverge significantly. We've seen B2B SaaS customers with 45% SOV on Perplexity and 12% on ChatGPT for the same prompt set. That divergence is itself a signal — usually Perplexity rewards them because they have a strong recent content motion, and ChatGPT lags because they're not yet heavily represented in the training data.
Reporting per-surface SOV separately tells you which assistant to prioritize work on.
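Per-surface SOV is the same calculation grouped by assistant. A sketch assuming hypothetical (surface, brands_mentioned) records from a week's runs:

```python
from collections import defaultdict, Counter

def sov_by_surface(rows, brand_set):
    """rows: (surface, set_of_brands_mentioned) per prompt run.
    Returns {surface: {brand: share}} so divergence is easy to spot."""
    per_surface = defaultdict(Counter)
    for surface, mentioned in rows:
        for brand in mentioned & brand_set:
            per_surface[surface][brand] += 1
    result = {}
    for surface, counts in per_surface.items():
        total = sum(counts.values()) or 1
        result[surface] = {b: counts[b] / total for b in brand_set}
    return result

rows = [
    ("perplexity", {"us", "rival_a"}),
    ("perplexity", {"us"}),
    ("chatgpt", {"rival_a"}),
    ("chatgpt", {"us", "rival_a"}),
]
print(sov_by_surface(rows, {"us", "rival_a"}))
```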
What SOV looks like in practice
A few patterns we see:
- Category leaders: 40–70% SOV across their prompt set. Stable. Hard to move quickly.
- Top-3 in category: 15–30% SOV. Mobile — can swing 5–10 points in a quarter with focused work.
- Top-10 challengers: 3–12% SOV. Highest leverage to move — work compounds visibly week over week.
- Just-getting-started: 0–5% SOV. Often the first 4 weeks of focused content work can move this to 15–25%.
The leverage is highest where the starting SOV is lowest.
Common measurement mistakes
Three things to avoid.
Counting yourself in noisy contexts
If a prompt says "what is a CRM?" and the answer mentions your brand only as an aside ("similar products include..."), counting that as a "mention" inflates your number.
The fix: parse mentions with confidence, and gate low-confidence mentions for manual review. Tracemetry routes ambiguous mentions to a /visibility/uncertain inbox where you approve or reject.
Using too broad a brand set
If your competitor set includes every brand that's ever appeared in any answer, your SOV looks tiny — but the math is meaningless. The fix is to define the competitor set carefully and stick to it.
Cherry-picking prompts
Running 10 prompts you know you win and reporting "we have 80% SOV" is unfalsifiable. Define the prompt universe up front, run all of it, report the aggregate. Don't add prompts after the fact to inflate the number.
Tracking it without a tool
You can compute SOV manually:
- Make a spreadsheet with 25–40 prompts in the leftmost column
- Every Friday, paste each prompt into ChatGPT, Claude, and Perplexity
- For each cell, record your_brand: yes/no, competitor_1: yes/no, etc.
- Sum and compute
Doable. Tedious. Takes ~3 hours a week for a 30-prompt set.
Tracking it with Tracemetry
Submit your domain at /audit — we'll run the first three prompts and show you what SOV looks like as a snapshot.
For continuous tracking, the Pro plan ($199/mo) runs 250 prompts × 4 surfaces every week, computes SOV by surface and by buyer stage, and dashboards the weekly delta. Cancel any time.
What you actually do with the SOV number is the same either way: identify which prompts you're losing, write content that wins them, re-measure next week. The metric is just the scoreboard.
See your own AI visibility today.
Free public report. 60 seconds. No signup. Or get started on Pro to track 250 prompts continuously.