Most AI visibility conversations start with a single platform. An agency asks “is our client showing up on ChatGPT?” — and when the answer is yes, they call it done.
That’s the wrong question in 2026. A brand can appear consistently in ChatGPT while being completely invisible on Perplexity, Gemini, and Claude. Given that 71% of cited sources appear on only one AI platform, and that different platforms dominate different buyer segments, a single-platform snapshot gives you a dangerously incomplete picture.
The 4-Platform AI Citation Audit is the structured process for building the real baseline — mapping exactly where a client stands across all four major platforms, across the intent stages that matter, before any optimisation work begins. It’s the agency deliverable that turns AI visibility from an abstract concept into a client-ready report.
Why Four Platforms, and Why Now
The case for four-platform auditing comes down to three numbers.
71% — the share of cited sources that appear on only one AI platform (Profound research). If you audit only one platform, you miss the 71% gap story entirely.
41% — the rate at which ChatGPT, Perplexity, Gemini, and Claude agree on which brand to recommend for the same query. For 59% of queries, the platforms give different answers. A client visible on ChatGPT may be losing buyers on Perplexity.
7% — the share of cited sources that achieve cross-platform presence across all four platforms. The brands in that 7% are the category leaders in AI search. Showing a client where they sit relative to that benchmark is a powerful strategic conversation.
These numbers make the audit’s value self-evident before you’ve run a single prompt. The client either doesn’t know their cross-platform position (in which case the audit reveals it) or they do know (in which case the audit validates it). Either way, there’s a deliverable worth having.
Pre-Audit Setup: What You Need Before Running a Single Prompt
1. Build your prompt bank. Create 20–50 prompts spanning the client’s most important intent categories. Aim for approximately 75% unbranded (category-level queries that real buyers use) and 25% branded (direct brand queries). Unbranded prompts reveal true competitive position; branded prompts reveal how accurately AI describes the brand. A minimal record structure for tracking the bank is sketched after the table below.
Structure your prompts to cover at least five of the nine TAPM intent layers:
| Intent Layer | Example Prompts to Build |
|---|---|
| Category Awareness | "What are the best [category] tools for [buyer type]?" |
| Problem-Solution | "How do [buyer type] solve [specific problem]?" |
| Comparison | "[Client brand] vs [top competitor]: which is better for [use case]?" |
| Trust Validation | "Is [client brand] reliable? What do users say?" |
| Transactional | "How much does [client brand] cost for [team size/use case]?" |
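If you track the bank in a spreadsheet or script, a simple record per prompt keeps the 75/25 split and intent coverage auditable. Here is a minimal sketch in Python; the field names, example category, and the brand "AcmeCRM" are hypothetical, not a standard schema:

```python
# Illustrative prompt-bank entry. Field names and example prompts are
# hypothetical; adapt to the client's actual category and buyer language.
from dataclasses import dataclass

@dataclass
class Prompt:
    text: str          # the query exactly as a buyer would phrase it
    intent_layer: str  # e.g. "Category Awareness", "Comparison"
    branded: bool      # True for direct brand queries (~25% of the bank)

PROMPT_BANK = [
    Prompt("What are the best CRM tools for small agencies?", "Category Awareness", False),
    Prompt("How do small agencies keep client data in one place?", "Problem-Solution", False),
    Prompt("Is AcmeCRM reliable? What do users say?", "Trust Validation", True),
]

# Quick sanity check on the unbranded/branded balance
branded_share = sum(p.branded for p in PROMPT_BANK) / len(PROMPT_BANK)
print(f"Branded share: {branded_share:.0%}")  # target is roughly 25%
```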
2. Identify your top 3–5 competitors. Run the same prompts for competitors — not to track competitors in isolation, but because AI Share of Voice is competitive by nature. A client cited in 4 of 10 responses looks very different if their top competitor is cited in 8 of 10.
3. Decide on platforms. Audit all four major platforms: ChatGPT (auto mode for baseline behaviour), Perplexity, Gemini, and Claude. If the client’s buyers have a clear primary AI platform, weight that platform more heavily in your prompt count, but don’t skip the others; cross-platform gaps are the core finding.
4. Confirm robots.txt access. Before running: verify GPTBot, PerplexityBot, ClaudeBot, and Google-Extended are not blocked in the client’s robots.txt. A surprising share of brands unknowingly block AI crawlers — this is a quick fix that should precede any optimisation work.
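A quick way to verify this is to run the client’s robots.txt through a parser. The sketch below uses Python’s standard-library robotparser; the site URL is a placeholder, and the user-agent tokens are the ones the four platforms publish:

```python
# Minimal robots.txt check for the four AI crawlers, standard library only.
from urllib import robotparser

AI_CRAWLERS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]

def check_ai_crawler_access(site: str) -> dict:
    rp = robotparser.RobotFileParser()
    rp.set_url(site.rstrip("/") + "/robots.txt")
    rp.read()  # fetches and parses the live robots.txt
    # can_fetch() returns True if the given user agent may crawl the URL
    return {bot: rp.can_fetch(bot, site) for bot in AI_CRAWLERS}

print(check_ai_crawler_access("https://client-domain.example"))
# Any False value is a crawler block worth fixing before optimisation work
```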
The Audit Process: Step by Step
Step 1: Prompt Testing (Core)
Run every prompt across all four platforms in a systematic session. Document the following for each response:
Presence metrics:
- Is the client brand mentioned? (Yes / No)
- Where in the response? (First mention / Middle / End / Not mentioned)
- How is the brand described? (Capture the exact language — errors and misrepresentations matter)
- Is the brand cited with a source link, or just mentioned?
Competitive context:
- Which competitors are mentioned?
- How is the competitive landscape framed?
- Is the client brand the primary recommendation, a secondary option, or absent?
Source intelligence:
- What sources does the platform cite for its claims? (Record the actual URLs)
- Are these the client’s owned content, third-party directories, Reddit, Wikipedia, or something else?
Run each prompt at least twice per platform. AI responses are non-deterministic: a single run gives you one data point, while two or three runs give you a frequency signal.
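To keep documentation consistent across hundreds of responses, it helps to log every run against a fixed schema. Here is a minimal sketch, assuming one flat record per response; the field names are our own convention, not a standard:

```python
# Minimal per-response record for Step 1 logging. Extend as the
# engagement requires (timestamps, reviewer initials, etc.).
from dataclasses import dataclass, field

@dataclass
class AuditResponse:
    prompt_id: str
    platform: str                  # "chatgpt", "perplexity", "gemini", "claude"
    run_number: int                # 1, 2, 3: responses are non-deterministic
    brand_mentioned: bool
    mention_position: str          # "first", "middle", "end", "absent"
    brand_description: str         # exact language used, for error tracking
    cited_with_link: bool
    competitors_mentioned: list[str] = field(default_factory=list)
    cited_urls: list[str] = field(default_factory=list)
```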
Step 2: Answer Share Calculation
Answer share (AI Share of Voice) is the core deliverable metric. Calculate it for each platform separately and in aggregate:
Per platform: Answer Share = (Number of prompts where client brand is mentioned ÷ Total prompts run) × 100
Weighted by intent: Not all prompt types carry equal weight. Category Awareness and Problem-Solution prompts are entry-point dominant; a mention here anchors the buyer’s AI context. Weight these prompts at 1.5× when reporting aggregate Answer Share so the headline number reflects their outsized strategic value.
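As a concrete sketch of the calculation, the function below implements the per-platform formula with the optional 1.5× entry-point weighting; the input format is our own convention, matching the Step 1 record:

```python
# Answer Share per platform, with optional 1.5x entry-point weighting.
ENTRY_POINT_LAYERS = {"Category Awareness", "Problem-Solution"}

def answer_share(responses, weighted=False):
    """responses: list of (intent_layer, brand_mentioned) pairs for one platform."""
    total = hits = 0.0
    for intent_layer, mentioned in responses:
        w = 1.5 if weighted and intent_layer in ENTRY_POINT_LAYERS else 1.0
        total += w
        if mentioned:
            hits += w
    return round(100 * hits / total, 1) if total else 0.0

# Example: 10 prompts, 4 mentions, 2 of them on entry-point prompts
runs = [
    ("Category Awareness", True), ("Category Awareness", True),
    ("Problem-Solution", False), ("Comparison", True),
    ("Trust Validation", True),
] + [("Transactional", False)] * 5

print(answer_share(runs))                 # 40.0 (plain Answer Share)
print(answer_share(runs, weighted=True))  # 43.5 (entry-point mentions count 1.5x)
```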
Versus competitors: Calculate Answer Share for each competitor using the same prompt set. Present the competitive gap clearly — not as a ranking, but as a share:
| Brand | ChatGPT Answer Share | Perplexity Answer Share | Gemini Answer Share | Claude Answer Share |
|---|---|---|---|---|
| Client brand | X% | X% | X% | X% |
| Competitor A | X% | X% | X% | X% |
| Competitor B | X% | X% | X% | X% |
This table is the centrepiece of your client report. It makes the cross-platform position immediately legible.
Step 3: Citation Source Analysis
For every citation the AI platforms include in their responses, record the source domain. Aggregate this into a citation source map:
- Client-owned content: How much of the AI’s brand understanding comes from the client’s own website?
- Third-party controlled sources: Reddit, G2, Capterra, Trustpilot, Wikipedia — the sources the AI trusts that the client doesn’t control
- Outdated or inaccurate sources: Press releases from 2022, competitor comparison posts, inaccurate forum threads — sources the AI is using that could actively harm the client’s positioning
The citation source analysis answers a question most clients have never considered: when AI talks about my brand, what is it actually reading? In many cases, the AI is building its understanding of a brand from a G2 review page, an outdated TechCrunch mention, or a Reddit thread discussing a problem the brand has since fixed. Identifying this is immediately actionable.
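A short script can turn the recorded URLs into the citation source map. Here is a sketch, assuming you logged cited URLs per response as in Step 1; the client domain and third-party list are illustrative starting points, not exhaustive:

```python
# Bucket recorded citation URLs into the three categories above.
from collections import Counter
from urllib.parse import urlparse

CLIENT_DOMAINS = {"client-domain.example"}  # placeholder for the client's site
THIRD_PARTY = {"reddit.com", "g2.com", "capterra.com",
               "trustpilot.com", "wikipedia.org"}

def categorise(url: str) -> str:
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host in CLIENT_DOMAINS:
        return "client-owned"
    if any(host == d or host.endswith("." + d) for d in THIRD_PARTY):
        return "third-party controlled"
    return "other (review for age and accuracy)"

def citation_source_map(cited_urls):
    return Counter(categorise(u) for u in cited_urls)

print(citation_source_map([
    "https://www.g2.com/products/acmecrm/reviews",       # hypothetical URLs
    "https://client-domain.example/pricing",
    "https://old-tech-blog.example/2022/acmecrm-launch",
]))
```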
Step 4: Entity Consistency Audit
AI platforms cross-reference brands across multiple sources to verify entity accuracy. Run a quick consistency check across:
- Website About/Home page description
- LinkedIn company profile description
- Wikipedia or Wikidata entry (if exists)
- Google Business Profile (if applicable)
- G2/Capterra profile descriptions
- Crunchbase profile
If the brand is described differently across these sources — different founding year, different core product description, different primary use case — AI systems treat this as a trust signal problem. Inconsistent entity signals lead to vague or inaccurate AI descriptions. Fixing entity consistency is one of the fastest citation quality improvements available.
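For a handful of sources this check is manual, but the logic is simple enough to script: collect the same fields from each profile and flag any field where the values disagree. A sketch with hypothetical values:

```python
# Quick consistency check: flag fields whose values disagree across sources.
# All source names and values shown are hypothetical.
ENTITY_SOURCES = {
    "website":    {"founded": "2019", "product": "workflow automation for agencies"},
    "linkedin":   {"founded": "2019", "product": "workflow automation for agencies"},
    "crunchbase": {"founded": "2018", "product": "project management software"},
}

def entity_inconsistencies(sources):
    fields = {f for desc in sources.values() for f in desc}
    return {
        f: vals
        for f in fields
        if len(vals := {d[f] for d in sources.values() if f in d}) > 1
    }

print(entity_inconsistencies(ENTITY_SOURCES))
# Each entry ({'founded': ..., 'product': ...}) is a trust-signal gap to fix
```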
Step 5: Gap Prioritisation
Not every gap needs equal attention. Prioritise your findings using this hierarchy:
Priority 1: Category Awareness gaps on the client’s primary AI platform
If the client is absent from Category Awareness prompts on the platform their buyers use most, this is the most strategically critical gap. These are entry-point prompts: buyers who encounter a competitor here carry that first mention through their entire research session.
Priority 2: Competitive disadvantage on Comparison prompts
If competitors are consistently named as the primary recommendation on Comparison prompts while the client appears second or is absent, buyers in active evaluation mode are being directed away. This is a near-purchase gap with direct pipeline implications.
Priority 3: Citation source quality problems
If the AI is building its brand understanding from outdated or third-party-controlled content, owned-content optimisation is the fix. This is medium-priority but often the fastest win.
Priority 4: Platform-specific complete absence
If the client has zero presence on a given platform, investigate the cause before prescribing a fix. Is it a robots.txt block? A Stage 1 community discovery gap? A platform-specific source preference the client hasn’t addressed?
The Deliverable: What the Client Report Looks Like
The 4-Platform AI Citation Audit produces a structured report covering:
- Executive summary — three sentences: current cross-platform Answer Share, biggest competitive gap, top recommended action
- The Answer Share table — client vs. competitors across all four platforms, by intent layer
- The citation source map — what the AI is reading to build its brand understanding, categorised by source type
- Entity consistency findings — inconsistencies in how the brand is described across its profiles and listings
- The priority action list — gap-to-action mapping, ordered by strategic impact
This report is the starting point for every AI visibility engagement. Run it at onboarding to establish baseline. Re-run it quarterly to track movement. The delta between baseline and current quarter is your Share of Synthesis growth story — the evidence that AI optimisation work is moving the metric that matters.
Key Takeaways
- 71% of cited sources appear on only one AI platform. 41% agreement rate across platforms for the same query. 7% achieve cross-platform presence. These three numbers are the business case for four-platform auditing.
- The audit has five components: prompt testing, answer share calculation, citation source analysis, entity consistency audit, and gap prioritisation. Each produces a distinct client-facing finding.
- Answer Share is the core deliverable metric — calculated per platform and weighted by intent, with entry-point (Category Awareness) prompts at 1.5× weight.
- Citation source analysis answers the question clients never think to ask: when AI talks about my brand, what is it actually reading? The answer is often third-party, outdated, or inaccurate content the client doesn’t control.
- The Priority 1 gap is always Category Awareness absence on the primary buyer platform — entry-point dominance determines the entire downstream AI session.
For the platform-by-platform content strategy to close the gaps the audit reveals, see Each AI Platform Eats Different Content. For the client conversation that delivers the audit findings, see How to Explain Falling Traffic to a Client.
Return to the AI Search Agency Strategy Hub for the full framework.