How to Audit Your Brand's Visibility Across ChatGPT, Gemini, and Perplexity

A step-by-step guide to understanding how AI search platforms see your brand, where you're being recommended, and where you're invisible.

Auditing your brand's AI visibility starts with a simple question: when someone asks ChatGPT, Gemini, or Perplexity about your category, does your brand show up?


Most brands have no idea how they appear inside AI-generated answers. They track their Google rankings. They monitor their social mentions. They measure their paid media performance. But when a potential customer asks ChatGPT, Gemini, or Perplexity to recommend a product or service in their category, they have no idea whether their brand shows up at all.

That blind spot is a problem, and it's growing. AI search platforms are increasingly where people start their research, form opinions, and make decisions. If you're not visible in those conversations, you're losing influence at a stage of the buying journey you might not even know exists.

An AI visibility audit fixes this. It gives you a baseline: here's how AI platforms currently perceive your brand, here's where you're being recommended, here's where you're absent, and here's what's driving those outcomes.

This guide walks through how to run one yourself. No special tools required for the initial audit, though dedicated monitoring platforms become valuable once you move from one-time assessment to ongoing tracking.

Step 1: Define your query set

Before you start prompting AI tools, you need to decide what to ask them. The goal is to simulate the questions your potential customers are actually asking when they turn to AI for help.

Build a list of 20 to 30 queries across three categories.

Category queries are broad questions about your space. If you sell project management software, these might include "What are the best project management tools?" or "What software do companies use to manage remote teams?" If you're a local plumber, these might include "How do I find a reliable plumber?" or "What should I look for in a plumbing company?"

Comparison queries pit you against competitors directly. "How does [your brand] compare to [competitor]?" or "What's the difference between [your brand] and [competitor]?" These reveal how the AI frames your strengths and weaknesses relative to the competition.

Problem queries describe a need without naming any brand. "I need a CRM that integrates with Gmail and costs less than $50 per month" or "My kitchen sink is leaking and I need someone who can fix it today." These are the highest-value queries because they represent a customer at the point of need, and the AI's recommendation carries significant weight.

Write your queries in natural, conversational language. People don't type keyword strings into ChatGPT. They describe their situation and ask for help.
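The three query categories above can be assembled programmatically so your audit list stays consistent between runs. Here's a minimal sketch in Python; the brand names, competitor names, and query templates are hypothetical placeholders you'd replace with your own.

```python
# Build a query set from category, comparison, and problem templates.
# BRAND and COMPETITORS are illustrative placeholders, not real products.
BRAND = "AcmePM"
COMPETITORS = ["TrelloX", "AsanaY"]

# Category queries: broad questions about your space.
category_queries = [
    "What are the best project management tools?",
    "What software do companies use to manage remote teams?",
]

# Comparison queries: pit your brand against each competitor directly.
comparison_queries = [
    f"How does {BRAND} compare to {c}?" for c in COMPETITORS
] + [
    f"What's the difference between {BRAND} and {c}?" for c in COMPETITORS
]

# Problem queries: describe a need without naming any brand.
problem_queries = [
    "I need a CRM that integrates with Gmail and costs less than $50 per month",
]

query_set = category_queries + comparison_queries + problem_queries
```

Expanding templates this way makes it easy to grow the list toward the 20-to-30 query target while keeping each competitor covered by the same comparison phrasings.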

Step 2: Run the queries across multiple platforms

Take your query list and run every question through at least four AI platforms:

ChatGPT (chatgpt.com) holds roughly 80% market share among AI chatbots. If you're only going to check one platform, check this one. Use the free tier for a baseline, but be aware that the paid version with web browsing enabled may produce different results.

Google Gemini (gemini.google.com) is increasingly integrated into Google's search experience through AI Mode. Its recommendations carry particular weight because they're connected to the broader Google ecosystem.

Perplexity (perplexity.ai) is notable because it provides source citations with every answer. This makes it especially useful for understanding which web pages are influencing AI recommendations in your category.

Claude (claude.ai) rounds out the major platforms. Its responses sometimes differ significantly from the others, which can reveal interesting patterns about how different models weigh different sources.

For each query on each platform, document three things: whether your brand was mentioned, what was said about you (the exact language matters), and which other brands were recommended.
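A flat log with one row per query-platform pair keeps those three observations organized and feeds directly into the matrix in Step 3. This is one possible sketch using Python's standard csv module; the file name, field names, and sample row are assumptions.

```python
import csv

# One row per (query, platform) pair, capturing the three fields from Step 2.
FIELDS = ["query", "platform", "brand_mentioned", "exact_language", "other_brands"]

def log_result(writer, query, platform, mentioned, language, others):
    """Append one audit observation; `others` is a list of competitor names."""
    writer.writerow({
        "query": query,
        "platform": platform,
        "brand_mentioned": "yes" if mentioned else "no",
        "exact_language": language,
        "other_brands": "; ".join(others),
    })

with open("audit_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    # Illustrative entry only; the quoted language is a placeholder.
    log_result(writer, "What are the best project management tools?",
               "ChatGPT", True, "AcmePM is a popular choice for remote teams.",
               ["TrelloX"])
```

Recording the exact language in its own column pays off later, when you score sentiment and compare how each platform frames you.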

Step 3: Build your visibility matrix

Take the data from Step 2 and organize it into a simple spreadsheet. Rows are your queries. Columns are the AI platforms. Each cell records whether your brand was mentioned (yes/no), the position of your mention (first recommended, listed among several, or mentioned as an alternative), and the sentiment (positive, neutral, or negative).

This matrix gives you an immediate visual picture of your AI visibility. You'll likely notice patterns: perhaps you're consistently recommended on Perplexity but absent from ChatGPT, or you show up for category queries but not for problem queries.

Calculate a simple share of voice by dividing the number of times you were mentioned by the total number of queries, per platform and overall. Do the same for your top three to five competitors. This comparison is where the audit starts becoming actionable.
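The share-of-voice arithmetic is simple enough to run straight off your matrix. A sketch, assuming each platform's column has been reduced to a list of yes/no mention flags (the sample data below is made up):

```python
# Share of voice = mentions / total queries, per platform and overall.
# One boolean per query; the values here are illustrative only.
mentions = {
    "ChatGPT":    [True, False, True, False],
    "Gemini":     [False, False, True, True],
    "Perplexity": [True, True, True, False],
}

def share_of_voice(results):
    """Return (per-platform share, overall share) from mention flags."""
    per_platform = {p: sum(flags) / len(flags) for p, flags in results.items()}
    all_flags = [flag for flags in results.values() for flag in flags]
    overall = sum(all_flags) / len(all_flags)
    return per_platform, overall

per_platform, overall = share_of_voice(mentions)
print(per_platform)           # prints {'ChatGPT': 0.5, 'Gemini': 0.5, 'Perplexity': 0.75}
print(f"Overall: {overall:.0%}")
```

Run the same function on each competitor's mention flags and the gap between your overall figure and theirs becomes the headline number of the audit.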

Step 4: Analyze the citation sources

This step is where Perplexity becomes particularly valuable. Because Perplexity shows its sources, you can see exactly which web pages are influencing the AI's recommendations.

For every query where your brand was mentioned (or notably absent), look at the sources Perplexity cited. Are they pulling from your own website? From review sites? From industry publications? From comparison articles written by third parties?

For queries where competitors were recommended instead of you, look at their citation sources. This often reveals the gap: maybe a competitor has a comprehensive comparison page that gets cited, or strong presence on a review platform that you're absent from, or mentions in industry publications that you haven't been featured in.

You can approximate this analysis for the other platforms too, even though they don't show citations as explicitly. Ask follow-up questions like "What sources are you drawing from for that recommendation?" or "Why did you recommend [competitor] over [your brand]?" The AI will often explain its reasoning, giving you insight into what's driving its decisions.

Step 5: Audit your structured data

AI models are influenced by structured data, even if the connection isn't always direct. Schema markup on your website helps AI systems understand what your brand does, what products or services you offer, and how you relate to broader topics in your industry.

Check your website for the basics. Do you have Organization schema that clearly identifies your brand? Do your product or service pages have appropriate Product or Service schema? Do your articles have Article schema with proper author and publisher markup? Are your reviews marked up with Review schema?
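For reference, Organization markup is usually embedded as a JSON-LD script tag in the page head. Here's a minimal sketch that builds one in Python; every field value is a hypothetical placeholder, and real markup would use your brand's actual name, URLs, and profiles.

```python
import json

# Minimal schema.org Organization markup; all values are placeholders.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Project Management",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "description": "Project management software for remote teams.",
    # sameAs links tie your site to your profiles elsewhere on the web.
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://twitter.com/example",
    ],
}

# Embed the output in your page as:
# <script type="application/ld+json"> ... </script>
print(json.dumps(organization, indent=2))
```

The sameAs property matters for the consistency point below: it explicitly connects your site to the same entity on directories and social profiles.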

Beyond your own site, check the consistency of your brand information across the web. AI models assign higher confidence to information they find corroborated across multiple sources. If your business name, description, and service offerings are consistent across your website, Google Business Profile, industry directories, review sites, and social profiles, the AI is more likely to trust and surface that information.

Inconsistencies create confusion. If your website says you serve one area but your Google Business Profile says another, or if your product descriptions differ between your site and third-party listings, the AI has less confidence in your data and may default to competitors with cleaner information.

Step 6: Identify your content gaps

By this point in the audit, you should have a clear picture of where you're visible, where you're not, and what sources are driving AI recommendations in your space. The final step is translating that picture into a content and optimization roadmap.

Common gaps include missing comparison content (the AI can't recommend you in a comparison if no credible source has compared you to competitors), thin topical coverage (if your website doesn't cover the topics the AI associates with your category, you won't appear in those conversations), weak third-party presence (if the sources the AI trusts for your category don't mention you, you're invisible regardless of how good your own site is), and inconsistent entity data (if the AI can't confidently connect your brand to the right topics and categories, it won't recommend you).

For each gap, define a specific action. If you're missing from a key review platform, get listed. If competitors are getting cited for comparison content you don't have, create it. If your structured data is incomplete, fix it. If third-party sources in your category don't mention you, that's a content placement and PR opportunity.

Making this an ongoing practice

A one-time audit gives you a baseline. But AI search visibility is dynamic. Models update their training data. New content gets published and indexed. Competitors improve their presence. The AI's recommendations shift over time.

The audit process described above can be repeated monthly with a reduced query set (10 to 15 core queries) to track trends. If the scale of your operation justifies it, dedicated AI monitoring tools like those from Semrush, Peec AI, Otterly AI, or LLMrefs can automate much of this and provide daily tracking.

The point is to treat AI search visibility as a measurable channel, not an abstract concept. You measure your SEO performance. You measure your paid media performance. You measure your social media performance. AI search visibility deserves the same discipline, because the audience using these platforms to make decisions is growing every month.

Start with the audit. Build your baseline. Then optimize against it.


James Calder is the editor of The Search Signal, covering AI-powered search, generative engine optimization, and the future of brand discovery.
