How Do I Run a 7-Engine AI Visibility Check on My Own Business?
How-To Guide

Jake McCluskey · Beginner · 25 min

Most small business owners I talk to have spent real money on SEO. They show up on page one of Google. Their Google Business Profile is complete, their reviews are solid, and their website was professionally built. Then a buyer opens ChatGPT, types "best [category] in [city]," and the business is not in the answer. Not mentioned. Not cited. The buyer picks someone from the list the AI gave them.

This is the AI visibility gap, and it is not fixed by traditional SEO. The engines that more and more buyers are using first are not Google. They are Perplexity, ChatGPT, Gemini, Claude, and Bing Copilot. Each one pulls from different sources, weights authority differently, and cites different businesses. If you have never checked what they say about your category, you do not actually know where you stand.

This guide walks through a manual 7-engine probe you can run today in 25 minutes with no paid tools and no technical help. You will come out with a citation score, a ranked list of which engines do and do not surface your business, and a clear picture of the gap you are dealing with. The AI Visibility Gap white paper goes deeper on why the gap exists and the structural changes driving it. This guide is the hands-on starting point.

Why this matters for small businesses specifically

The conventional wisdom is that AI search matters for enterprise brands, not the local accountant or regional manufacturer. That was true two years ago. It is not true now. A 2024 survey by BrightEdge found that AI-generated answers were appearing in over 50% of informational searches in the United States, and that share is growing. When a business owner asks Perplexity "who handles payroll for companies under 50 employees in Phoenix," the engine gives three names. Those three businesses win the contact. Everyone else does not.

Small businesses are actually more exposed to this shift than large ones, because they cannot buy their way in. Enterprise brands have thousands of web mentions, deep backlinking, and years of structured data feeding AI training sets. The 12-person HR consulting firm has its website, its Google Business Profile, and whatever content it has published. If that content base is thin or inconsistent, the engines pass right over it. Running this check yourself is the first step to understanding where you actually stand, not where you assume you stand.

What the 7-engine probe actually does

The probe is a structured set of queries run across seven AI engines, each logged in a simple scoring sheet. The goal is not to understand how the engines work technically. The goal is to find out whether your business, or businesses like yours in your category, appear in the answers buyers are getting.

The seven engines cover two distinct types of AI answers:

  • Live-web AI engines (Perplexity, Bing Copilot, Google AI Overviews, Google AI Mode): these pull from current web content and tend to cite sources in real time. Good at surfacing recent, locally present businesses.
  • Training-data AI engines (ChatGPT, Claude, Gemini): these draw from large language model training data, which has a knowledge cutoff and reflects accumulated web authority over time. Less current but often more influential for category-level queries.

Think of the probe as a buyer simulation. You are asking what a real buyer would ask, reading what the buyer would read, and noting whether you appear.

Before you start

What you need:

  • Seven browser tabs or windows. Creating an account takes a few minutes for any service where you do not already have one. Free-tier accounts work for all seven.
  • A spreadsheet or notebook with columns for: engine name, query 1 through query 5 results (mentioned / not mentioned), citation count, notes.
  • Your five probe queries written before you start. See the section below on building those.
  • Your city or metro and your specific category. "Marketing agency in Nashville" is a probe that tells you something. "Marketing agency" is too broad.
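If you would rather start from a ready-made file than a blank spreadsheet, here is a minimal Python sketch that writes an empty scorecard CSV matching the columns above (the file name and the short column labels are my own choices, not part of the guide):

```python
import csv

# One row per engine, matching the seven engines in the probe.
ENGINES = ["Perplexity", "Bing Copilot", "Google AI Overviews",
           "Google AI Mode", "ChatGPT", "Claude", "Gemini"]
# Columns mirror the scoring sheet described above: five query results,
# a citation count, and free-form notes.
COLUMNS = ["engine", "q1", "q2", "q3", "q4", "q5", "citation_count", "notes"]

def make_blank_scorecard(path="scorecard.csv"):
    """Write a blank scoring sheet: header plus one empty row per engine."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        for engine in ENGINES:
            writer.writerow([engine] + [""] * (len(COLUMNS) - 1))

make_blank_scorecard()
```

Open the file in any spreadsheet app and fill in "mentioned" / "not mentioned" as you go.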

One thing to settle before you go any further: what you do with the results. This probe is a diagnostic, not a guarantee. It shows you the gap. Closing the gap involves content, citations, and sometimes structured data work. If you want to read why that work matters before you start, the AI Visibility Gap white paper covers the mechanics. If you want to jump straight to the numbers first, run the probe and then read the white paper.

For now, the general compliance point: as you run this check, you are sending queries, not data. Do not paste customer lists, internal pricing sheets, or employee records into any AI engine during this process. Keep the probe query-only. We have a dedicated compliance section below covering data hygiene for the next steps.

Task 1: Build your five probe queries

The single most common mistake in running a visibility check is using queries that do not match what buyers actually type. Most business owners ask something like "[their business name] in Google" or "[exact category] near me." Buyers ask differently.

The failure pattern: a Nashville-based benefits consulting firm runs a probe with the query "[Firm Name] benefits consulting Nashville" and concludes they are invisible because the AI does not mention them by name. The actual problem is that buyers are not searching for them by name. They are searching "benefits consultants for companies under 100 employees Nashville" or "who handles employee benefits setup for a growing small business."

What to ask an AI tool to build your five queries:

I run a [category and brief description] in [city/region]. My typical buyer is [describe the decision-maker: title, company size, buying situation]. Write five search queries this buyer would type into an AI engine when they are in the early stage of finding a vendor. Make the queries sound like real buyer language, not marketing language. Vary the format: include at least one question format ("who" or "what"), one category-plus-location format, and one problem-description format.

Run that prompt in any AI engine. Take the output, edit it until the queries feel accurate, and write them down. These five queries are what you paste into each of the seven engines. Keep them consistent across all seven engines. That consistency is what makes the results comparable.

For scoring: a mention counts if the engine names your business, cites your website, or refers to a category leader in your specific area in a way that clearly points to you. A near-miss (mentioning your category but not you) is noted separately. Out of 35 possible citations (7 engines x 5 queries), your score is the number of times you were actually mentioned.
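The tally itself is simple, but writing it down removes ambiguity. A sketch of the counting rule, assuming you log each result with one of three labels of your own choosing (here "mention", "near_miss", "none" — illustrative shorthand, not prescribed by the guide):

```python
def citation_score(scorecard):
    """Total mentions out of 35, with near-misses counted separately.

    scorecard: dict mapping each engine name to a list of 5 query
    results, each "mention", "near_miss", or "none".
    """
    all_results = [r for results in scorecard.values() for r in results]
    mentions = all_results.count("mention")
    near_misses = all_results.count("near_miss")
    return mentions, near_misses

# Example: one engine's worth of results, for brevity.
print(citation_score({
    "Perplexity": ["mention", "near_miss", "none", "mention", "none"],
}))  # (2, 1)
```

Near-misses do not count toward the score, but they are the leading indicator: an engine that knows your category but not your name is closer to citing you than one that knows neither.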

Task 2: Run the live-web engine probe (Perplexity and Bing Copilot)

Start with Perplexity and Bing Copilot. These two engines run live web searches, cite sources directly, and tend to surface local and niche businesses more reliably than the training-data engines. They are also the most forgiving for a business with a recent content push but limited historical authority.

For Perplexity: go to perplexity.ai. Paste your first query. Read the answer. Note whether your business is mentioned and which source Perplexity cited. Scroll down to the "Sources" section and see which sites Perplexity pulled from. If your business is not mentioned, look at which sites are cited and note them in your sheet. Repeat for all five queries.

For Bing Copilot: go to bing.com/chat (which now redirects to copilot.microsoft.com). Paste the same five queries in the same order. Bing Copilot is powered by Microsoft's integration of GPT-4 and the Bing web index. The citation pattern differs from Perplexity because Bing's index is structured differently from what Perplexity searches.

What to look for in the results:

Perplexity probe template: Paste each query directly. After the answer appears, note: (1) is my business mentioned by name? (2) if not, which businesses are mentioned? (3) which sources appear in the sidebar? (4) do any of those sources link to my website, my Google Business Profile, my Yelp page, or a directory where I am listed?

If you are not cited but a competitor is, click the competitor's cited source and note what it is. Often it is a review site, a directory listing, or a piece of content the competitor published. That is your gap to close.

Task 3: Run the Google AI probe (AI Overviews and AI Mode)

Google is running two AI answer formats simultaneously, and they work differently. AI Overviews appear as a box above traditional search results on many queries. Google AI Mode is a separate search experience that gives a more conversational, longer-form answer.

For AI Overviews: go to google.com, sign in with a Google account, and search each of your five queries. Look for the AI-generated box that appears at the top of results. Not every query triggers an AI Overview, but informational and recommendation queries usually do. Note whether your business is mentioned in the AI Overview, whether the AI cites a source that includes your business, and whether the traditional search results (below the AI box) include your site.

For Google AI Mode: in the Google search bar, look for the "AI Mode" option (it appears as a toggle or tab in the Google search interface for users in the United States). Switch to AI Mode and run the same five queries. Google AI Mode is newer and tends to give more detailed, multi-paragraph answers with richer citation patterns.

What to look for:

Google AI probe template: For each query, note whether an AI Overview box appeared (yes/no), whether your business appeared in that box, whether your Google Business Profile appeared in the map pack below it, and whether your website appears in the traditional results below the map pack. Repeat for AI Mode.

Google AI Overviews pull heavily from Google Business Profile data, structured markup on your website, and sources Google's Knowledge Graph already trusts. If your Google Business Profile is incomplete, your chances of appearing in AI Overviews for local queries drop significantly. Check your profile while you are running this step.

Task 4: Run the training-data engine probe (ChatGPT and Claude)

ChatGPT and Claude draw from large language model training data. They are less dependent on what was published last week and more dependent on what has been consistently present across the web for years. A business with a newer web presence or recent content push may not appear here yet, even if it shows up in Perplexity.

For ChatGPT: go to chat.openai.com, which now redirects to chatgpt.com. Use the free tier or ChatGPT Plus. Paste your first query. Read the full answer. Note whether your business is named, whether it lists competitors by name, and whether it describes your category in a way that includes or excludes you.

For Claude: go to claude.ai. Run the same five queries. Claude's training data and web-search capabilities differ from OpenAI's, so the results may differ meaningfully.

A key difference between these engines and the live-web engines: ChatGPT and Claude may give answers that are confident but outdated. If your business moved, rebranded, or changed focus in the last 18 months, the training-data engines may still reflect your older state. Note that in your scorecard.

What to ask to get the most useful results:

For each query, if neither ChatGPT nor Claude mentions your business by name, follow up with: "Are you aware of [your business name] in [city]? What can you tell me about them?" If the engine says it has no information, that is diagnostic. If it has partial information that is outdated, note what is wrong. That tells you what training data needs to be refreshed by building newer, authoritative web content.

Do not argue with the engine or try to correct it during the probe. You are documenting the current state, not changing it.

Task 5: Run the Gemini probe and compile your score

Gemini (Google's AI engine at gemini.google.com) is distinct from Google AI Overviews. Gemini is a standalone AI assistant that uses Google's model and a mix of training data and live search. It tends to surface different results than Google AI Overviews and is worth checking separately because buyers who use Google's AI products may land in Gemini rather than in search.

For Gemini: go to gemini.google.com. Sign in with your Google account. Run the same five queries. Note the same data points: named mentions, competitor names surfaced, sources cited if any.

After Gemini, compile your full scorecard. Here is the scoring frame:

Scoring template: For each engine (Perplexity, Bing Copilot, Google AI Overviews, Google AI Mode, ChatGPT, Claude, Gemini), count how many of your five queries returned a mention of your business. Sum across all seven engines for a total citation count out of 35. Divide by 35 for your citation rate percentage.

What the score means in practice:

  • 0-5 citations (0-14%): your business has very low AI visibility. Buyers in your category using AI engines are not finding you. This is a priority gap.
  • 6-14 citations (17-40%): partial visibility. You are showing up in some engines on some queries. There is a real signal to build on, but the inconsistency means many buyers still miss you.
  • 15-25 citations (43-71%): solid visibility. You are a consistent presence across most engines on your best queries. The work now is filling the gaps in specific engines or query types.
  • 26-35 citations (74-100%): strong visibility. You are appearing consistently. The work now is maintaining quality and monitoring for new engines or query formats.
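Those bands are easy to encode if you want your scorecard to label itself. A small sketch matching the thresholds above (the tier wording is condensed from the list):

```python
def visibility_tier(citations):
    """Map a total citation count (0-35) to the band described above,
    plus the citation rate as a whole-number percentage."""
    if not 0 <= citations <= 35:
        raise ValueError("citation count must be between 0 and 35")
    rate = round(citations / 35 * 100)
    if citations <= 5:
        tier = "very low AI visibility"
    elif citations <= 14:
        tier = "partial visibility"
    elif citations <= 25:
        tier = "solid visibility"
    else:
        tier = "strong visibility"
    return tier, rate

print(visibility_tier(3))   # ('very low AI visibility', 9)
print(visibility_tier(18))  # ('solid visibility', 51)
```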

For a faster read, the free [GEO Visibility Checker](/geo-check) runs this same diagnostic automatically and gives you the score with specific gap analysis.

The small-business prompts that actually work for AI visibility checks

After running visibility probes for dozens of small businesses, the difference between a diagnostic that surfaces real gaps and one that produces misleading results comes down to four prompt decisions.

Specify the buyer's situation, not just the category. "Best accountant in Denver" surfaces different results than "accountant who specializes in retail businesses in Denver." Your buyers are not searching generic categories. They are searching their specific situation. If you only check generic queries, you miss the queries where your differentiation should matter most.

Include the geography every time. AI engines treat "who are the top HR consultants" as a national query. Your buyers mean local. Put the city in every query. This especially matters for Perplexity, Bing Copilot, and Google AI Overviews, which pull heavily from local web sources.

Run the follow-up query. If an engine does not mention your business in the initial answer, ask a direct follow-up: "What do you know about [business name] in [city]?" The answer to that follow-up tells you whether the engine knows you exist at all, or whether your category knowledge is just thin in the initial answer.

Note the citation sources, not just the names. Every time a competitor is cited, check which source the engine pulled from. Review sites, directory listings, local news mentions, and trade publication features show up repeatedly as citation drivers. Your gap is usually a source gap, not a content quality gap.

The small-business compliance non-negotiables

This section is short because the rule is simple, but it is the most important section in this guide.

Do not put any of the following into the consumer tier of any AI engine during or after this check:

  • Customer names, contact information, or purchasing history
  • Employee records, HR files, or compensation data
  • Internal pricing sheets, margin data, or vendor contracts
  • Unreleased product plans, pending bids, or confidential proposals
  • Financial statements or banking information
  • Any data covered by a customer NDA or confidentiality agreement

The practical workflow that respects these rules: during the probe itself, you are only typing queries that any buyer could type. That is safe. After the probe, when you start building content or prompts to improve your visibility, keep a clear line between what is public-safe (category expertise, process descriptions, general case examples) and what is private-only (named clients, specific financials, proprietary methods). AI tools can help you build content that improves visibility without ever touching the private side of the line.

For employment-related content specifically: if AI helps you draft materials about your team, hiring practices, or workplace culture for visibility purposes, make sure what goes out is accurate and consistent with what you would tell a job applicant. AI-generated content about an employer that overstates benefits, misrepresents the work environment, or contradicts actual employment terms can create compliance exposure with state labor boards and the FTC.

IP ownership: content you create with AI assistance is generally owned by you, but review your AI vendor's terms of service. Some consumer-tier tools retain rights to use your inputs for training. If you are creating proprietary frameworks or methodologies through AI sessions, use a Business tier account with a Data Processing Addendum.

If your business has signed an Anthropic Business agreement, an OpenAI Enterprise agreement, or a Microsoft Azure OpenAI agreement with a Data Processing Addendum, the data rules are different for that platform. Ask your IT lead or general counsel what is covered before running any AI workflow that involves customer data. Do not assume.

When NOT to run this as a one-time check

The 7-engine probe is a snapshot. There are situations where the snapshot misses the point.

  • When your category is extremely local and niche. A single-location pet groomer in a town of 4,000 may not appear in AI engines at all, and that may be fine if buyers in that market do not use AI engines to find groomers. Check whether your actual buyers use AI search before treating the citation gap as urgent.
  • When your business name matches a large brand. If you run a business called "Apex" anything, the AI engines will surface the national "Apex" brands first. The gap is real but the solution is differentiation in how you describe yourself online, not just content volume.
  • When you just launched or rebranded. Training-data engines like ChatGPT and Claude may reflect your old identity for another six to twelve months. The live-web engines (Perplexity, Bing Copilot) will update faster, but even those need time to index new content at scale. A freshly-rebranded business should run the probe, note the baseline, and rerun in 90 days rather than treating the first results as a verdict on the rebrand.
  • When you have a single data point. AI engines are non-deterministic. Running the probe once on a Tuesday afternoon and concluding you are invisible because you scored 2 of 35 is not the same as running it three times across a month and seeing consistent results. One data point tells you to look harder. A trend tells you what is actually happening.

A simple rule: this probe gives you an unfair advantage in the 80% of cases where citation visibility is a real buyer-journey factor. For the 20% of businesses whose buyers still find them through referral, phone-book habit, or walk-in traffic, the gap matters less than the probe score suggests.

The quick-start template

Here is the probe scaffold. Copy it, fill in the brackets, and run each query across all seven engines.

I am looking for [buyer description: role, company size, buying situation]. I need [specific outcome: category + location]. Who are the [top options / leading providers / best choices] in [city or region] for [specific niche or specialization]?

Alternative format: What [category] in [city] would you recommend for [specific situation or requirement]?

Problem-description format: I am a [buyer descriptor] and I am trying to solve [specific problem]. Which [category] businesses in [region] are worth talking to?

Follow-up (when your business is not mentioned): What do you know about [your business name] in [city/region]?

Build your five queries before you open the first engine. Run all five on each engine before moving to the next. Keep a consistent order across all seven so your results are comparable. Log citation counts and source notes as you go, not after.

Bigger wins beyond the first probe

Once you have your citation score and gap map, the next layer of work is where the score actually moves.

Build a monthly monitoring routine. A single probe tells you where you are. Monthly probes tell you whether what you are doing is working. Track your citation rate by engine over time, not just overall. An improvement in Perplexity that has not reached ChatGPT yet tells you your recent content is indexing but has not made it into training data refreshes. That distinction drives different tactics.
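Tracking per-engine movement is a one-liner once the monthly logs are in a consistent shape. A sketch, assuming you record each monthly probe as a dict of engine name to mentions out of five (the engine subset and numbers here are made up for illustration):

```python
def engine_trends(monthly_logs):
    """Per-engine change in mentions between the first and latest probe.

    monthly_logs: list of dicts in chronological order, each mapping
    an engine name to mentions out of 5 for that month.
    """
    first, latest = monthly_logs[0], monthly_logs[-1]
    return {engine: latest[engine] - first[engine] for engine in first}

history = [
    {"Perplexity": 1, "ChatGPT": 0},  # month 1 baseline
    {"Perplexity": 2, "ChatGPT": 0},  # month 2
    {"Perplexity": 3, "ChatGPT": 0},  # month 3
]
print(engine_trends(history))  # {'Perplexity': 2, 'ChatGPT': 0}
```

A rising Perplexity line with a flat ChatGPT line is exactly the indexing-versus-training-data signal described above.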

Trace each citation gap to its source gap. For every engine that does not surface your business, look at which sources it does cite. Those sources are the distribution channels feeding that engine's answers. If Bing Copilot consistently cites a specific local business directory and you are not listed there, getting listed is a two-hour task that directly addresses the gap. Often the fix is simpler than the symptom suggests.

Publish content that answers your probe queries directly. Each of your five probe queries is a buyer question. If your website does not have a page or post that directly answers that question, you are invisible to the engines that search the live web. A 700-word post that answers one probe query well is worth more for AI citation purposes than a 5,000-word overview that answers it obliquely. Write the direct answer, publish it, and see where it lands in your next monthly probe.

Read the AI Visibility Gap white paper before you build a response plan. The probe gives you the score. The AI Visibility Gap white paper gives you the structural picture: why different engines weight different signals, what the content and citation patterns are across industries, and what the gap looks like for businesses that have closed it versus ones that have not. Running the probe without reading the deeper analysis is like getting a lab result without the context of what the numbers mean.

The small-business AI consulting connection

This is one check on one dimension of how AI is changing how buyers find businesses. The bigger picture for small and mid-market owners is that the buyer journey has changed structurally. Buyers are doing their first research in AI engines before they ever visit a website, ask for referrals, or read a Google review. Businesses that appear in that first AI answer get a conversation. Businesses that do not are competing for the buyer's attention after someone else already made a first impression.

For small business owners thinking through where AI fits in the broader business strategy, not just the visibility question, the AI Consulting for Small Business page covers the full picture: which AI investments actually move the needle for businesses under $20M in revenue, the common adoption failure modes, and what an engagement looks like for a business ready to act.

Closing

Running this probe for the first time is often the moment an owner realizes the gap is real. They show up in three of thirty-five citations and understand, concretely, that when buyers are using AI engines to find vendors in their category, they are not in the conversation. That clarity is worth more than another month of wondering whether AI search matters for businesses like theirs.

Run the probe tonight. Score it. Write down the three highest-priority gaps. Then decide what the first concrete step is: a directory listing, a blog post that answers a buyer question, a Google Business Profile update. Small, specific, trackable.

If you want to talk through how AI fits into your business at the program level, not just the visibility question, the AI Consulting for Small Business page lays out the full picture and how an engagement works.

Want this built for you instead?

Let's talk about your AI + SEO stack

If you'd rather skip the how-to and have it shipped for you, that's what I do. Start a conversation and we'll figure out the fastest path to results.

Let's Talk
Questions from readers

Frequently asked

Do I need paid accounts on all seven AI engines to run this check?

No. Free tiers are enough to run the full probe on all seven engines. Perplexity, Gemini, Claude, and Bing Copilot all have free tiers that show AI-generated answers. ChatGPT free tier shows AI answers for most general queries. Google AI Overviews and Google AI Mode are visible to any logged-in Google user at no cost. The one upgrade worth considering: ChatGPT Plus gives you access to the newer GPT-4o model, which tends to surface local and niche businesses more reliably than the free tier. If you want the most realistic picture of what a paying buyer sees, a $20 ChatGPT Plus subscription for one month is a reasonable investment. For a first diagnostic, though, free tiers across all seven get you 80% of the picture, and that is enough to identify whether a gap exists and how serious it is.

Is it safe to put my business information into these AI engines when running the check?

Yes, for this specific task. You are typing queries about your business as a buyer would, not feeding the AI tools private internal data. You are asking ChatGPT "who are the top accounting firms in Denver for small business?" not pasting your client list into it. The AI engines will use their existing training data and web indexes to answer, which is exactly what a real buyer would trigger. Where caution applies is the next layer: if you later build content or prompts to improve your visibility, do not paste proprietary customer data, internal pricing logic, unreleased product details, or employee records into consumer-tier AI tools. Keep the diagnostic phase query-only and you are operating within standard data hygiene for consumer AI tools.

Will the results look the same every time, or do AI engines give different answers to the same query?

Different answers, every time. AI engines are non-deterministic. The same prompt on the same engine run two hours apart can produce different citations. That is part of why a single check is a snapshot, not a verdict. For a first diagnostic, it is enough to run each query once. For a recurring visibility measurement, run the same seven-engine probe once per month and track your citation rate over time. The trend matters more than any single result. If you appear in zero of seven probes in month one, two of seven in month two, and four of seven in month three, you are moving in the right direction regardless of day-to-day variation. The free [GEO Visibility Checker](/geo-check) at Elite AI Advantage automates this tracking so you are not running the manual version every month yourself.

What does it mean if my business shows up in Perplexity but not in ChatGPT?

It means the engines are pulling from different sources. Perplexity runs live web searches and cites sources in real time, so your presence there reflects your current indexed web footprint: recent blog posts, press mentions, directory listings, and review sites. ChatGPT (especially on the free tier) draws more heavily on training data, which has a knowledge cutoff and may not reflect your most recent content. A business that shows up in Perplexity but not ChatGPT usually has recent web content but thin historical authority. The fix is building consistent, substantive content over time so your business appears in training data refreshes, not just live searches. The reverse pattern (ChatGPT yes, Perplexity no) is rarer and usually signals older authority that has not been refreshed recently.

What if my competitors show up and I do not? What does that tell me?

It tells you the engine has enough information about your competitors to surface them confidently and not enough about you. AI engines surface businesses when they have seen sufficient consistent evidence: mentions across multiple sources, structured data, authoritative content. If three competitors appear and you do not, start by studying what they have that you do not. Search each competitor by name in Google and look at their backlink profile using a free tool like Moz Link Explorer. Check whether they have more reviews on Google Business Profile, Yelp, or industry directories. Check whether they publish substantive blog content or are quoted in trade publications. The gap is usually visible in 20 minutes. You do not need to outrank them everywhere to show up in AI answers. You need enough consistent signal for the engine to trust that you are a real, credible option.

My business is hyperlocal. Will AI engines ever surface local businesses, or is this just for national brands?

Local businesses can and do appear in AI engine answers, but the query framing matters. Perplexity and Bing Copilot tend to surface local businesses well because they run live web searches and index Google Business Profile, Yelp, and local directories. ChatGPT and Claude are weaker on local specifics unless you include the city in the query. When you run your own probe, always include your city or metro in the query: "best HVAC company in Colorado Springs" not "best HVAC company." Local visibility is also more dependent on review volume and recency than national brand visibility. A local plumber with 200 current Google reviews is more likely to surface in a local AI query than one with 12 reviews from 2019. Google Business Profile completeness is a direct input to Google AI Overviews, which matters a lot for local search.

I am not technical. Is this realistic for me to run on my own?

Yes. This is a browser-and-notebook task. You need seven browser tabs (or windows), a spreadsheet with eight rows, and 25 minutes. No API access. No code. No developer. The seven engines all have free web interfaces. You type a query, read the answer, note whether your business is mentioned. That is the whole mechanical process. The harder part is writing the five queries that represent what your actual buyers ask, and reading the gaps in your results accurately. That is judgment, not technical skill. If you get through the probe and are not sure how to interpret what you found, the [GEO Visibility Checker](/geo-check) runs the same logic automatically and gives you a plain-English score with specific next steps.

How often should I run this check, and what score should I be aiming for?

Run a full 7-engine probe once a month for the first three months to establish a baseline, then quarterly once you have a clear trend. What counts as a good score depends on your category. In highly contested niches (accountants, HVAC, personal injury law), appearing in four of seven engines with consistent citation is strong. In lower-competition niches, appearing in six of seven is achievable within six months of deliberate work. A citation rate under 30% (two or fewer of seven engines) on your primary buyer query is a red flag worth addressing now, because buyers who ask AI engines for options in your category are not hearing your name. The [AI Visibility Gap white paper](/white-papers/ai-visibility-gap-seven-engines) has industry-specific benchmarks if you want a more precise target for your niche.

GUIDED IMPLEMENTATION

Want help running this in your business?

The guide above is the playbook. If you'd rather have someone walk it through with you (or just build the thing), book a 30-min scoping call. We'll map your stack, name the realistic timeline, and tell you straight if it's a fit.