A procurement officer at a regional logistics company is looking for a freight audit software vendor. She doesn't open Google. She opens Perplexity, types "best freight audit software for mid-market shippers," and reads the three companies it names. She visits two of their websites, schedules demos with both, and four weeks later signs a contract with one of them. The third company in her category, the one that would have been a perfect fit, never appeared. Its website ranked on the second page of Google for the right keywords. Its content was good. Its sales team was excellent. It simply did not exist in the place where this particular buyer started looking. The company had no idea it was invisible there, because it had never measured its presence in AI search at all.
This is not a hypothetical. It is a pattern I see across mid-market businesses right now, and it is accelerating. AI search is not a feature Google added to its existing product. It is a separate visibility surface with its own logic, its own ranking signals, and its own list of winners and losers. Most mid-market businesses have never measured their standing on that surface. This paper gives you the diagnostic to do it, engine by engine, in a single afternoon.
1. The visibility you can't see
Traditional search visibility is measurable in tools every marketing team already has. You know your keyword rankings. You know your organic traffic. You can pull a position report for any term you care about and see exactly where you stand against competitors. That measurement infrastructure does not exist, by default, for AI search. There is no equivalent of Google Search Console telling you how often ChatGPT mentioned your brand last month.
This creates a blind spot with a specific shape. When a buyer uses a traditional search engine and you rank on page two, you have data showing that. You can see the gap, estimate the traffic cost, and make a business decision about whether to close it. When a buyer uses an AI search engine and you are simply not cited, you have nothing. No impression data. No rank position. No signal at all. The buyer moved through an entire research cycle, formed opinions about your category, and eliminated you from consideration before you ever knew they were looking.
The buyers doing this are not marginal. Perplexity crossed 15 million monthly active users in early 2025. ChatGPT's search functionality handles hundreds of millions of queries per month. Google AI Overviews appear on 15 to 20 percent of all searches, with the highest concentration on commercial-intent queries. Bing Copilot is the default experience in Microsoft 365. The population that starts its research in one of the seven engines this paper covers, and never reaches traditional search results, grows every quarter.
The cost of that invisibility is not abstract. It is revenue flowing to businesses that figured out measurement before you did. The diagnostic in Section 8 gives you a way to find out exactly where you stand. But first you need to understand how each engine works, because they are not the same product and they do not respond to the same inputs.
2. AI search is a separate surface, not a Google feature
The common mistake is treating AI search as a variation of SEO. The logic goes: if I rank well on Google, the AI engines will find my content and cite me. This is wrong in ways that have real consequences.
Traditional search engines index pages and rank them for specific keyword queries. The ranking signals are well-understood: domain authority, page authority, on-page optimization, backlink profiles, technical health. A page optimized for "freight audit software" has a reasonable shot at ranking for that phrase, and if it ranks, it gets impressions.
AI search engines do something different. They take a natural language question, synthesize an answer from multiple sources, and cite a small number of those sources in the response. The synthesis step is the part that breaks the traditional SEO analogy. A page that ranks number three for a keyword does not automatically make it into the synthesized answer. The AI engine is asking a different question: "Is this source authoritative enough on this specific topic that I would cite it to a user who asked me to explain the topic?" That question rewards breadth of coverage, depth of explanation, structural clarity, and the presence of the kind of evidence a researcher would use, such as specific numbers, named examples, and clear attributions. It penalizes thin content, promotional language, and pages that rank by manipulating signals the AI does not care about.
The practical implication: your SEO performance and your AI search performance are correlated but not identical. You can rank on page one for a competitive keyword and be cited zero times in AI search results. You can have a single well-structured long-form guide that barely cracks page three on Google and get cited consistently by Perplexity. The signal sets overlap in some areas (high-authority domains tend to do well in both) and diverge sharply in others (thin affiliate-optimized content does well in some Google contexts and very badly in AI search).
Seven engines are currently material enough to measure. Each has a distinct audience, a distinct use pattern, and a distinct citation logic. Treating them as one surface will give you a blurry picture. Treating them separately gives you a diagnostic you can act on.
3. Perplexity: where research-mode buyers go
Perplexity's user base skews toward people doing deliberate research rather than casual browsing. These are buyers in the evaluation phase, not the awareness phase. They are not asking "what is freight audit software," they are asking "which freight audit software vendors handle LTL shipments for companies doing 5,000 to 10,000 shipments per month." The query specificity is higher, which means the citation relevance is also higher. A business that gets cited in a Perplexity response for that kind of query is being named to a buyer who is already qualified.
Perplexity uses live web search to generate responses, so content freshness matters more here than on Google. A blog post from 2021 that still ranks on Google may not appear in a Perplexity response if a competitor published a 2025 version of the same content. Pages need to be crawlable, up to date, and structured in a way that makes them easy to excerpt.
To measure your Perplexity presence: write 10 to 15 queries a qualified buyer in your category would actually use. Make them specific and conversational. Run each in a private browser session to avoid personalization effects. Record which brands appear in the response text and which get cited as source links. Count how many of your queries produced a citation for your business. That ratio is your Perplexity citation rate. Under 20 percent on relevant queries is an actionable gap. Under 5 percent means you are effectively invisible to research-mode buyers on the engine where they concentrate.
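If you want the tally to stay honest across quarters, a few lines of code beat a mental count. A minimal sketch, assuming you record each query's outcome by hand after running it in a private session; the queries and results shown here are placeholders, not real data:

```python
# Manual Perplexity spot check, tallied into a citation rate.
# Each entry: (query, mentioned_in_text, cited_as_source), recorded by hand
# after running the query in a private session. Entries are placeholders.
results = [
    ("best freight audit software for mid-market shippers", True, True),
    ("freight audit vendors that handle LTL shipments", True, False),
    ("how to automate freight invoice auditing", False, False),
    # ... one entry per query in your 10-15 query set
]

mention_rate = sum(mentioned for _, mentioned, _ in results) / len(results)
citation_rate = sum(cited for _, _, cited in results) / len(results)

print(f"Mentioned in response text: {mention_rate:.0%}")
print(f"Cited as a source link:     {citation_rate:.0%}")
```

The same bookkeeping works unchanged for every engine in Section 8; only the query set and the recorded outcomes differ.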
What good looks like: a mid-market B2B business with a strong content program should realistically target a 30 to 50 percent citation rate on well-matched queries within 6 to 12 months of focused investment. Getting there from zero typically requires publishing more substantive long-form content, not more frequent short-form content.
4. ChatGPT: the volume engine
ChatGPT has the largest installed base of any AI assistant. It is where the broadest cross-section of buyers, from enterprise procurement to small business owners, are forming opinions about categories, vendors, and options. The volume matters: if you are invisible in ChatGPT, you are invisible to the largest audience currently using AI to inform purchasing decisions.
ChatGPT's citation behavior splits across two modes. The base model draws on training data and will not surface content published after its knowledge cutoff. The web-search-enabled mode (ChatGPT Search) fetches live results and behaves more like Perplexity. These are different measurement questions with different remedies.
To measure base model presence: run 5 to 10 category-level prompts with web browsing disabled. Prompts like "what are the leading vendors for [your category]" or "which companies do [your specific service] for [your target customer type]." Record whether your company is named, described accurately, and characterized positively or neutrally. If you do not appear in any of 10 relevant queries, you have a training-data presence problem that sustained content publishing over the next 6 to 12 months will partially address.
To measure search-mode presence: enable web search and run the same queries. Record citation rates the same way you would for Perplexity. The numbers often differ significantly, because content published after the training cutoff only surfaces in search-mode responses.
The trap I see most often: companies assume that because they have a Wikipedia page, a LinkedIn profile, and decent Google rankings, ChatGPT knows who they are. Sometimes it does. Often it does not, or it knows a version of the company that is two years out of date, or it conflates the company with a similarly named competitor. Run the test before assuming the answer.
5. Google AI Overviews and AI Mode
Google AI Overviews (formerly Search Generative Experience) appear at the top of search results pages for a significant fraction of commercial-intent queries. They pull from Google's index but do not simply promote the top-ranked page. They synthesize from multiple sources and cite a small set. Being in an AI Overview for a high-intent query is more valuable than ranking number one for the same query, because the AI Overview occupies more visual space and loads before the user can scroll to organic results.
Google launched AI Mode in 2025 as a full-page AI search experience similar in format to Perplexity, with adoption growing fastest among buyers already comfortable with AI search tools.
To measure your AI Overview presence: run 10 to 15 buyer-intent queries in Google from a logged-out private browser. Record whether an AI Overview appears, and if it does, whether your business is cited. Note that Google Search Console folds AI Overview impressions into its regular performance report without breaking them out separately, so these spot checks remain your primary per-query signal.
The key asymmetry here: AI Overviews strongly prefer sources already ranking in the top 5 to 10 organic results. A business with strong SEO has a structural advantage. A business with weak organic rankings is penalized at two levels: lower organic rank and a lower citation selection rate. Fix organic SEO first for this engine, then measure AI Overview citation separately.
6. Claude, Gemini, and Bing Copilot
These three engines are smaller by usage share than the first four but collectively material enough to measure, and each reaches a distinct audience segment.
Claude has a user base weighted toward knowledge workers, analysts, and technical buyers. Its default interface draws primarily from training data (live web search is a newer addition and may not be active in every session), so recent content publishing has limited real-time influence. Test it the same way you would test ChatGPT's base model: run category queries and record whether your business appears, how it is described, and whether the description is accurate.
Gemini (Google's standalone AI assistant, separate from AI Overviews) has live search integration and generally follows citation logic similar to AI Overviews. Strong organic SEO correlates with stronger Gemini citation rates. Its audience skews toward Google Workspace users, which in B2B terms means it reaches operations and finance buyers embedded in Google's ecosystem.
Bing Copilot is the default search experience in Microsoft 365, Edge, and Windows Search. That default placement means a large share of corporate knowledge workers encounter it without ever deciding to use AI search. If your buyers work inside Microsoft 365 environments, they are likely using Bing Copilot regardless of whether they identify as AI search users. Test it separately from Google because the Bing index weights signals differently enough that rankings diverge in meaningful ways.
To measure all three: run your same 10 to 15 query set across each interface. Note citation presence, accuracy of description, and whether your competitors are cited in responses where you are not. The cross-engine comparison often reveals that a business is well-known in some AI contexts and unknown in others, which gives you a prioritization signal for where to focus remediation first.
7. What "invisible" actually costs (with dollar math)
Invisibility in AI search is not a vanity problem. It is a revenue problem with a measurable dollar range, and the range is large enough that most mid-market CFOs would treat it as material if they saw it calculated.
Based on behavioral research published in early 2025, 30 to 55 percent of B2B buyers now use at least one AI search engine during their research process for purchases above $10,000. That share is growing by 8 to 12 percentage points per year. For a business in an active-research category, which sits at the top of that range, AI search is already touching the majority of significant deals.
Now apply a conservative influence factor. Being cited in AI search correlates with a 15 to 25 percent higher probability of making the consideration set for that buyer. Being absent does not guarantee elimination, but it means starting without the credibility signal that citation provides.
Concrete example for a mid-market B2B company with $8M in annual revenue: 25 new-logo deals per year at $45,000 average contract value. Assume 50 percent of those buyers research through AI search, so 12 to 13 deals annually touch AI search somewhere in the buyer journey. If AI invisibility takes you out of the consideration set on 20 percent of those deals, that is 2 to 3 deals per year lost before your first sales conversation: $90,000 to $135,000 in first-year revenue, or roughly $270,000 to $400,000 over a typical three-year customer relationship. Not from buyers who chose a competitor. From buyers who never evaluated you at all.
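To run the same math against your own pipeline, here is a minimal sketch of the calculation. Every default value is the illustrative figure from the example above, including the assumed three-year customer lifetime; replace them with your own numbers before putting the result in front of a CFO.

```python
# Back-of-the-envelope cost of AI search invisibility.
# All defaults are the illustrative figures from the example above;
# every one of them is an assumption to replace with your own data.

def invisibility_cost(
    deals_per_year: int = 25,              # new-logo deals per year
    avg_deal_value: float = 45_000,        # average contract value
    ai_research_share: float = 0.50,       # share of deals researched via AI search
    consideration_penalty: float = 0.20,   # consideration-set probability lost
    customer_lifetime_years: float = 3.0,  # assumed retention horizon
) -> dict:
    ai_touched = deals_per_year * ai_research_share
    deals_at_risk = ai_touched * consideration_penalty
    first_year_loss = deals_at_risk * avg_deal_value
    lifetime_loss = first_year_loss * customer_lifetime_years
    return {
        "deals touched by AI search": ai_touched,
        "deals at risk per year": deals_at_risk,
        "first-year revenue at risk": first_year_loss,
        "lifetime revenue at risk": lifetime_loss,
    }

if __name__ == "__main__":
    for label, value in invisibility_cost().items():
        print(f"{label}: {value:,.0f}")
```

With the defaults, this prints 2.5 deals at risk per year, $112,500 in first-year revenue, and $337,500 over the customer lifetime, the midpoints of the ranges quoted above.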
That range rises significantly in high-research-intensity categories (enterprise software, legal services, specialized consulting) and falls in categories with short, low-research buying cycles. Run the math against your own deal values to get the number that belongs in front of your CFO.
The secondary cost is compounding. A vendor consistently cited in AI responses builds a reputational presence in how AI engines understand the category. A vendor never cited stays unknown. The gap compounds quarterly, which means the business that starts measuring today has a structural head start over the one that waits until the revenue impact is visible in the numbers.
8. The seven-engine diagnostic (a repeatable checklist)
This diagnostic is designed to be run by one person in a single afternoon, with no specialized tools. You need a spreadsheet, a private browser window, and accounts for each of the seven engines (most are free). Run it quarterly to track movement over time.
Step 1: Build your query set. Write 15 queries a qualified buyer in your category would realistically use when researching their options. Include category-level queries ("best [your category] vendors for [your customer type]"), problem-level queries ("how do I [the problem you solve]"), and comparison queries ("alternatives to [your top competitor]"). These should be phrased conversationally, the way a person talks to an AI assistant, not the way someone formats a Google keyword.
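For concreteness, this is what the start of a query set might look like for the freight audit example from the opening. The queries are illustrative placeholders, not a recommended set; write yours in your buyers' own vocabulary.

```python
# Illustrative query set for the freight audit example. All entries are
# placeholders -- substitute your own category, customer type, and competitor.
QUERY_SET = [
    # Category-level
    "best freight audit software for mid-market shippers",
    "which freight audit vendors work for companies doing 5,000 to 10,000 shipments a month",
    # Problem-level
    "how do I catch billing errors in carrier invoices",
    "how can a logistics team reduce overcharges on LTL freight",
    # Comparison
    "alternatives to [top competitor] for freight audit and payment",
]
```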
Step 2: Run the query set across all seven engines. Test surfaces, in rough order of current volume: ChatGPT (base model, no web search), ChatGPT (web search enabled), Perplexity, Google AI Overviews, Google AI Mode, Gemini, Claude, Bing Copilot. That is eight test surfaces across the seven engines, because ChatGPT is tested in both modes. Record each response in a spreadsheet column.
Step 3: Score each response. For each query-engine combination, record three things: whether your business was mentioned in the response text (yes/no), whether it was cited as a source link (yes/no), and whether a direct competitor was cited when you were not (yes/no). That third column is your competitive displacement signal.
Step 4: Calculate your citation rates. Overall citation rate is the number of responses that cite you as a source divided by the total query-engine combinations tested; track the mention rate from the first column the same way. Per-engine rates tell you where the gap is worst. Competitive displacement rate tells you who is filling the space you are not.
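A sketch of the Step 3 and Step 4 arithmetic, assuming you export the scoring spreadsheet as a CSV with one row per query-engine combination; the file name and column names here are assumptions, not a required format:

```python
import csv
from collections import defaultdict

# Assumed format: one row per query-engine combination from Step 3, with
# yes/no columns named: query, engine, mentioned, cited, competitor_cited.
with open("ai_search_diagnostic.csv", newline="") as f:
    rows = list(csv.DictReader(f))

def rate(subset, field):
    """Share of rows in subset where the yes/no field is 'yes'."""
    return sum(r[field].strip().lower() == "yes" for r in subset) / len(subset)

print(f"Overall citation rate: {rate(rows, 'cited'):.0%}")
print(f"Overall mention rate:  {rate(rows, 'mentioned'):.0%}")

# Per-engine breakdown shows where the gap is worst.
by_engine = defaultdict(list)
for r in rows:
    by_engine[r["engine"]].append(r)
for engine, subset in sorted(by_engine.items()):
    print(f"{engine}: cited in {rate(subset, 'cited'):.0%} of {len(subset)} responses")

# Competitive displacement: a competitor cited where you were not.
absent = [r for r in rows if r["cited"].strip().lower() != "yes"]
if absent:
    print(f"Competitive displacement rate: {rate(absent, 'competitor_cited'):.0%}")
```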
Step 5: Audit your top-cited competitors. For any competitor appearing in more than 30 percent of responses where you did not, spend 20 minutes on their site. What content are they publishing that you are not? What structural features do they have (specific statistics, comparison tables, use-case breakdowns, clear authorship) that yours lacks? That is your remediation roadmap.
How to read the results:
- Target citation rate on relevant queries: 25 percent or above is a competitive position. Below 10 percent is an actionable gap.
- Per-engine benchmarks: Perplexity and Google AI Overviews tend to have the highest citation rates for well-optimized B2B content. ChatGPT base model tends to have the lowest for companies that have not been publishing consistently for at least two to three years.
- Competitive displacement: if a direct competitor appears in more than half of the queries where you do not, that competitor has a structural AI visibility advantage that will compound.
- Run the diagnostic again 90 days after any content investment to measure movement. AI citation patterns can shift faster than organic rankings because the engines draw on fresher indexes.
What this diagnostic does not tell you: why you are invisible in technical terms, or which specific content changes will move the needle fastest. Those questions require a deeper audit of your content architecture, structured data, domain authority, and how your content maps to buyer intent. The diagnostic tells you the size and shape of the gap. Closing it is a separate engagement.
9. What to do this week
Run the diagnostic. It costs an afternoon and no budget. Build the 15-query set for your category today, run it across the seven engines this week, and score the results. You will have your baseline citation rate within five business days.
Above 25 percent on relevant queries is a defensible position; the work from here is incremental. Below 10 percent means the gap is already costing you revenue and will cost more as AI search share grows. Apply the dollar math from Section 7 to your own deal values to build the case for your CFO or marketing director.
The businesses pulling ahead are not doing anything exotic. They are publishing substantive long-form content that answers real buyer questions with specific numbers and named examples. They are maintaining consistent publishing cadences so AI engines encounter fresh signals on a regular basis. The businesses falling behind are treating AI search as a future problem rather than a current measurement gap.
If you want a structured starting point, the AI Advantage Audit includes an AI visibility diagnostic built into the readiness assessment. It goes deeper than the seven-engine checklist in this paper and gives you a prioritized remediation list rather than just a citation rate. If you already have your citation rate and want to scope an engagement to close the gap, the Scope Sketcher walks you through what a 90-day AI visibility program looks like at three investment levels, from a content-only approach to a full structured-data and citation-building program.
And if you want to talk through what your specific numbers mean and what the realistic improvement timeline is for your category, the contact page is the right next step. Bring your diagnostic results, your top three competitors' URLs, and a rough sense of your average deal value. A 30-minute scoping call will tell you whether the gap is large enough to prioritize now, which engines to focus on first, and what content investment is required to move the citation rate into competitive range.
You are either cited or you are not. Most businesses in your category have not measured which. Run the diagnostic and find out before your competitors do.
