Something is going to go wrong with your AI workflow. Not might. Will. The only question is whether you have a plan in place when it does, or whether you spend the first two hours figuring out who to call and what to say.
I have watched business owners freeze when AI gives a customer a confident, completely wrong answer. I have watched managers try to trace where a confidential file went after someone pasted it into the wrong AI tool. I have watched project managers scramble to explain why AI-generated numbers in a client deck were off by 30%. In every case, the problem was not the AI failure itself. It was the absence of any plan for what to do next.
This guide walks you through building a one-page runbook covering the six failure modes non-technical businesses actually hit, each with a 60-minute response you can run without a security team or IT department. You will spend about 20 minutes reading this guide and another 30 filling in the template for your specific business.
Before you start, see the companion white paper The AI Incident Playbook, which goes deeper on each failure mode if you want more context behind the decisions this guide asks you to make.
Why this matters for small business operators specifically
Large enterprises have incident response programs: security operations centers, legal teams on retainer, compliance staff running quarterly tabletop exercises. When something goes wrong with AI, they have a playbook, a chain of command, and a communications team.
Most small and mid-size businesses have none of that. What they have is a team using AI every day, often without any training on what to do when the output is wrong or data goes somewhere it should not. The gap between "we use AI" and "we know what to do when AI fails" is where the real business risk lives.
The six failure modes in this guide are not theoretical. They are the incidents that actually happen in businesses under $50M in revenue: a hallucinated claim in a customer email, a confidential pricing doc pasted into ChatGPT by someone who did not know the difference between the consumer and business tiers, a wrong number in a financial summary, a biased screening output in hiring, a vendor outage mid-project, gradual quality decline nobody notices until a customer does. You do not need a sophisticated program to address these. You need one page that makes six decisions before the incident happens.
What a one-page AI incident runbook actually is
A runbook is a decision-already-made document. Its job is to eliminate the moment of "okay, what do we do now?" by answering that question before the situation arises.
A good one-page AI runbook has four columns for each failure mode:
- What it looks like (the signal that tells your team this is happening)
- First 60 minutes (the specific steps in the order they happen)
- Who gets notified (by role, not name, because people leave)
- Documentation required (what gets written down and where it lives)
That is the whole structure. Six rows, four columns, one page. The prompt scaffold in the quick-start section will generate a draft in about 10 minutes. The AI Advantage Audit at /audit can help you identify which failure modes are most likely given how your business currently uses AI.
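If someone on your team is comfortable with a few lines of code, here is a minimal sketch of that skeleton: a short Python script that prints the empty six-row, four-column table as markdown, ready to paste into a doc and fill in. The failure modes and columns come straight from this guide; everything else (the TODO placeholders, the markdown rendering) is just one convenient way to start.

```python
# A minimal sketch: render the empty runbook as a markdown table.
# The rows and columns come from this guide; the cell text is yours to fill in.

FAILURE_MODES = [
    "Hallucinated Claim",
    "Confidential Data Exposure",
    "Wrong Number in Deliverable",
    "Biased Output",
    "Vendor Outage",
    "Silent Quality Drift",
]

COLUMNS = [
    "What it looks like",
    "First 60 minutes",
    "Who gets notified",
    "Documentation required",
]

def runbook_skeleton() -> str:
    """Build a six-row, four-column runbook skeleton as a markdown table."""
    header = "| Failure mode | " + " | ".join(COLUMNS) + " |"
    divider = "|" + " --- |" * (len(COLUMNS) + 1)
    rows = [f"| {mode} |" + " TODO |" * len(COLUMNS) for mode in FAILURE_MODES]
    return "\n".join([header, divider, *rows])

if __name__ == "__main__":
    print(runbook_skeleton())
```

Paste the output into any markdown-aware doc tool, replace the TODOs, and you have the same one-page structure described above.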
Before you start
You need:
- 20 minutes of uninterrupted time
- A list of where your business currently uses AI (customer emails, internal documents, scheduling, quoting, content, hiring screening, financial summaries)
- Names and contact information for the three people who need to know first when something goes wrong (owner, operations lead, legal or compliance contact)
- Access to your team's AI tool accounts so you can check which tier they are on (consumer, pro, or business) and whether you have a signed Data Processing Agreement
One thing to settle before you fill in the template: if your team has ever pasted customer information, employee data, financial data, or proprietary business information into the consumer tier of any AI tool, you may already have a compliance gap. We have a dedicated section on this below. It is the most important section in this guide.
Failure Mode 1: A hallucinated claim reaches a customer
The failure pattern: AI writes a customer email, a quote, a product description, or a support response with a confident, specific, completely wrong fact. The price is wrong. The timeline is wrong. The spec does not exist. The policy was reversed two years ago. A team member sends it without reading it closely, because the output looked fine and they were busy.
This is the most common AI incident in small business. It happens because AI does not flag its own uncertainty, and most team members are not trained to treat AI output as a draft that requires verification.
What to ask AI for your runbook entry:
Draft the "Hallucinated Claim" row for a one-page AI incident runbook. Business type: [describe yours]. The row needs four entries: (1) what the incident looks like in the first 30 minutes, (2) the step-by-step response for the first 60 minutes including who pauses the workflow, who contacts the customer, and what documentation is created, (3) who gets notified by role (not name), (4) what gets written down and where. The response should not require legal or technical expertise to execute. Use plain language. The team member who has to execute this may be the office manager or a junior staffer.
The 60-minute response: pause the workflow until the root cause is identified, then find every instance where the wrong claim went out. Contact each affected customer with a factual correction and, if applicable, an apology. Document the incident with a timestamp, the specific claim, how many customers were affected, and what was corrected.
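The documentation step is the one most likely to get skipped under pressure, so it helps to make it mechanical. Below is a minimal sketch, assuming a shared CSV file is an acceptable home for the log: one Python function that appends a timestamped record with the fields named above. The file path and sample values are placeholders, and the same structure works for every row of the runbook, not just this one.

```python
# A minimal sketch of the documentation step: append one timestamped
# incident record to a shared CSV log. LOG_PATH is a placeholder; point it
# at your shared drive.

import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("incident_log.csv")  # hypothetical location
FIELDS = ["timestamp", "failure_mode", "description",
          "customers_affected", "correction"]

def log_incident(failure_mode: str, description: str,
                 customers_affected: int, correction: str) -> None:
    """Append one incident record, writing the header if the file is new."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "failure_mode": failure_mode,
            "description": description,
            "customers_affected": customers_affected,
            "correction": correction,
        })

# Example entry (fictional data, per the compliance section below):
log_incident(
    failure_mode="Hallucinated Claim",
    description="Quote email stated a 2-week turnaround; actual is 6 weeks.",
    customers_affected=1,
    correction="Corrected timeline sent with apology.",
)
```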
The runbook entry should also include a prevention note: which categories of AI output always require human verification before leaving the building. Prices and timelines are the obvious ones. Add your specific categories.
Failure Mode 2: Confidential data enters a consumer-tier AI tool
The failure pattern: a team member needs to draft something fast. They paste a customer contract, an employee record, a pricing sheet, or a client's financial information into the free or consumer-tier version of an AI tool. They get the draft they needed. They do not think about where the data went.
In most consumer-tier AI tools, data submitted through the interface can be used for model training unless the user has explicitly opted out. Even tools that offer an opt-out require the user to actively configure it. Most team members have not done this. The confidential information is now somewhere it should not be.
What to ask AI for your runbook entry:
Draft the "Confidential Data Exposure" row for a one-page AI incident runbook. Business type: [describe yours]. Industries we serve: [list them]. The most sensitive categories of data our team handles: [name them, for example: customer personal information, employee records, pricing data, client contracts]. The four-column format: (1) how we identify this happened, (2) the 60-minute response, (3) who gets notified, (4) what gets documented. Include one sentence on what stops immediately when this is discovered.
The 60-minute response: identify exactly what data was submitted and to which tool. Check the tool's data retention policy and opt-out status. If the tool has a data deletion request process, start it. Notify whoever is responsible for data privacy in your business. Document the incident with a timestamp, the category of data, the tool used, and whether opt-out was active.
The prevention note for this row is the most operationally important one in the whole runbook. Name the approved tools and tiers for each category of data your team handles. If customer personal information should only go into the business-tier account, write that down. If employee records should never go into any AI tool, write that down.
Failure Mode 3: A wrong number appears in a deliverable
The failure pattern: AI generates a financial summary, a project estimate, a performance report, or a presentation. The numbers look reasonable. Nobody checks the underlying data against what AI produced. The deck goes to the client, the investor, the board, or the customer. The number is wrong, and it matters.
This failure mode is common in businesses that use AI for operational summaries, financial roll-ups, client-facing reporting, or any deliverable where numbers are the point. AI reads data, summarizes it, and occasionally makes arithmetic errors, misreads formatting, or applies a calculation the user did not intend.
What to ask AI for your runbook entry:
Draft the "Wrong Number in a Deliverable" row for a one-page AI incident runbook. Business type: [describe yours]. The deliverables where this is most likely: [list them, for example: client reports, financial summaries, project estimates, invoices]. The four-column format: (1) how this gets discovered, typically by whom, (2) the 60-minute response including who verifies the scope of the error and who contacts the recipient, (3) who gets notified, (4) what gets documented. Assume the team executing this is not technical.
The 60-minute response: identify the specific number that is wrong and check whether other outputs from the same workflow are also affected (one wrong number often signals a systemic issue). Contact the recipient before they act on the error. Provide the corrected figure with a brief explanation. Document what went wrong so the fix is made before the workflow runs again.
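The verification step is simple enough to script if the underlying data lives in a spreadsheet export. Here is a minimal sketch, assuming the source data is a CSV and the deliverable quotes a single total; the file name, column name, and quoted figure are all placeholders for your own workflow.

```python
# A minimal sketch: recompute a total from the source CSV and compare it
# against the figure AI put in the deliverable. All names and numbers below
# are placeholders.

import csv

def recompute_total(path: str, column: str) -> float:
    """Re-add the source column so the deliverable's figure can be checked."""
    with open(path, newline="") as f:
        return sum(float(row[column]) for row in csv.DictReader(f))

quoted_in_deck = 148_200.00  # the number the AI-generated deliverable shows
actual = recompute_total("q3_revenue.csv", "amount")

if abs(actual - quoted_in_deck) > 0.01:
    print(f"MISMATCH: deliverable says {quoted_in_deck:,.2f}, "
          f"source data says {actual:,.2f}")
else:
    print("Figure matches source data.")
```

The same pattern, recompute from source and diff against the deliverable, covers the scope check too: run it against every number in the affected workflow, not just the one that was caught.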
The prevention note: name the numerical categories that require human verification before any deliverable leaves the business. Revenue figures, project timelines, and pricing always qualify.
Failure Mode 4: A biased output in a regulated or sensitive context
The failure pattern: a team member uses AI to screen job applicants, draft performance reviews, evaluate customers for a program, or make recommendations in a context where bias has legal or ethical weight. The AI output reflects a pattern in its training data that disadvantages a protected class. The team member does not catch it because the output looks reasonable.
This failure mode is specific to businesses that use AI in hiring, promotion, customer selection, credit evaluation, tenant screening, or any workflow where the output affects people's opportunities. It is less common than the first three failure modes, but the regulatory exposure is higher.
What to ask AI for your runbook entry:
Draft the "Biased Output in a Regulated Context" row for a one-page AI incident runbook. Business type: [describe yours]. The specific AI-assisted workflows where this could occur: [list them]. The four-column format: (1) how a biased output would be identified, (2) the 60-minute response including who pauses the workflow and whether legal counsel should be contacted in the first hour, (3) who gets notified, (4) what gets documented. Include a note on whether this failure mode requires external notification.
The 60-minute response: pause the specific workflow immediately. Do not discard the output that triggered the concern. Contact the person responsible for HR or compliance. If the workflow produced an actual decision affecting a person (a hiring rejection, a tenant denial, a loan decision), call legal counsel now. Document the workflow, the specific output, and what decision was made based on it.
This is the failure mode where the prevention note matters most: which AI workflows in your business touch protected-class decisions, and is there a human review step in every one of them before an outcome is communicated?
Failure Mode 5: A vendor outage disrupts a time-sensitive deliverable
The failure pattern: your team has built a workflow around a specific AI tool. The tool goes down, is throttled, or changes its pricing model mid-project. The deliverable was due at the end of the week. There is no backup plan.
AI vendor outages happen. Major platforms have had outages during peak periods, precisely when the most users are relying on them. Pricing and feature changes arrive without much warning. Businesses that have built core workflows on a single AI vendor are exposed to that vendor's operational decisions.
What to ask AI for your runbook entry:
Draft the "Vendor Outage or Disruption" row for a one-page AI incident runbook. Business type: [describe yours]. The AI tools we rely on most: [list them]. The deliverables most likely to be affected by an outage: [name them]. The four-column format: (1) how an outage is identified and distinguished from a temporary slowdown, (2) the 60-minute response including how clients or customers are notified of a delay if applicable, (3) who gets notified internally, (4) what gets documented. Include a line on the backup workflow if the primary tool is unavailable.
The 60-minute response: confirm the outage is vendor-side by checking the vendor's status page. Assess which in-progress deliverables are affected and what the deadline exposure is. If a client deliverable is at risk, contact the client with an updated timeline before they ask. Document the outage start time and what was affected.
The prevention note: which of your AI-dependent workflows could run with a different tool, and which cannot? The businesses that handle vendor outages best have already answered this question and keep a backup account at a second provider.
Failure Mode 6: Silent quality drift nobody noticed until a customer did
The failure pattern: AI output quality declines gradually. A vendor updates their model. A prompt that worked well six months ago produces slightly worse output now. The team is used to the output and stops scrutinizing it as carefully. The customer notices before the team does, either through a complaint, a returned deliverable, or quietly going elsewhere.
This is the hardest failure mode to detect because there is no moment of crisis. There is just a slow erosion of quality that produces no single incident but eventually shows up in customer feedback, retention rates, or reputation.
What to ask AI for your runbook entry:
Draft the "Silent Quality Drift" row for a one-page AI incident runbook. Business type: [describe yours]. The AI outputs we rely on most for customer-facing or business-critical work: [list them]. The four-column format: (1) what signals would indicate drift has occurred, including both internal signals like team observations and external signals like customer feedback, (2) the 60-minute response when a drift signal is confirmed, (3) who is responsible for ongoing monitoring, (4) what gets documented when drift is identified. Include a frequency for proactive spot-checks.
The 60-minute response when drift is confirmed: pull 10 to 20 recent outputs from the affected workflow and compare them against baselines from when the workflow launched. Identify whether the change is in the prompt, the model, or the input data. Check the vendor's changelog if the model changed. Pause the workflow if the quality gap is significant. Document what changed and when.
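If you store baseline outputs as plain-text files (see the prevention note below), the sampling step can be scripted. Here is a minimal sketch, assuming baselines and recent outputs sit in two folders of .txt files; the folder paths are placeholders, and each folder needs at least as many files as you want to sample. The judgment stays human: the script only pairs samples side by side into one review sheet so the comparison actually happens.

```python
# A minimal sketch: pair recent outputs with stored baselines in a single
# review sheet for human side-by-side comparison. Folder paths are
# placeholders for wherever your team keeps samples.

import random
from pathlib import Path

BASELINE_DIR = Path("baselines/customer_emails")  # saved at workflow launch
RECENT_DIR = Path("recent/customer_emails")       # exported from live workflow

def build_review_sheet(n_samples: int = 10) -> str:
    """Pair n random recent outputs with random baselines for review."""
    baselines = sorted(BASELINE_DIR.glob("*.txt"))
    recents = random.sample(sorted(RECENT_DIR.glob("*.txt")), n_samples)
    sections = []
    for i, recent in enumerate(recents, start=1):
        baseline = random.choice(baselines)
        sections.append(
            f"--- Sample {i} ---\n"
            f"[BASELINE: {baseline.name}]\n{baseline.read_text()}\n\n"
            f"[RECENT: {recent.name}]\n{recent.read_text()}\n"
            f"Reviewer verdict (same / worse / better): ____\n"
        )
    return "\n".join(sections)

if __name__ == "__main__":
    Path("drift_review_sheet.txt").write_text(build_review_sheet())
```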
The prevention note: name who does the spot-checks, at what frequency (monthly is right for most SMBs), and where baseline samples are stored. If you have no baselines yet, create them this week.
The non-technical prompts that actually work for runbook building
Building a runbook is one of those tasks where AI is genuinely useful for the construction work, so the prompts matter.
Give AI your specific business context. "Small business" is not specific enough. "12-person accounting firm that uses AI for client emails, tax-season status updates, and document request follow-ups" is specific enough. The more specific the context, the more directly applicable the runbook entry.
Specify the skill level of the person executing. If the person who will run the 60-minute response is a junior staffer or an office manager without technical background, say that explicitly. AI will remove jargon and simplify steps when it knows the executor is non-technical.
Ask for plain language on the notifications question. The "who gets notified" column is where most runbooks fail. AI defaults to formal incident response language. Ask specifically: "Write the notification row as if you are telling a non-expert who to call and in what order. Include what to say in the first call."
Specify that the output must fit one page. AI will write as much as you let it. Constrain the output: "Each row should be no longer than 4 bullet points per column. Total document must fit on a single 8.5 x 11 page in 11-point font."
The small business compliance non-negotiables
This section is short because the rule is simple, but it is the most important section in this guide.
Do not put any of the following into the consumer tier of any AI tool:
- Customer names, addresses, phone numbers, email addresses, or any personally identifiable information
- Employee records, including performance reviews, compensation information, health-related information, or disciplinary records
- Financial data tied to a named customer, client, or employee (account numbers, balances, transaction histories)
- Proprietary business information that would harm the business if a competitor saw it (pricing models, client lists, unreleased products)
- Any information covered by a non-disclosure agreement or confidentiality clause
- Data subject to industry-specific or privacy regulation (HIPAA for health information, GLBA for financial information, FERPA for student records, GDPR for EU residents, CCPA for California residents)
The practical workflow: build the runbook and prompt templates using anonymized examples and placeholder language. Test workflows with fictional data before running them on real information. Move to a business-tier account with a signed Data Processing Agreement before any regulated data enters the workflow.
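One practical aid for the placeholder step: a small script that swaps obvious identifiers for labeled placeholders before text goes anywhere near an AI tool. Below is a minimal sketch using simple regex patterns for emails, US-style phone numbers, and Social Security numbers. The patterns are illustrative, and a scrub like this misses names and context clues, so treat it as a safety net on top of the tier rules above, never a substitute for them.

```python
# A minimal sketch of a pre-paste scrub: replace obvious identifiers with
# labeled placeholders. Patterns are illustrative, not exhaustive; this will
# miss names, addresses, and anything identifiable from context.

import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(scrub("Follow up with jane.doe@example.com or call (555) 123-4567."))
# -> Follow up with [EMAIL] or call [PHONE].
```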
The compliance gap that catches most small businesses off guard is employee data in AI-assisted HR workflows. If any AI-assisted HR workflow touches protected-class information, it needs a human review step before any outcome is communicated.
If your business has signed a business-tier or enterprise agreement that includes a Data Processing Agreement, the rules on what you can process are different. Ask your lawyer what that agreement covers. Do not assume.
When NOT to use AI for incident response
The runbook is an AI-assisted document you build before an incident. The response itself is human-led. There are categories where AI should not be involved.
- Customer apologies for data or privacy incidents. A customer whose data was mishandled deserves a human apology from a named person. AI can draft a template, but a human rewrites it before it goes out.
- Legal notifications under regulatory frameworks. If an incident triggers a notification obligation under HIPAA, GDPR, CCPA, or state breach laws, legal counsel reviews the final language. AI does not draft it.
- Conversations with employees about a biased-output HR incident. If a biased output affected a hiring or promotion decision, that conversation is a human conversation.
- Communication with your insurer about a claim. Route insurer communication through your broker or counsel, not through AI-drafted text.
A simple rule: AI is an unfair advantage on the 80% of incident-related tasks that are structural (building the runbook, drafting internal templates, generating the documentation log). Trust human judgment for the 20% where the communication has legal, regulatory, or personal weight.
The quick-start template
Here is the prompt scaffold that generates a draft one-page runbook. Paste this into Claude or ChatGPT at the business tier, fill in the brackets, and you will have a working draft in about 10 minutes.
Build a one-page AI incident runbook for a [describe your business type and size]. We currently use AI for: [list your AI use cases]. The most sensitive data our team handles: [list data categories]. The team members most likely to execute an incident response: [describe their roles and technical level].
Format the runbook as a six-row table with these failure modes as row headers: Hallucinated Claim, Confidential Data Exposure, Wrong Number in Deliverable, Biased Output, Vendor Outage, Silent Quality Drift.
Columns: (1) What it looks like, (2) First 60 minutes - step by step, (3) Who gets notified (by role), (4) What gets documented.
Each cell: no more than 4 bullet points. Plain language. No jargon. The person executing this may not be technical. The whole table must fit on one 8.5 x 11 page.
At the bottom, add three lines: the approved AI tools and tiers for each data category, the name/role of the person who owns the runbook and updates it quarterly, and the date of the last review.
For recurring use, save the filled-in runbook in two places: a shared drive folder and a printed copy near the workstation where AI is used most. Add it to team onboarding materials. Review once per quarter and update the "last reviewed" date even if nothing changed.
Bigger wins beyond the one-page runbook
Once the runbook is built and the team knows where it lives, three follow-on investments compound the value significantly.
An AI acceptable-use policy that prevents incidents rather than just responding to them. The runbook is reactive. A one-page acceptable-use policy is proactive: it defines which tools are approved, which data categories can go into which tool, and what review steps are required before AI output reaches a customer. A 14-person firm can write one in an afternoon. The businesses that have one tend to see far fewer incidents.
A quarterly spot-check log for quality drift. A simple log, 10 outputs per quarter reviewed against baselines, stored with the date and reviewer name, gives you an early warning system with no technical tooling required. One hour per quarter is the investment. The alternative is finding out from a client.
A vendor evaluation checklist built around your runbook. Every time your business considers adding a new AI tool, run it through the runbook: which failure modes does this tool introduce, does the vendor offer a DPA, what is their uptime track record? The runbook you built to respond to incidents doubles as a framework for evaluating whether a new tool should ever be adopted.
The small business AI consulting connection
This runbook is one tool in one category. The bigger AI question for small and mid-market businesses is structural: as more of your operations run through AI workflows, the gap between businesses that have thought through failure and businesses that have not will widen. The businesses that build even minimal incident-response infrastructure now (a one-page runbook, an acceptable-use policy, a quarterly review) are in a meaningfully stronger position than the businesses that have none of that when something goes wrong.
If you are wrestling with the bigger picture (which AI workflows are worth building, what the governance structure should look like, how to evaluate vendors without a technical team), the AI Consulting for Small Business page covers the full scope: where AI fits in SMB and mid-market operations, the common adoption failure modes, and what a consulting engagement looks like when it is scoped to a business your size.
Closing
The goal of the runbook is not to make your business AI-proof. It is to make sure that when something goes wrong, the first two hours go well. A business that responds to an AI incident with clear steps, quick customer communication, and accurate documentation comes out of it with less damage than a business that responds with improvisation and silence. The one-page format keeps this achievable. Most businesses skip incident planning entirely because the phrase sounds like it requires a security team and a six-figure consultant. It does not. It requires 30 minutes and the decisions this guide asked you to make.
Build the first draft of your runbook this week. Use the scaffold in the quick-start section, fill in the six rows, put it somewhere your team can find it in the first five minutes of an incident. That alone puts your business ahead of most. For the deeper picture of how AI fits into your business at the program level, the AI Consulting for Small Business page lays out the full scope and how an engagement works. The companion white paper The AI Incident Playbook goes into each failure mode in more detail if you want to go further after finishing the runbook.
Let's talk about your AI stack
If you'd rather skip the how-to and have it shipped for you, that's what I do. Start a conversation and we'll figure out the fastest path to results.
Let's Talk