Most business owners I talk to are in the same situation: the team is already using AI every day, nobody has written down the rules, and the owner finds out there is a problem only after something goes wrong. A customer email that should not have been written that way. A vendor proposal with a confidentiality clause that got pasted into ChatGPT. A junior employee who generated a legal-sounding document and sent it without review. By the time the founder hears about it, the risk has already materialized.
An AI Acceptable Use Policy does not prevent your team from using AI. It tells them exactly how to use it without creating problems you will have to clean up later.
This guide walks through the five sections of a two-page AUP your team can actually follow, with sample policy language you can copy and adapt this afternoon. By the time you finish reading, you will have the draft structure, the specific language for each section, and the rollout steps to make sure the policy lands. The whole writing session takes about 30 minutes. If you want the full governance architecture for larger teams, the optional companion deep-dive is The Mid-Market AI Acceptable Use Policy white paper.
Why this matters for small business owners specifically
Large enterprises have compliance departments, legal teams, and vendor management offices that field AI governance questions as a job function. SMBs and mid-market companies have the same risks with a fraction of the infrastructure. When a 20-person services firm has no AI policy, the exposure is not theoretical. Customer data gets pasted into consumer AI tools. Proprietary pricing models get input into chat interfaces. AI-generated content goes out to clients without a human checking whether the facts are correct.
The stakes are different from enterprise, too. For a 20-person firm, one data incident can damage a client relationship that represents 15% of revenue. One AI-generated document with a factual error can create a professional liability question the firm is not equipped to defend. The AUP is not bureaucratic paperwork. It is the minimum structure that keeps the team's AI use working for the business instead of against it.
What an AI Acceptable Use Policy actually does
An AUP is a short internal policy that tells employees which AI tools they can use, what they can use them for, what they cannot put into them, and what happens when something goes wrong. Two pages is the right length for most SMBs. Anything longer and employees stop reading.
Three things separate a good AUP from a generic "AI guidelines" document:
- It names specific tools. "AI" is not a category employees can act on. "Claude.ai Business tier and Microsoft Copilot" is.
- It sets a data rule employees can apply in five seconds. The classification question should not require a judgment call at the moment of use.
- It creates a path for questions. Ambiguity without an escalation path means employees guess, and guesses are where problems start.
Think of it as an employee handbook section, not a legal document.
Before you start
You need:
- 30 minutes and a quiet place to write
- A working document (Google Doc, Word, or Notion page works fine)
- A list of the AI tools your team currently uses, even informally. Ask around if you are not sure. The audit takes five minutes: send a one-question Slack or email asking which AI tools people use for work.
- One person who will be the policy owner going forward. That person is responsible for updating the policy when tools change and fielding questions about what the policy allows.
One thing to settle before you use an AI tool to help draft this policy: the compliance non-negotiables section below applies to you right now, not just to your employees. Use anonymized descriptions of your business. Do not paste actual customer names, contracts, or internal documents into consumer-tier AI.
If you want to assess your company's overall AI readiness before writing the policy, the AI Advantage Audit is a good starting point.
Policy Section 1: Approved Tools
The failure pattern here is writing a vague list. "Employees may use AI tools for productivity" does not tell anyone anything. It does not distinguish between tools with data processing addendums and tools without them. It does not address what tier of a given tool is acceptable. Employees make their own calls, and the calls are not consistent.
What to write instead:
Approved AI Tools
The following AI tools are approved for use in company work as of [effective date]. Use of any AI tool not on this list requires prior written approval from [name or role of policy owner].
Approved tools:
- [Tool name] at the [Free / Pro / Business] tier, for [category of use, e.g., drafting internal documents, generating marketing copy, summarizing meeting notes]
- [Tool name] at the [tier], for [category of use]
Personal AI accounts (accounts registered to an employee's personal email) may not be used for company work. All AI tools used for company work must be registered to the employee's company email address or to a company-managed account.
The approved tool list will be reviewed quarterly. Employees who want to propose adding a new tool should submit a request to [policy owner] with the tool name, the intended use case, and the vendor's data terms link.
The quarterly review matters because AI tools change their terms, add features, change ownership, and alter data handling practices. A tool safe in January may have different terms by July. The review is a 15-minute check, not a project.
On the tier question: consumer free tiers at most AI vendors do not offer zero-data-retention guarantees, and some use conversation data for training unless the user opts out. Business and Enterprise tiers typically offer data processing addendums and stronger retention controls. The practical rule: free tier for tasks with no sensitive data, Business tier for anything that does.
Policy Section 2: Approved Use Cases
The failure pattern: listing use cases so broadly that the section is meaningless. "Employees may use AI for work tasks" covers everything and prohibits nothing. The approved use list should name the specific task categories your business actually uses AI for.
What to write instead:
Approved Use Cases
AI tools approved above may be used for the following categories of work:
- Drafting internal documents: memos, meeting summaries, procedure drafts, and templates
- Drafting external communications for human review before sending: client emails, marketing copy, proposal sections, website content
- Research assistance: summarizing publicly available information, generating outlines, brainstorming options
- Data analysis on non-sensitive data: summarizing spreadsheets or reports that do not contain customer PII, employee records, or financial data classified as sensitive under Section 4 of this policy
- Administrative tasks: scheduling suggestions, proofreading, formatting assistance
All AI-generated content that goes to a customer, client, or third party must be reviewed and approved by a human employee before it is sent. This review requirement applies regardless of how confident the employee is in the AI output.
The human-review requirement is the single most important rule in this section. AI tools hallucinate. They invent citations, misstate facts, and produce plausible-sounding but incorrect content. One AI-generated email with a factual error creates a client service problem. The review step is not optional.
If your team uses AI for coding tasks, add a parallel line: "AI-generated code must be reviewed by a qualified developer before deployment to any production system."
Policy Section 3: Prohibited Use Cases
The failure pattern: listing prohibited uses in such vague terms that employees cannot apply them in practice. "Do not use AI for sensitive matters" is not actionable. "Do not input personally identifiable customer information into any AI tool at the Free or Pro tier" is.
What to write instead:
Prohibited Use Cases
The following uses of AI tools are prohibited regardless of which tool or tier is used, unless a specific exception has been granted in writing by [policy owner]:
- Inputting customer PII (names combined with contact information, financial data, health information, or any data governed by a privacy law or client contract) into any AI tool without a signed Data Processing Addendum in place between the company and the AI vendor
- Generating or reproducing any content that infringes a third party's copyright, trademark, or trade secret
- Using AI to produce content that is materially deceptive, including fabricated customer reviews, fabricated testimonials, or false claims about products or services
- Using AI to draft any legal document that will be relied upon without attorney review, including contracts, demand letters, legal notices, or compliance certifications
- Using AI to make or support employment decisions, including screening resumes, generating performance review language, or drafting disciplinary communications, without HR or management review
- Sharing the company's proprietary pricing models, trade secrets, or confidential business strategy in any AI tool
Violations of this section will be treated as a conduct issue and may result in disciplinary action up to and including termination, depending on the severity and circumstances.
The legal document rule is the one that surprises employees most. AI-generated contracts sound professional and can still be legally defective. The prohibition is not about AI being bad at legal language. It is about the review step AI cannot do: verifying that the document is enforceable in your state and that it says what the business actually intends.
For the employment decision rule: AI bias in hiring is an active area of regulatory attention. New York City, Illinois, and other jurisdictions have rules that apply to employers using AI in the hiring process. The blanket prohibition on employment decisions without HR review keeps the company on the right side of those rules without requiring employees to know which rules apply where.
Policy Section 4: Data Classification
Most SMBs do not have a formal data classification system. They do not need a complex one. They need a two-tier rule employees can apply in five seconds.
What to write instead:
Data Classification and AI Input Rules
Before inputting any company data into an AI tool, employees must determine which tier the data falls into.
Sensitive data (requires Business-tier AI with signed DPA, or no AI input at all):
- Customer or prospect names combined with email, phone, address, or financial information
- Employee records, including performance data, compensation, and HR correspondence
- Contracts, NDAs, and legal agreements
- Financial data identifying specific customers, clients, or employees
- Any data subject to a confidentiality obligation under a client contract
- Any data classified as protected under HIPAA, CCPA, GDPR, or other applicable law
Non-sensitive data (approved for use in any approved AI tool including free tiers):
- Publicly available information
- Internal documents with no customer or employee PII
- Generic business templates, process drafts, or framework documents
- Anonymized data where all identifying information has been removed
When in doubt, treat data as sensitive. The policy owner can advise on specific cases.
The "when in doubt, treat as sensitive" rule is load-bearing. Employees will encounter edge cases the policy did not anticipate. A blanket default to the more protective option protects the business without requiring every borderline case to go through an approval process.
The compliance frame for small business is general hygiene: data protection, IP, and employment law are the three rails. HIPAA applies only if you are a covered entity. SEC rules apply only if you are a registered investment adviser. But the data rules above map onto CCPA obligations for California businesses, state consumer protection laws for most others, and the confidentiality obligations in most client agreements. If your firm has any client under NDA, you are almost certainly already committed to protections that extend to how you handle their information. The AI input rule above is how you keep that commitment.
Policy Section 5: Disclosure and Escalation
Most AUPs end with prohibited uses and leave employees with no guidance on what to do when something goes wrong. That gap is where incidents compound. An employee who accidentally pastes the wrong data into an AI tool and does not know whether to say something will often say nothing, hoping it resolves itself. Silence makes the exposure worse.
What to write instead:
Disclosure and Escalation
Employees must report the following to [policy owner] within 24 hours:
- Any accidental input of sensitive data into an unapproved AI tool or an approved tool at the wrong tier
- Any AI-generated output sent to an external party that the employee believes was inaccurate or misleading
- Any request from a client, vendor, or third party about the company's AI use that the employee cannot answer confidently
- Any AI output that may constitute legal advice, medical advice, financial advice, or a professional opinion the company is not qualified to give
When in doubt, stop and ask. Do not try to correct an AI-related problem without first reporting it. The policy owner will determine whether disclosure to an external party is required and what remediation steps apply.
Questions about what this policy allows should go to [policy owner name or role]. There is no penalty for asking before acting. There may be significant consequences for acting and asking later.
The company will not retaliate against any employee for good-faith reporting of an AI-related concern. Retaliation against employees for reporting concerns is itself a conduct violation.
The no-retaliation clause matters. Employees who believe reporting a mistake will get them disciplined will say nothing. The 24-hour window creates urgency without being punitive. The "when in doubt, stop and ask" default gives employees a clear action in any situation the policy did not anticipate. And for the policy owner: repeat incidents in the same category are a training signal, not just a compliance problem.
The policy-writing prompts that actually work
Once you have the structure above, an AI tool can help you refine the language for your business's specific situation. Four prompt moves that make the output usable rather than generic.
Specify your actual business type. "A 15-person marketing agency with B2B clients under NDAs" produces a more relevant draft than "a small business." The AI has no idea what your confidentiality obligations look like unless you tell it.
Specify the constraint that matters most. For most SMBs, that constraint is either the client confidentiality obligation (services firms) or the employee data rule (any business with HR functions). Name it explicitly in the prompt. "Employees often have access to client financial data. The policy must create a clear rule about what can and cannot go into AI tools that employees will understand without a legal background."
Specify the audience for the final document. "Write this so a 25-year-old employee with no legal background can apply the rules without asking a manager every time" produces plain-language output. "Write this for the company's employment attorney to review" produces a different tone that is appropriate for a different purpose. Name the audience.
Specify what stays static and what can vary. The prohibited use cases section is close to standard and should not be diluted. The approved tools list will change every quarter. Telling the AI which sections are meant to be durable and which are meant to be updated guides how it structures the language in each.
The small business compliance non-negotiables
This section is short because the rule is simple, but it is the most important section in this guide.
Do not put any of the following into the consumer tier of any AI tool:
- Customer names paired with contact information, financial data, or purchase history
- Employee records, salary information, performance documentation, or HR correspondence
- Contracts, NDAs, or any agreement with a confidentiality clause
- Proprietary pricing, margin data, or business strategy documents
- Any data your company is obligated to protect under a client agreement or applicable law
- Personal health information, even if you are not a covered entity under HIPAA (state laws may still apply)
- Information that would give a competitor meaningful insight into your business if it were made public
The practical workflow that respects these rules: use free-tier AI tools for generic, anonymized, or publicly available content only. Use a Business-tier account with a signed Data Processing Addendum for any task that touches actual customer or employee data. The DPA is what makes the Business tier defensible: it contractually restricts the vendor from training on your data and commits the vendor to handling standards your client agreements may require.
For IP: AI-generated content is based on training data that may include copyrighted material. The legal landscape on AI output and copyright ownership is still unsettled. For commercially valuable content, like proprietary methodologies or technical frameworks, have an attorney review before it goes into a client deliverable.
If your company has signed a Business or Enterprise AI agreement with a Data Processing Addendum, the rules on data handling are different from what applies to consumer accounts. Ask your IT lead or the agreement signatory what the DPA actually covers. Do not assume.
When NOT to use AI for your AUP or policy work
AI is not the right tool for every part of this process.
- Any section that will be used as a legal commitment. AI-drafted language in an employment agreement, a client contract, or a regulatory filing requires attorney review before it is binding. Do not treat AI-generated policy language as legally reviewed just because it sounds professional.
- Jurisdiction-specific employment law questions. AI tools have training cutoffs and do not know your state's current employment law. New AI-specific regulations are passing at the state level faster than AI training data can keep up. Ask an employment attorney in your operating states, not an AI tool, when the question is whether a specific rule applies.
- Data breach or incident response. If a data incident has already occurred, the response is governed by state breach notification laws, your cyber insurance policy, and potentially federal law. AI cannot tell you what you are obligated to do. Your attorney and your insurance carrier can.
- Any policy section that affects termination. The language around consequences for policy violations, including anything that mentions termination, needs employment counsel review in your jurisdiction. At-will employment is not universal, and the policy language you use can affect what the company can and cannot do.
A simple rule: AI is a good first-draft tool for the plain-English sections explaining how employees should behave. Trust attorneys and regulators for the sections that carry legal weight.
The quick-start template
Here is the prompt scaffold to use with an approved AI tool at the Business tier if you want help drafting sections of your AUP. Copy it, fill in the brackets, and paste it in.
Draft a section of an AI Acceptable Use Policy for [type of business, e.g., a 20-person B2B marketing agency] covering [policy section: Approved Tools / Approved Use Cases / Prohibited Use Cases / Data Classification / Disclosure and Escalation].
The audience for this document is employees with no legal background. Language should be plain, direct, and specific enough that an employee can apply the rule without asking a manager in most situations.
Key facts about our business: [one to three sentences about your industry, your clients, and any specific compliance obligations you know about].
The constraint that matters most for this section: [e.g., we handle client financial data under NDA / we operate in California and have CCPA obligations / we are scaling hiring and want to use AI in the process].
Do not use legal jargon. Do not make this sound like a law firm template. Sound like a real business writing a real policy.
Store this scaffold in your operations documentation. When the policy needs a new section (and it will), start here and adapt. The 30-minute draft is the foundation, not the final product.
Bigger wins beyond the initial draft
Once the AUP is written and deployed, the next layer of value is in how you use it.
A quarterly review process that keeps the policy current. Set a 90-day calendar event for the policy owner to review the approved tools list, check for regulatory changes in operating states, and update any DPA status. Fifteen minutes when done regularly. Four hours when done reactively.
A policy acknowledgment system that creates an HR record. A signed acknowledgment via DocuSign or Google Forms creates a dated record the employee received and agreed to the policy. That record matters in a dispute and when a client or regulator asks whether your team has AI governance in place.
A training supplement that makes the data classification rule stick. Data classification is the section employees get wrong most often. Build a one-page cheat sheet: always safe for any tier, Business-tier only, never input into AI. Saved as a phone wallpaper or printed on a desk card, it eliminates most edge-case questions before they become incidents.
A vendor review process for new AI tools. Employees will keep finding new tools. Build a two-question intake: what does the tool do with input data, and does the vendor offer a DPA. The policy owner can approve most requests in 10 minutes. Without the process, approvals are informal and impossible to audit.
For a full treatment of AI governance at the team and department level, including how to build an AI council, run a vendor evaluation process, and structure AI policy for mid-market organizations with multiple departments, see The Mid-Market AI Acceptable Use Policy white paper.
The small business AI consulting connection
A two-page AUP is one governance document in one category. The bigger AI question for SMBs and mid-market companies is structural: which workflows across the business are ready for AI, which need a human process redesign first, and which carry enough compliance or brand risk that AI involvement needs active management. Most owners do not have the time to audit every workflow, and the AI tools market is moving fast enough that a tool list from six months ago is already out of date.
Getting that picture right is exactly what an AI consulting engagement does for small and mid-market businesses. The AI Consulting for Small Business page covers the most common entry points, the compliance questions that come up in every SMB engagement, and what working with Elite AI Advantage looks like at the scope and budget most small business owners are actually working with.
Closing
The businesses that get value from AI are the ones that use it consistently, and consistency requires knowing the rules. A team that is unsure what the policy allows will default to asking before acting, or more often, to guessing. The two-page AUP you build this afternoon removes the guessing. It tells employees which tools to use, what to use them for, what to keep out, and who to call when something goes sideways.
Write the first draft today. Get it reviewed by employment counsel before you deploy it. Run the all-hands walkthrough next week. Then schedule the 90-day review and move on. The policy is not the work. It is the structure that lets the work happen without unnecessary risk.
If you want to think through how AI fits into your business at a broader level, the AI Consulting for Small Business page lays out the full picture and how an engagement works.
Let's talk about your AI stack
If you'd rather skip the how-to and have it shipped for you, that's what I do. Start a conversation and we'll figure out the fastest path to results.
Let's Talk