It is a Friday afternoon at a 120-person professional services firm. A senior account manager is finishing a proposal for a $400K client. She pastes the client's internal financial projections into ChatGPT to help sharpen the executive summary. She is not being reckless. She has seen colleagues do the same thing for months. Nobody ever said she couldn't. The proposal goes out. The client later discovers their proprietary data was processed by an external AI model. The firm loses the account and spends four months managing the fallout. When the CEO asks HR what the company's AI policy said about this, the answer is that there is a 34-page document on the intranet that three people have read.
That is the failure mode this paper is written to prevent. Not the dramatic breach, not the rogue model, not the science-fiction scenario. The quiet, well-intentioned mistake that happens when good employees operate in a policy vacuum. A usable AI acceptable use policy is not a legal instrument designed to absorb liability. It is a decision tool designed to answer one question employees ask ten times a day: "Can I use AI for this?" If they have to search a SharePoint folder to find a 34-page document to answer it, the answer is always going to be yes by default.
This paper names the seven sections that make an AI acceptable use policy work for a company between 50 and 300 people, and the three clauses almost every policy skips, the ones that create the real exposure. No compliance software. No law firm billing hours. You can deploy it in a week.
1. Why most AI policies are theater
A Fortune 500 company can afford a 30-page AI policy because it has a legal team to write it, a compliance function to enforce it, a training team to certify it, and an HR team to adjudicate edge cases. A 150-person company has a general counsel on retainer at $400 an hour and a two-person HR department running payroll and recruiting simultaneously. When companies that size borrow enterprise AI policies, they get documents written to be defensible in litigation, not read on a Tuesday morning by someone who wants to use Claude to summarize a vendor contract. The result: employees ignore the policy, follow it so strictly they get no value from AI, or guess wrong in ways the company discovers later.
The companies deploying AI well at this size keep their policies short, specific, and opinionated. Named tools. Concrete examples. Clear instructions for situations not explicitly covered. Stored somewhere employees actually look, not in a policy management system requiring a separate login. The goal of the framework below is a document that fits on two pages in normal font and answers "can I use AI for this?" without a call to legal.
2. Section 1: Approved tools (and the version trap)
Every AI policy needs an explicit named list of approved tools. Not a general statement that "approved AI tools may be used for business purposes." An actual list: ChatGPT (Teams account, company-managed), Claude (Teams account, not personal), Copilot for Microsoft 365 (enterprise license only). Employees should not have to guess whether the AI feature built into their project management tool counts as approved.
The version trap is where most approved-tool lists fail within six months. A company approves "ChatGPT" without specifying account type. Six months later, employees are using free personal accounts, which have different data retention policies than the Teams tier that legal actually reviewed. Or they are using browser extensions that pipe prompts through a third-party aggregator the company never evaluated. The approval is meaningless if it does not specify the account tier, the access method, and who owns the license.
Approved AI tools as of [date]: ChatGPT (Teams account only, accessed via company-provisioned login), Claude.ai (Teams plan, company account), Microsoft Copilot (via Microsoft 365 enterprise license). Personal accounts on approved platforms are not approved for business use. AI features embedded in third-party tools are approved only when those tools have already been approved by IT. Any tool not on this list requires IT and legal sign-off before use. This list is reviewed quarterly. Check [specific URL] for the current version.
The quarterly review date matters. An approved-tool list two years old is not a policy, it is a history lesson. Assign someone to review it every 90 days and update the version date whether or not anything changes. If employees are using unapproved tools because the approved ones are too limited, the fix is not stricter enforcement. Add the right tool or explain the limitation. Employees route around rules that feel arbitrary.
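For teams that want the approved-tools list to be checkable rather than just readable, a minimal sketch of the registry as data follows, with the 90-day review window enforced in code. The tool names and cadence come from the sample above; the structure and function names are illustrative, not a standard.

```python
# Sketch of a machine-readable approved-tools registry. Tool names mirror
# the sample policy above; the data structure itself is illustrative.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # quarterly review, per the policy

APPROVED_TOOLS = [
    {"tool": "ChatGPT", "plan": "Teams", "access": "company-provisioned login"},
    {"tool": "Claude.ai", "plan": "Teams", "access": "company account"},
    {"tool": "Microsoft Copilot", "plan": "Microsoft 365 enterprise license",
     "access": "enterprise SSO"},
]

LAST_REVIEWED = date(2025, 1, 6)  # update at every quarterly review

def review_is_overdue(today: date | None = None) -> bool:
    """True once the quarterly review window has lapsed."""
    today = today or date.today()
    return today - LAST_REVIEWED > REVIEW_INTERVAL

def is_approved(tool: str, plan: str) -> bool:
    """A tool counts as approved only at the named plan tier."""
    return any(t["tool"] == tool and t["plan"] == plan for t in APPROVED_TOOLS)
```

The reason `is_approved` takes both tool and plan is the version trap: "ChatGPT" on a personal account fails the check even though "ChatGPT" appears in the registry.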
3. Section 2: Approved use cases
Listing approved tools without listing approved use cases leaves the hardest question unanswered. Employees do not want to know what tools they can use in the abstract. They want to know what they can do with them.
The list does not have to be exhaustive. It has to be representative enough that employees can pattern-match their actual work against it. A workable list for a professional services or B2B company covers: drafting and editing internal documents, summarizing meeting transcripts or research, generating first drafts of external-facing copy for human review, analyzing data sets that contain no client-identifiable information, writing and debugging code, answering general research questions, and automating repetitive formatting or classification tasks.
Approved use cases include: drafting and editing internal documents and communications, summarizing research and meeting notes, generating first drafts of marketing copy, proposals, and internal reports where a human reviews and approves the final output, analyzing internal data sets free of personally identifiable or confidential client data, code generation and debugging, and general research not involving confidential business information. This list is illustrative, not exhaustive. When in doubt, apply the data classification rules in Section 4 and escalate if unsure.
The phrase "where a human reviews and approves the final output" for anything going outside the company is the most important clause in this section. If a client gets a deliverable with an error, the company is responsible. AI is a drafting tool for external work, not a publishing tool.
4. Section 3: Prohibited use cases
The prohibited list works best when it names specific categories with concrete enough examples that an employee can self-identify whether their situation qualifies. Six prohibitions matter for most mid-market companies.
- Presenting AI-generated claims as independently verified when they are not. AI models hallucinate citations, statistics, and case precedents. Any AI-generated claim in a legal filing, board report, regulatory submission, or client deliverable must be verified against a primary source by a human who can vouch for it.
- Making consequential decisions about individuals without human review. Performance evaluations, compensation changes, employment decisions, and credit assessments cannot be delegated to an AI model without human sign-off.
- Feeding restricted data to an AI tool in violation of the data classification rules. This overlaps with Section 4 intentionally: violating data classification via AI is a specific policy offense, not just a classification error.
- Generating communications that misrepresent the sender's identity or role. Drafting an email "as if from" a senior executive without that executive's knowledge and approval is prohibited regardless of whether the executive reviews it before sending.
- Reproducing third-party content in ways that likely violate copyright. Employees should not use AI tools to reproduce substantial portions of books, articles, competitor materials, or licensed content.
- Using personal accounts on any AI platform for business content. This belongs in both the approved-tools list and the prohibited list because it is important enough to name twice.
5. Section 4: Data classification (what can touch which tool)
This is the most operationally important section in the policy, and the one most often written in a way nobody can apply in real time. The goal is a simple matrix: data categories the company handles, mapped to which AI tools can touch them.
Four tiers work for most companies at this size.
Public information: anything already published or freely available. Blog posts, public pricing, general industry research. Can touch any approved AI tool without restriction.
Internal-only information: operational data the company uses internally but that is not confidential. Meeting notes on non-sensitive topics, general project timelines, internal process documentation. Can touch approved AI tools on company-managed accounts. Should not go into personal accounts or unapproved platforms.
Confidential information: material the company treats as proprietary. Client contracts, pricing models, employee compensation, acquisition targets, unreleased product plans, internal legal analysis. No external AI tools. If an employee needs AI assistance with confidential material, the policy should name any approved on-premises or private-deployment tool available. If none, the answer is "manual only for this category."
Regulated information: anything covered by a compliance obligation. HIPAA-protected health data, FERPA student records, PII under GDPR or CCPA, payment card data under PCI DSS, financial data under SOX. Same restriction as confidential information, plus explicit confirmation from legal that any tool used has the necessary data processing agreements in place. The major external AI platforms offer BAAs and data processing agreements at their enterprise tiers. Most companies have never signed them.
Data classification and AI tool permissions: Public information (approved tools, any account tier). Internal-only information (approved tools, company-managed accounts only). Confidential information (no external AI tools; approved internal tools only if explicitly designated). Regulated information (no AI tools unless legal has confirmed a valid data processing agreement is in place for that specific tool and data category). When unsure what tier your data falls into, treat it as confidential until you have confirmation otherwise.
Default-to-restrictive on uncertainty is a deployable rule. "Use your judgment" is not. One thing most policies miss: AI output carries the same classification as the input that generated it. A summary of a confidential document is confidential. Employees should not paste that summary into a less-restricted context just because the AI produced it.
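If any part of the matrix feeds an automated workflow, both rules in the previous paragraph are simple to encode: unknown tiers collapse to confidential, and output inherits the input's tier. A minimal sketch follows, assuming hypothetical tool-class labels; the tier names come from the policy above.

```python
# Sketch of the classification matrix as a lookup. Tier names come from the
# policy; tool-class labels and function names are illustrative.
PERMISSIONS = {
    "public":       {"approved_any_tier", "approved_company_account", "designated_internal"},
    "internal":     {"approved_company_account", "designated_internal"},
    "confidential": {"designated_internal"},        # no external AI tools
    "regulated":    {"dpa_confirmed_by_legal"},     # nothing without legal sign-off
}

def allowed(tier: str, tool_class: str) -> bool:
    # Default-to-restrictive: an unknown tier is treated as confidential.
    effective = tier if tier in PERMISSIONS else "confidential"
    return tool_class in PERMISSIONS[effective]

def classify_output(input_tier: str) -> str:
    # AI output carries the same classification as the input that produced it.
    return input_tier if input_tier in PERMISSIONS else "confidential"
```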
6. Section 5: Disclosure obligations
This section answers three questions: when employees must tell clients that AI was used, when they must tell colleagues, and when no disclosure is required.
On client disclosure, the standard is materiality: if AI contributed to a deliverable in a way a reasonable client would consider material to evaluating the work, disclose it. Practical test: "Would the client pay the same price if they knew this was AI-generated with human review?" If uncertain, disclose or ask a manager. AI used for grammar and formatting does not require disclosure. AI that generated the first draft of a substantive analysis, where the framing shaped the final output, probably does when the client is paying for expert judgment. For internal deliverables going to senior stakeholders, note AI involvement in the document header so the reviewer knows what kind of scrutiny the output needs.
Disclosure requirements: For external deliverables, disclose AI involvement when the AI's contribution was substantive to the analysis or recommendation, not merely editorial. When uncertain, disclose or escalate to your manager before delivery. For internal documents going to leadership, note AI involvement in the document header. For routine internal use (drafting emails, summarizing notes, formatting documents), no disclosure is required. Client contracts may include specific AI disclosure requirements that override this general policy. Check the engagement terms before using AI on any client project.
That last sentence covers a gap most policies miss. Enterprise clients are increasingly inserting AI restriction and disclosure clauses into vendor agreements. If the company has signed an agreement prohibiting AI tools on an engagement, the internal policy does not override the contract. Someone needs to track which client agreements have AI-specific terms and flag them before project kickoff.
7. Section 6: Human-in-the-loop requirements
"Human-in-the-loop" gets cited in AI governance without ever being defined in a way an ops director can enforce. Define it: any AI output that triggers a consequential action must have a human review it before the action occurs. Name what "consequential" means specifically enough that employees can self-apply it. For most mid-market companies, the requirement covers four categories: client-facing documents (proposals, contracts, formal deliverables, billing communications, dispute escalations), financial commitments (any AI-generated analysis informing a purchase, budget approval, or pricing change requires actual verification of the underlying data, not just a senior approval on something unread), employment decisions (AI can help screen resumes, but the human who evaluated the candidate makes the call on who advances or who goes on a performance plan), and automated pipelines (any workflow where AI output triggers downstream actions without per-instance human review must be documented, assigned a named owner, and monitored).
Human-in-the-loop requirements: The following categories require explicit human review before any action is taken on AI output. Client-facing documents: proposals, contracts, formal deliverables, billing communications, dispute escalations. Financial commitments: any AI-generated analysis informing a purchase, contract, or budget decision above $5,000. Employment decisions: any AI output informing a hiring, promotion, or performance management decision. Automated AI pipelines: any workflow where AI output triggers an automated downstream action must be documented and assigned a named owner. Routine internal use does not require formal review, but the human acting on AI output remains responsible for its accuracy.
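For teams running AI inside automated workflows, the review gate can be encoded directly. The sketch below mirrors the categories and the $5,000 threshold from the sample language; the function and category names are illustrative.

```python
# Sketch of a human-review gate. Categories and the $5,000 threshold mirror
# the sample policy; names are illustrative.
FINANCIAL_REVIEW_THRESHOLD = 5_000  # dollars, per the sample language

def requires_human_review(category: str, amount: float = 0.0) -> bool:
    """True when policy requires explicit review before acting on AI output."""
    if category in {"client_facing", "employment_decision"}:
        return True
    if category == "financial" and amount > FINANCIAL_REVIEW_THRESHOLD:
        return True
    # Routine internal use needs no formal review, but the human acting on
    # the output remains responsible for its accuracy.
    return False
```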
8. Section 7: The escalation path
"When in doubt, ask your manager" is not an escalation path. It pushes uncertainty one level up without giving that manager any better tools to resolve it. A real escalation path has three elements: a named contact or role, a response time commitment, and a documented record of the decision. At a 50-to-300 person company without a dedicated compliance function: day-to-day use-case questions go to the team manager with a one-business-day expectation; unresolved questions escalate to whoever owns the AI policy, typically the COO, CTO, or a designated operations lead; anything involving regulated data, a client contract clause, or potential legal exposure goes directly to legal before proceeding.
Escalation path: For questions about whether a specific use case is permitted, contact your direct manager. If unresolved within one business day, escalate to [designated AI policy owner role, name]. For questions involving client contracts with AI clauses, regulated data categories, or potential legal exposure, escalate directly to [legal contact] before proceeding. All escalations and decisions should be logged in [specific location]. Document the date, the question asked, and the answer given.
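The log itself can live in a spreadsheet or a ticketing system; what matters is that the fields are fixed. Below is a minimal sketch of one record, with the one-business-day expectation approximated as a staleness check; all field names are illustrative.

```python
# Sketch of an escalation log entry. Fields mirror the policy text: the date,
# the question asked, and the answer given. Field names are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class EscalationRecord:
    raised_on: date
    raised_by: str
    question: str
    decided_by: str = ""   # manager, policy owner, or legal
    decision: str = ""     # the answer given, in plain language

    def overdue(self, today: date) -> bool:
        # One-business-day expectation, approximated as one calendar day here.
        return not self.decision and (today - self.raised_on).days > 1
```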
The escalation section should also address what happens when an employee discovers a violation after the fact. A policy that encourages self-reporting and treats good-faith mistakes differently from willful violations surfaces problems while they are still manageable. A policy that punishes a self-reported mistake as harshly as a willful violation produces a culture of concealment. Name the difference explicitly.
9. The three clauses everyone omits
Clause 1: The AI output verification requirement for factual claims.
AI models produce confident-sounding text that is sometimes specifically wrong: fabricated citations, incorrect statistics, misrepresented precedents, outdated regulatory references presented as current. Most policies say "employees are responsible for AI output," which is true but tells nobody what "responsible" means operationally. The fix: any statistic, citation, regulatory reference, or legal precedent in an external deliverable must be verified against a primary source before the document is finalized, and the employee who finalizes it must be able to name the source.
AI output verification: Any factual claim in an external deliverable, including statistics, citations, legal or regulatory references, and attributed statements, must be independently verified against a primary source before the document is finalized. The employee who finalizes the document is responsible for that verification and must retain the source. "The AI said so" is not a sufficient defense for an inaccurate claim in a client deliverable or a regulatory submission.
Clause 2: The contract review trigger.
Client and vendor agreements increasingly include AI-specific clauses: restrictions on tool use, disclosure requirements, requirements for licensed-professional review of AI outputs, or outright prohibitions on feeding client data to AI systems. These can supersede the company's internal policy in ways individual employees will never know about unless someone tells them. The fix is a simple check before any new engagement or agreement: the account manager or project lead confirms whether the contract contains AI-related terms and flags it to legal if it does. Ten minutes per contract. The alternative is discovering mid-engagement that the company has been violating an obligation it did not know existed.
Contract review trigger: Before beginning work on any new client engagement or executing any new vendor agreement, the relevant account manager or project lead must confirm whether the agreement contains terms related to AI tool use, AI disclosure, or data processing restrictions. Any agreement containing such terms must be flagged to [legal contact] before AI tools are used on that engagement. A log of flagged agreements and approved use parameters will be maintained by [designated owner].
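The flagged-agreements log needs only a handful of fields. A minimal sketch with illustrative names; the gating rule is the point: no AI use on an engagement with AI terms until legal has recorded what is approved.

```python
# Sketch of one entry in the flagged-agreements log. Field names are
# illustrative; the gate at the bottom is the part that matters.
from dataclasses import dataclass, field

@dataclass
class EngagementRecord:
    client: str
    has_ai_terms: bool                  # confirmed by the project lead at kickoff
    flagged_to_legal: bool = False
    approved_ai_uses: list[str] = field(default_factory=list)  # set by legal

    def ai_tools_permitted(self) -> bool:
        # No AI use until contracts with AI terms have been cleared by legal.
        return not self.has_ai_terms or bool(self.approved_ai_uses)
```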
Clause 3: The policy version and acknowledgment requirement.
Most AI policies are published once, distributed by email, and never touched again. Six months later the policy is outdated, new employees have never seen it, and nobody can confirm who read the current version. "We had a policy" is a weak position. "We had a policy and here are timestamps showing which employees acknowledged it and when" is a defensible one. The fix: the document carries a version number and date, every material revision requires a new acknowledgment from all employees, and the acknowledgment log lives in HR records. The last two sentences of the sample below are the ones that change leadership behavior: they create an incentive to communicate updates, because the alternative is a weaker position when something goes wrong.
Policy version and acknowledgment: This policy is version [X.X], effective [date]. All employees must acknowledge this policy upon hire and whenever a material revision is issued. A new version will be issued at minimum annually or whenever a significant change in approved tools, data classification, or prohibited uses occurs. The acknowledgment log, including employee name, version acknowledged, and date, is maintained in [HR system]. Acting against a policy version an employee has acknowledged may be treated as a policy violation. Acting against guidance that was never communicated may not.
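The acknowledgment log is the one artifact here that benefits most from being queryable, because HR needs to answer "who has not acknowledged the current version" on demand. A minimal sketch, with hypothetical names, records, and version string.

```python
# Sketch of the acknowledgment log and a coverage check. Names, records,
# and the version string are all illustrative.
from datetime import date

CURRENT_VERSION = "2.1"  # bump on every material revision

acknowledgments = [
    {"employee": "a.rivera", "version": "2.1", "on": date(2025, 3, 4)},
    {"employee": "j.chen",   "version": "2.0", "on": date(2024, 9, 12)},
]

def missing_current_ack(all_employees: list[str]) -> list[str]:
    """Employees with no acknowledgment of the current version on file."""
    current = {a["employee"] for a in acknowledgments
               if a["version"] == CURRENT_VERSION}
    return [e for e in all_employees if e not in current]

# Example: missing_current_ack(["a.rivera", "j.chen"]) returns ["j.chen"].
```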
10. What to do this week
Start with a single afternoon and a blank document. Write the approved-tools list first because it forces the conversation you need to have with IT about which accounts are actually company-managed versus personal. Then write the data classification matrix. Those two sections together answer the question causing the most daily confusion: "Can I paste this into that?"
If you already have an AI policy, run it against the three missing clauses. Does it have a specific verification requirement for factual claims in external deliverables, not just a general "employees are responsible" line? Does it include a contract review trigger? Does it carry version control with an acknowledgment log? If any of the three are absent, add them before the next distribution. Each is one paragraph. Each closes real exposure. Have outside counsel review the data classification section and the three clauses before publishing. Not a full policy audit, just a 30-minute call confirming the tiers align with your industry's specific obligations. That conversation costs $200 and closes the gaps that cost $200,000 later.
If you want a structured view of where your current AI program stands before you write or revise the policy, the AI Advantage Audit is the readiness diagnostic built for companies at this size. It surfaces the operational gaps your policy needs to address and the workflows where you are already exposed, in about 20 minutes, with a prioritized list of what to fix first.
If you know roughly where you want to go and need help structuring the engagement, the Scope Sketcher walks through what a policy buildout looks like at three engagement tiers, with honest estimates on what each level requires from your team.
And if you want to talk through your specific situation with someone who has built these frameworks inside mid-market companies before, the contact page is the right next step. Bring your employee count, your industry, and a rough sense of what AI tools are already in use. The conversation takes 30 minutes and usually surfaces the two or three gaps that matter most before anything else.
Two pages. Seven sections. Three clauses. The employees who read it will know what to do. The employees who do not will be acting against a known policy rather than in a vacuum you created. That distinction matters when something eventually goes wrong, and something eventually will.
