The sales demo looked good. The trial worked well enough. Now a contract is sitting in your inbox and someone on the vendor's team is asking when you plan to sign. Most SMB buyers sign here because the tool passed the demo test and the pressure feels reasonable. That is the wrong moment to stop evaluating.
The demo tests whether the product works in ideal conditions with a trained salesperson at the wheel. It does not test what happens to your data after you upload it, whether you can leave if the product stops working, what happens when pricing changes, or whether support will pick up the phone at 2am when the tool breaks in the middle of a deadline.
This guide walks through a 20-minute pre-signature vendor vet. By the end, you will have a simple scorecard you can complete in one call with the vendor, plus the specific contract clauses worth negotiating before you sign. The task sections cover data and retention questions, model lock-in questions, the support test, pricing-change clauses, and the red-flag list. The companion white paper AI Vendor Red Flags goes deeper on contract language and multi-vendor comparison frameworks if you need more after this.
Why this matters for SMB buyers specifically
Enterprise buyers have legal teams who read every contract before signing and procurement officers who run structured vendor evaluations for months. SMBs usually have a founder or an operations manager who squeezes the evaluation into an afternoon between other things. The asymmetry is real: the vendor has sold this contract hundreds of times and knows exactly which clauses to minimize and which concerns to redirect. The SMB buyer is doing it once, or maybe twice, with limited time.
The result is that SMBs sign contracts with data retention clauses they never read, pricing-change provisions that allow triple-digit percentage increases, and lock-in structures that make switching expensive after 90 days of setup. None of these are necessarily dealbreakers, but all of them are negotiable if you catch them before signature. After signature, they are your problem.
What the vendor evaluation process actually covers
A vendor evaluation for an AI tool has four distinct layers: the data and privacy layer (what happens to your information), the product and dependency layer (what happens if you want to leave), the support layer (what happens when something breaks), and the commercial layer (what happens to the price). Most buyers only evaluate the product layer, which is the layer the vendor most wants you to focus on.
Think of the evaluation as a structured interview where you are the hiring manager. The vendor is the candidate. A good interview covers ability and character. Ability is the demo. Character is how they answer hard questions about data, support, and pricing under pressure.
Before you start, read the AI Vendor Red Flags companion paper, which has a full rubric for scoring vendor responses. The 20-minute vet in this guide covers the questions. The white paper covers the scoring.
Before you start
You need:
- The vendor contract or proposed order form, even a draft version
- 20 minutes scheduled with the vendor's sales contact or account manager, not a support rep
- A real use case in mind, one specific workflow you plan to use the tool for, including whether that workflow touches customer data, employee data, or proprietary business information
- Access to the RFP Analyzer at /rfp-analyzer, which parses uploaded contracts and flags clauses by category
One thing to settle before you upload anything to the trial: the data question. Where your information goes after you put it into the system is the most important question in the evaluation. We have a dedicated section on this below. It is non-negotiable.
Task 1: Ask the data and retention questions
The failure pattern: most SMB buyers ask whether the platform is "secure" and accept a general affirmative as an answer. "Secure" is not a data retention policy. Security covers access controls. Retention covers what happens to your content, your prompts, and your uploaded files after you use them.
What to ask the vendor instead:
I need clear answers to three questions before I sign. First: does your platform train its models on the content or prompts my team submits? Second: how long is my data retained after I delete it or cancel, and where in writing can I find that policy? Third: do you offer a Data Processing Agreement, and will you send me a copy to review before we proceed?
The three questions are the floor. A vendor who cannot answer all three clearly in writing, not just verbally in a call, is a vendor whose data practices you do not understand well enough to sign with. The DPA question is especially important if your team will use the tool for anything involving customer names, emails, financial records, or health information. The DPA is the legal document that defines the vendor's responsibilities around your data. Without a signed DPA, their marketing claims about data privacy are unenforceable. For the tool version you use during a trial: assume consumer-tier tools train on your inputs unless the contract says otherwise. Most enterprise and business tiers have an explicit no-training clause. Get it in writing.
For vendors who answer all three questions well: ask one more. "What happens to my data if I cancel or if your company is acquired?" This surfaces retention periods post-cancellation and data portability on acquisition, both of which matter if the vendor gets bought by a competitor.
Task 2: Ask the model lock-in questions
The failure pattern: buyers evaluate the platform as it exists today and do not think about what switching costs look like in 18 months. AI platforms change fast. The model that impressed you in Q1 may be deprecated by Q3. The feature set that fits your workflow today may be gone in a pricing restructure by the following year. Lock-in is the structural problem that makes those changes expensive.
What to ask the vendor instead:
Three questions about switching and continuity. First: if I want to export everything I have built in your platform (workflows, templates, prompt libraries, trained models), what does that look like, and what format does it come out in? Second: if your platform is shut down or significantly changed, what is my notice period and what exit support do I get? Third: does your pricing tier limit which underlying AI models I can access, and who controls which model my team uses?
The export question reveals whether your investment is portable. A vendor who cannot give a specific answer about export format is a vendor whose platform is designed to retain you by making exit painful. The model access question matters because several platforms take a markup on underlying model access (typically OpenAI, Anthropic, or Google models) and do not let you switch to a better or cheaper model without upgrading your tier. If the vendor is reselling someone else's model, you should know that, because it affects both the cost structure and what happens if their relationship with the underlying provider changes.
For the contract itself, look for these terms: "minimum commitment period," "auto-renewal notice window," and "early termination fee." All three can turn a month-to-month trial into a 12-month commitment faster than most buyers realize.
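If you have the contract as a plain-text export, you do not have to hunt for these terms by eye. A few lines of Python do the first pass; the filename and term list here are placeholders you would adapt to your own contract:

```python
import re

# Placeholder path: export the contract or order form to plain text first.
CONTRACT_PATH = "vendor_contract.txt"

LOCK_IN_TERMS = [
    "minimum commitment",
    "auto-renew",            # matches "auto-renewal" and "auto-renews"
    "early termination fee",
]

with open(CONTRACT_PATH, encoding="utf-8") as f:
    text = f.read().lower()

for term in LOCK_IN_TERMS:
    count = len(re.findall(re.escape(term), text))
    print(f"{term!r}: found {count} time(s)" if count else f"{term!r}: not found")
```

A hit does not mean the clause is bad; it means you read that paragraph before signing instead of after.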
Task 3: Run the 2am support test
The failure pattern: buyers evaluate support by the quality of the sales experience. Salespeople are good at sales. They are not representative of what a support ticket looks like at 2am when your team is on a deadline and the platform is returning errors.
What to ask the vendor instead, during the evaluation call:
Describe the worst-case support scenario: my team is using your platform on a deadline, something breaks, and it is outside normal business hours. Walk me through exactly what the escalation path looks like and what response time guarantees I have in my tier. Then tell me one specific incident from the last 12 months where a customer had a platform outage and what resolution looked like.
The incident question is the important one. Sales contacts who can give you a specific, honest account of a recent outage and how it was handled are working at a company that takes support seriously. Sales contacts who deflect to "our uptime is 99.9%" without addressing the incident question are avoiding something. Follow up with: "Can you point me to your status page and the incident history?" Credible vendors have public status pages with incident logs. Vendors without them either have not had incidents (unlikely) or do not publish them (more likely, and a problem).
For any platform your team will use daily: the support tier matters as much as the feature tier. A platform that charges $200/month per seat but includes 4-hour response time SLAs is often better value than a platform that charges $80/month but puts support tickets in a queue with 48-hour response windows. Get the SLA in writing, not in the sales pitch.
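If you want to put numbers on that trade-off, a rough model helps: subscription cost plus the cost of your team waiting on support. The sketch below uses illustrative assumptions (5 seats, 6 incidents a year, a blocked team costing $300 an hour), not benchmarks:

```python
def annual_support_cost(seats, price_per_seat, incidents_per_year,
                        response_hours, blocked_cost_per_hour):
    """Rough annual cost: subscription plus time lost waiting on support."""
    subscription = seats * price_per_seat * 12
    waiting = incidents_per_year * response_hours * blocked_cost_per_hour
    return subscription + waiting

# Illustrative assumptions; swap in your own seat count, incident rate,
# and hourly cost of a blocked team before drawing conclusions.
premium = annual_support_cost(5, 200, 6, 4, 300)    # 4-hour SLA tier
budget = annual_support_cost(5, 80, 6, 48, 300)     # 48-hour queue tier
print(f"$200/seat, 4h SLA:   ${premium:,.0f}/year")
print(f"$80/seat, 48h queue: ${budget:,.0f}/year")
```

Under those assumptions the cheaper tier costs several times more per year once waiting time is counted, which is the point of pricing the SLA rather than the seat.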
Task 4: Read the pricing-change clauses
The failure pattern: buyers negotiate the starting price and do not read the clauses governing how that price can change mid-contract or at renewal. Several major AI platforms have restructured pricing mid-cycle in the last 24 months, with some customers seeing cost increases of 40% to 80% on renewal without meaningful notice.
What to ask the vendor, and what to look for in the contract:
What is the mechanism for price changes mid-contract and at renewal? Specifically: can you change the price of my current tier with notice, and if so, what is the minimum notice period? And what terms do I have at renewal: is the renewal price locked to my current rate, or priced at whatever the then-current rate is?
In the contract, search for the words: "pricing," "rates," "changes," "modification," and "renewal." The clause you are looking for usually reads something like: "Company reserves the right to change pricing upon 30 days notice" or "Renewal rates subject to then-current pricing." Both of those clauses mean the vendor can increase your price significantly. What you want instead: "Pricing during the current term is fixed as set forth in the Order Form" and "Renewal pricing will not exceed [X]% above the current term rate."
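The same kind of scan from Task 2 works here, extended to print each matching sentence so you can read the clause in context rather than just confirming the word appears. Again, the filename is a placeholder:

```python
import re

CONTRACT_PATH = "vendor_contract.txt"  # placeholder: plain-text export
KEYWORDS = ["pricing", "rates", "changes", "modification", "renewal"]

with open(CONTRACT_PATH, encoding="utf-8") as f:
    text = f.read()

# Naive sentence split; good enough for a first pass over contract prose.
sentences = re.split(r"(?<=[.;])\s+", text)

for sentence in sentences:
    lowered = sentence.lower()
    matched = [k for k in KEYWORDS if k in lowered]
    if matched:
        print(f"[{', '.join(matched)}] {sentence.strip()}")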
For annual contracts above $5,000: push for a price cap on renewal. Vendors will often agree to a 5% to 10% annual cap if asked directly. Most buyers do not ask, which is why most vendors do not offer it. The cap negotiation is worth 20 minutes of a legal or ops call.
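The dollar impact of that cap compounds at every renewal. A quick projection with illustrative numbers, comparing a negotiated 10% cap against the 40% uncapped increases at the low end of what some buyers have seen:

```python
base = 10_000          # illustrative annual contract value
capped = uncapped = base

for year in range(1, 4):
    capped *= 1.10     # negotiated 10% renewal cap
    uncapped *= 1.40   # 40% uncapped increase, low end of recent renewals
    print(f"Year {year}: capped ${capped:,.0f} vs uncapped ${uncapped:,.0f}")
```

By year three the uncapped contract costs roughly twice the capped one, which is why the 20-minute cap negotiation is cheap insurance.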
Task 5: Build the red-flag list and scorecard
The failure pattern: buyers accumulate vendor responses across multiple calls and emails and then try to compare them from memory. The vendor with the best salesperson wins, not the vendor with the best answers. A simple scorecard removes that bias.
What to ask AI to build for you:
I am evaluating [Vendor Name] for [specific use case]. Here are the answers they gave to my five evaluation questions: [paste vendor responses]. Score each answer on a scale of 1 to 3: 1 = clear and verifiable answer, 2 = partial or hedged answer, 3 = deflection or no answer. Then flag any answers that match the following red-flag patterns: refuses to share a DPA, cannot describe data retention post-cancellation, no public status page or incident history, pricing change clause allows changes with less than 60 days notice, no export path for proprietary assets built in the platform.
The red-flag list is cumulative. One red flag is a caution. Two red flags are a problem. Three or more red flags on different questions means the vendor is not ready for business use at the level of commitment you are considering. The RFP Analyzer at /rfp-analyzer does a faster version of this for uploaded contracts, surfacing flagged clauses by category without you having to search through the document manually.
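The threshold logic is simple enough to run outside the AI loop too. A minimal sketch; the question keys and scores below are placeholders for one vendor's answers, and treating every score of 3 as a red flag is a simplification of the more specific patterns in the prompt above:

```python
# Scores per question: 1 = clear and verifiable, 2 = partial or hedged,
# 3 = deflection or no answer (placeholder values for one vendor).
scores = {
    "data_retention": 1,
    "dpa_available": 2,
    "export_path": 3,
    "support_sla": 1,
    "pricing_changes": 3,
}

# Simplifying assumption: any outright deflection counts as a red flag.
red_flags = [q for q, s in scores.items() if s == 3]

if len(red_flags) >= 3:
    verdict = "not ready for this level of commitment"
elif len(red_flags) == 2:
    verdict = "problem: negotiate or keep looking"
elif len(red_flags) == 1:
    verdict = "caution: get this answer in writing before signing"
else:
    verdict = "proceed to contract review"

print(f"Red flags: {red_flags or 'none'} -> {verdict}")
```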
For the scorecard output: store it in whatever you use for vendor tracking (a shared spreadsheet, a project management card, a Notion database). If you are comparing multiple vendors, run the same five questions with each one and compare scores side by side. The vendor with the strongest scorecard almost always performs better over a 12-month contract than the vendor who ran the best demo.
The SMB-specific prompts that actually work
After running vendor evaluations with SMB clients across a range of industries, I have found four prompt moves that consistently produce better output than generic queries.
Specify the actual use case. Generic AI evaluation prompts produce generic answers. "Evaluate this AI vendor" gets you a list. "I am a 12-person services firm using this tool to generate client proposals and manage email follow-up sequences. Score this vendor for that specific use case" gets you something actionable. The use case narrows which risks matter and which the vendor's claims address.
Specify the contract section you need summarized. Contract review prompts work best when you name the section. "Summarize the data retention and deletion section of this contract in plain English and flag any clauses that give the vendor discretion over my data post-cancellation" produces a useful answer. "Review this contract" produces a summary that misses the one clause that matters.
Specify the comparison format. If you are comparing vendors, ask AI to structure the output as a side-by-side table with one row per evaluation criterion. Narrative comparison is hard to act on. A table with red/yellow/green cells is easy to show a co-founder, a board member, or anyone else whose sign-off you need.
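If you would rather generate that table than hand-format it, a short script can emit a markdown table with one colored cell per criterion. Vendor names and scores here are placeholders:

```python
# 1 = green (clear answer), 2 = yellow (hedged), 3 = red (deflection).
CELL = {1: "🟢", 2: "🟡", 3: "🔴"}

criteria = ["Data retention", "Lock-in / export", "Support SLA",
            "Pricing changes", "Red flags"]
vendors = {                      # placeholder scores per vendor
    "Vendor A": [1, 2, 1, 3, 2],
    "Vendor B": [2, 1, 1, 1, 1],
}

print("| Criterion | " + " | ".join(vendors) + " |")
print("|" + "---|" * (len(vendors) + 1))
for i, criterion in enumerate(criteria):
    cells = " | ".join(CELL[v[i]] for v in vendors.values())
    print(f"| {criterion} | {cells} |")
```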
Specify the red-flag threshold. When asking AI to flag concerns, name the threshold explicitly. "Flag any clause where the vendor retains discretion to change terms with less than 60 days notice" produces a findable answer. "Flag any concerning clauses" produces a list that includes minor formatting issues alongside material risks. The threshold narrows the output to what you actually need to act on.
The general compliance non-negotiables for SMB buyers
This section is short because the rule is simple, but it is the most important section in this guide.
Do not put any of the following into the consumer tier of any AI platform:
- Customer PII: names, emails, phone numbers, addresses, purchase history, account numbers
- Employee records, performance reviews, compensation data, or anything covered by state employment privacy laws
- Proprietary business information: pricing models, client lists, unreleased product plans, trade secrets
- Financial records: P&L statements, bank account details, tax return data
- Any information subject to a client confidentiality agreement or non-disclosure agreement you have signed
- Health information of any kind, for customers, employees, or anyone else
The practical workflow that respects these rules: build your templates, prompt libraries, and workflow scaffolding on the consumer tier using anonymized examples. Test the platform's capabilities with placeholder data: "Client X," "Vendor Y," "Product Z." Move to the Business or Enterprise tier, with a signed DPA, before any real business data enters the system. The tier upgrade pays for itself the first time a customer asks where their data goes.
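One way to enforce the placeholder rule during trial testing is a substitution map you run over every document before it goes anywhere near the consumer tier. A minimal sketch; the names in the mapping are hypothetical, and this only catches identifiers you list, so treat it as a guardrail for trial testing rather than a substitute for the tier upgrade:

```python
import re

# Hypothetical mapping from real identifiers to trial-safe placeholders.
# Keep the real mapping outside the AI tool entirely.
PLACEHOLDERS = {
    "Acme Corporation": "Client X",
    "Northwind Supply": "Vendor Y",
    "RoadRunner Pro": "Product Z",
}

def anonymize(text: str) -> str:
    """Replace known identifiers with placeholders before any upload."""
    for real, placeholder in PLACEHOLDERS.items():
        text = re.sub(re.escape(real), placeholder, text, flags=re.IGNORECASE)
    return text

print(anonymize("Proposal for Acme Corporation covering RoadRunner Pro."))
# -> "Proposal for Client X covering Product Z."
```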
State privacy laws are catching up to the AI space faster than most SMB owners realize. California, Colorado, Texas, and Virginia all have active consumer privacy regulations that treat AI platforms as data processors when they handle consumer information. The DPA is what makes that relationship defensible under those laws.
If your vendor has signed a Business or Enterprise agreement with a Data Processing Addendum specific to your account, the rules on what data can flow through the system are different. Ask your legal counsel or IT lead what is covered. Do not assume the Business tier automatically covers every data type. Read the DPA.
When NOT to use an AI vendor evaluation tool
The evaluation framework in this guide covers the questions. It does not cover every situation where the questions are not enough.
Skip the AI-assisted evaluation, or slow down significantly, for:
- Any contract where personal injury or product liability is a risk. If the AI tool is involved in safety-critical decisions (medical, legal, engineering, construction), the evaluation needs a subject-matter expert, not a checklist.
- Regulated data environments without legal counsel sign-off. HIPAA, GLBA, FERPA, SOC 2, PCI-DSS: if your environment is subject to any of these, the DPA review needs a lawyer who knows that regulatory frame, not a founder with a contract template.
- Contracts above $25,000 annually where the legal review cost is trivially small relative to the exposure. At this spend level, a legal review of the full contract pays for itself on the first clause the lawyer catches.
- Any vendor who refuses the DPA conversation or claims a DPA is not necessary. That is not a vendor who takes data protection seriously. That is a vendor who wants to keep their data handling options open. Stop the evaluation and move to the next candidate.
A simple rule: the vendor evaluation scorecard in this guide is an unfair advantage on the 80% of AI vendor decisions where the stakes are manageable and the data is non-regulated. Trust legal counsel for the 20% where the contract has regulatory, financial, or safety weight that exceeds what a structured checklist can surface.
The quick-start evaluation template
Here is the prompt scaffold that works for most SMB vendor evaluations. Copy it, fill in the brackets, paste it into your preferred AI assistant before the vendor call.
I am evaluating [Vendor Name] for [specific use case: e.g., AI-generated proposals for a 10-person services firm]. I will be uploading customer information including [describe data types: e.g., client company names, project scopes, pricing discussions]. Before I sign, I need to evaluate them on five dimensions: data retention and DPA availability, model lock-in and exit path, support quality and SLA commitments, pricing-change mechanisms in the contract, and red flags in the contract terms.
Help me prepare the five questions I should ask the vendor directly, calibrated to my use case. Then, after I share the vendor's answers, score each on a 1-3 scale and flag any answers that match these red-flag patterns: [list the patterns from Task 5].
Also flag any language in the following contract sections that gives the vendor unilateral discretion over data, pricing, or terms: [paste contract sections].
That is the scaffold. Adapt the use case and data types to your situation. Run the same scaffold against every vendor you are evaluating so the comparison stays apples-to-apples.
For recurring vendor reviews (annual renewals, new tool evaluations): store the scaffold and the previous scorecard in your vendor management system. The second evaluation is faster than the first because you have a baseline to compare against.
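If your vendor tracking lives in files rather than a dedicated system, a dated record per evaluation is enough to give the second review its baseline. The field names below are suggestions, not a standard:

```python
import json
from datetime import date

# Suggested record shape; extend with whatever your renewals need.
record = {
    "vendor": "Vendor A",
    "evaluated_on": date.today().isoformat(),
    "use_case": "AI-generated proposals",
    "scores": {"data_retention": 1, "export_path": 2, "support_sla": 1,
               "pricing_changes": 2, "red_flags": 1},
    "negotiated_clauses": ["10% renewal price cap", "30-day data deletion"],
}

with open(f"scorecard-{record['vendor'].replace(' ', '-')}.json", "w") as f:
    json.dump(record, f, indent=2)
```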
Bigger wins beyond the pre-signature vet
Once you have a signed contract with a vendor who passed the evaluation, the evaluation framework becomes a long-term asset in three ways.
A vendor scorecard library that compounds over evaluations. Every AI tool your business evaluates generates a scorecard. After three or four evaluations, you have a comparison baseline that makes the next one faster. You also have documented reasoning for why you chose the vendor you did, which matters if a team member or board member asks six months later why a different vendor was not considered.
A mid-contract review trigger that catches pricing and capability changes early. Set a calendar reminder at the 6-month mark of any annual contract to re-run the data retention and pricing-change questions with the vendor. AI platforms change their terms more frequently than most enterprise software vendors. A mid-contract review catches changes before you are surprised at renewal. If the vendor has materially changed their data practices or pricing structure, the review gives you time to negotiate or plan an exit before the renewal commitment.
An internal AI use policy that protects the business as the toolset grows. The data non-negotiables section above is the seed of an internal AI use policy. After you have signed two or three AI vendor contracts, formalize those non-negotiables into a written policy: which data categories can and cannot go into AI tools, which tier is required before regulated data can flow through a platform, and who has authority to sign new AI vendor contracts. A written policy is also the document your professional liability carrier and your customers will ask for as AI use becomes standard practice in small business operations.
A negotiation starting point for the next contract. Every clause you negotiate on the current contract is a template for the next one. If you successfully negotiated a pricing cap, a data deletion timeline, or a DPA amendment, save the specific language. The next vendor will see a buyer who knows what they are asking for, which changes the negotiation dynamic in your favor.
The small-business AI consulting connection
Vendor vetting is one task inside a larger AI adoption question every SMB is navigating right now: which tools are worth the spend, which workflows change with AI in the picture, and what a defensible, cost-effective AI stack looks like for a business at your stage. Getting the pre-signature evaluation right is important, but it is the front door, not the whole building.
The structural question, covered in AI Consulting for Small Business, is which categories of your business operations benefit from AI, which carry real risk if implemented wrong, and how to sequence adoption without disrupting what already works. That page lays out the full picture: where AI creates real operational value for SMBs, the adoption failure modes I see repeatedly, and what a consulting engagement looks like when it is scoped correctly for a small business budget.
Closing
The 20 minutes you spend on this evaluation before signing is not due diligence for its own sake. It is the difference between a vendor relationship that works for 24 months and one that turns into a renegotiation or a migration project 6 months in. The businesses I work with that skip the evaluation almost always spend more time and money fixing the contract terms after signature than they would have spent reading them before.
Run the five questions with your current vendor candidate this week. Use the RFP Analyzer at /rfp-analyzer to surface the clauses that need attention before the call. Then bring the scorecard results to the vendor conversation rather than relying on memory.
If you want to think through how AI fits into your business at the program level, not just the vendor-selection layer, AI Consulting for Small Business lays out the full picture and how an engagement works.
Let's talk about your AI stack
If you'd rather skip the how-to and have it shipped for you, that's what I do. Start a conversation and we'll figure out the fastest path to results.
Let's Talk