AI Vendor Red Flags: A Field Guide for Non-Technical Buyers

You don't need to be technical to spot a bad AI vendor. You need a list of patterns. Over 25 years of working with 500+ businesses, I've watched the same red flags show up again and again, and the companies that ignored them paid in months of wasted budget and internal trust. I'm Jake McCluskey, and this is the field guide I wish every owner had before their next vendor call. There's no diplomacy here, because diplomacy is what got a lot of my clients into bad contracts in the first place.
No single red flag means the vendor is a scam. Good vendors occasionally trip one. But when a vendor trips three or more, walk. You're looking at a pattern, not a personality quirk.
What are the biggest AI vendor red flags to watch for?
The biggest AI vendor red flags cluster around three themes: opacity about the product, pressure on the sales process, and unwillingness to be measured. If a vendor is vague, pushy, and allergic to accountability, you've seen enough. Close the tab.
Here are eleven specific patterns I see most often, any three of which should end the conversation.
- No real customer references you can call.
- Opaque or "custom" pricing that never produces a written quote.
- Locked-in annual contracts with no out clause.
- "Proprietary" models that are actually just wrapped ChatGPT or Claude.
- Sales-heavy demo with no proof of concept on your data.
- Vague data policy, or a policy that uses phrases like "we may use".
- Unclear ownership of the outputs the tool generates.
- Resistance to a security review or SOC 2 question.
- Urgent closing tactics, fake discount deadlines, and "we only take five clients a quarter".
- Heavy reliance on buzzwords, light on measurable outcomes.
- Founders or executives nowhere to be found, only sales reps.
I'll walk through the ones that do the most damage.
Why is opaque pricing a red flag for AI vendors?
Opaque pricing is a red flag because it tells you the vendor is optimizing for what they can extract, not what you're worth. A serious tool with a serious product has a pricing page. The ones that hide behind "let's get on a call to discuss investment" are usually adjusting the number based on how much money they think you have.
I've watched the same tool quote $18,000 annually to one client and $52,000 to another, same seats, same features, same timeline. The only difference was that the second client was a bigger company. That's not pricing. That's hunting.
Get the number in writing before the second call. If they won't give you a range without a meeting, they're training you to invest time so you feel committed. That's not a strategy you want on the other side of the table.
How do you spot wrapped ChatGPT sold as proprietary AI?
You spot wrapped ChatGPT by asking three questions: what model powers your tool, what happens if OpenAI changes their pricing, and what's proprietary about your layer. If the vendor stumbles on any of these, their "proprietary AI" is probably a prompt template sitting on top of someone else's model.
There's nothing inherently wrong with building on GPT-4 or Claude. A lot of great tools do. The red flag is pretending you didn't, while charging a premium for the pretense. A vendor who says, "we use GPT-4 under the hood with our own fine-tuning and a proprietary orchestration layer that does X and Y," is being honest. A vendor who says, "we've built our own proprietary AI from the ground up," and then can't explain what that means technically in three sentences, is lying or confused. Neither is the vendor you want.
Ask flat out: "which foundation models do you use, and which parts of the product are your own work?" The answer tells you what you're really paying for.
Why are locked-in contracts a problem in AI?
Locked-in contracts are a problem in AI specifically because the category is moving too fast to commit a year to any single tool. The right tool in January might be second-best by June. An annual contract with no out clause makes you, not the vendor, absorb the cost when the vendor falls behind.
I push hard for month-to-month, or at minimum, a 90-day out clause on any annual deal. Good vendors will work with you on this. The ones who won't are telling you that their retention is propped up by contracts, not by product. That's a tell you should take seriously.
If the vendor insists on 12 months with no out, the discount should be real, meaning 25 to 40 percent below the monthly equivalent. Anything less and the math doesn't justify the risk.
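To make that threshold concrete, here's a minimal sketch of the arithmetic. The function names and the $1,500/month example figure are mine, not from any vendor quote; the 25 percent floor is the number from the paragraph above.

```python
def annual_discount_pct(monthly_price: float, annual_price: float) -> float:
    """Percent discount of an annual contract versus paying month to month."""
    monthly_equivalent = monthly_price * 12
    return round(100 * (1 - annual_price / monthly_equivalent), 1)

def discount_justifies_lock_in(monthly_price: float, annual_price: float,
                               floor: float = 25.0) -> bool:
    """True only if the annual discount clears the 25 percent floor."""
    return annual_discount_pct(monthly_price, annual_price) >= floor

# Illustrative numbers: a $1,500/month tool quoted at $13,500/year.
# Month to month would cost 12 x 1,500 = $18,000, so the discount
# is 25 percent, which is right at the line.
print(annual_discount_pct(1500, 13500))         # 25.0
print(discount_justifies_lock_in(1500, 13500))  # True

# The same tool quoted at $16,200/year is only a 10 percent discount:
print(discount_justifies_lock_in(1500, 16200))  # False
```

If the quote fails this check, the annual lock-in is the vendor's insurance policy, not your savings.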
What does it mean when a vendor resists a security review?
When a vendor resists a security review, it usually means they haven't done the work. Real vendors have a security page, a SOC 2 or ISO 27001 status, and a named security contact. If getting basic answers from them is a three-week chase, that's the experience you'll have every time there's an actual problem.
It doesn't matter if you're a 12-person company without a formal IT department. You should still ask. The questions I ask every vendor:
- Where does my data sit, which cloud, which region?
- Is it encrypted at rest and in transit?
- Who on your team has access to my data, and under what conditions?
- Are you SOC 2 Type II, or pursuing it?
- What's your breach notification timeline?
A vendor who can answer these in 48 hours, in writing, is a vendor you can trust. A vendor who goes quiet, sends you marketing PDFs instead of answers, or says "don't worry, we handle all of that," is telling you exactly how seriously they take your data.
Why are urgent closing tactics a red flag?
Urgent closing tactics are a red flag because good tools don't need manufactured urgency. If the tool solves your problem, it will still solve your problem on Monday. Discounts that expire in 48 hours, "only five slots left this quarter," and "my VP won't approve this pricing after Friday" are all versions of the same game.
The move is simple. Tell the rep, politely, that you don't make purchase decisions under pressure. If the discount is real, they'll honor it when you're ready. If the discount disappears the moment you push back, you just learned the real price, and you didn't pay a dime to find out.
This is also where you spot a vendor's culture. High-pressure sales cultures tend to produce high-pressure customer success cultures. You don't want either one touching your business.
Why does a missing founder matter in AI vendor selection?
A missing founder matters because in early-stage AI, the founder is usually the one with the judgment and technical depth to make the product work. If every conversation goes through a sales rep who punts technical questions to "our team," the founder has either checked out or is focused somewhere other than your deal.
I'm not saying the founder needs to be on every call. I'm saying that when you ask a detailed question about the product's architecture, security model, or roadmap, you should be able to get an answer from someone who actually knows, within a week. If you can't, the company doesn't have the depth you need, or they don't think you're worth the founder's time. Either is a reason to move on.
One specific test: ask to speak with the product lead or a technical co-founder for 20 minutes before signing. If the answer is "we don't typically do that," you're about to buy a relationship with a sales org, not a product org.
What are the smaller red flags I shouldn't ignore?
The smaller red flags are easy to wave off individually but powerful when you see three or more together. Watch for case studies that are vague on numbers, testimonials from people you can't find on LinkedIn, a support portal that requires three clicks to find a real human, and pricing that jumps unpredictably between the marketing page and the quote.
Also pay attention to how a vendor handles the word "no." When you push back on a term, the healthy response is a counter-proposal or a clean walkaway. The unhealthy response is escalation through their chain, multiple follow-up emails from different reps, and sudden discovery of new discount tiers. Vendors who grind you into a yes will also grind you into a renewal you don't want.
Trust your gut on the conversation quality too. If you leave a vendor call feeling confused about what the product actually does, that's not your fault. Good vendors make the product clear in 30 minutes. If you need four calls just to understand the offering, the offering is probably a mess.
How many red flags is too many in a single vendor?
Three is the line. One red flag is a bad day. Two is a pattern worth asking about. Three is a pattern you don't want to find out more about on your dime. The cost of walking away is a missed opportunity. The cost of signing with three red flags is usually 6 to 12 months of wasted budget and team attention.
There are great AI vendors out there. I work with some of them. The good ones welcome hard questions, price transparently, don't manufacture urgency, and treat your data like it matters. When you find one of those, sign quickly and build the relationship. The bad ones burn themselves out running this playbook on smarter buyers each year, so the filter gets easier over time.
If you're staring at a vendor right now and can't tell whether the flags are yellow or red, I'd rather have a 20-minute conversation than watch you sign something you'll regret. You can book a discovery call, or start with a free audit of the tools you're already paying for to see which ones are earning their keep. Either way, get clear before you commit.