An AI agency typically sells packaged solutions with opaque pricing and ongoing service contracts, while an AI consultant helps you evaluate, build, or buy tools with transparent cost breakdowns. The markup difference is stark: agencies often charge $15,000-$50,000 for "custom AI integrations" that are actually $200/month no-code tools with a branded interface. Custom AI means work specifically architected for your data structures and workflows, not off-the-shelf software with your logo on it. Here's how to tell the difference before you sign a contract.
AI Agency vs AI Consultant: What's the Real Difference?
The business model tells you everything. Agencies make money by selling solutions, which creates pressure to package and resell the same tools repeatedly with customization theater. Consultants make money by solving problems, which aligns their incentive with finding the right answer, even if that's "buy this $50/month SaaS tool and configure it yourself."
Agencies often employ account managers and salespeople who don't do the technical work. Consultants typically have the person scoping your project also doing the implementation. This isn't a value judgment; it's a structural reality that affects pricing.
The average agency engagement includes a 40-60% margin on top of actual development costs, plus recurring revenue from hosting, maintenance, and "platform fees" that are often just repackaged third-party subscriptions. A consultant usually bills hourly or project-based with transparent pass-through costs for any tools or APIs.
What Does Custom AI Actually Mean?
Custom AI work means someone wrote code or configured systems specifically for your data model, business logic, and integration requirements. It doesn't mean you got a unique solution. It means the solution was built to fit your specific context.
Here's what qualifies as custom: a Python script that pulls data from your ERP system, transforms it according to your accounting rules, sends it to an LLM API with a prompt template you can edit, and writes the results back to your database. Here's what doesn't: a Zapier workflow that connects your Gmail to ChatGPT using pre-built connectors.
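The custom version described above can be sketched in a few dozen lines. This is a minimal illustration, not a production pipeline: the ERP field names, the tax-splitting rule, and the prompt template are hypothetical stand-ins, and the actual ERP and LLM API calls are omitted.

```python
# Sketch of custom glue code: transform ERP data per an accounting rule,
# then render an editable prompt template. Field names, the tax rule, and
# the template are hypothetical; real code would authenticate with your
# ERP and an LLM API using credentials you control.

PROMPT_TEMPLATE = (
    "Summarize this invoice for the finance team:\n"
    "Vendor: {vendor}\nNet amount: {net:.2f}\nTax: {tax:.2f}"
)

def transform_invoice(raw: dict, tax_rate: float = 0.08) -> dict:
    """Apply a (hypothetical) accounting rule: split gross into net + tax."""
    gross = raw["gross_amount"]
    net = round(gross / (1 + tax_rate), 2)
    return {"vendor": raw["vendor"], "net": net, "tax": round(gross - net, 2)}

def build_prompt(invoice: dict) -> str:
    """Render the prompt template you can edit, with transformed ERP data."""
    return PROMPT_TEMPLATE.format(**invoice)

record = {"vendor": "Acme Supply", "gross_amount": 1080.00}
prompt = build_prompt(transform_invoice(record))
print(prompt)
```

The point is not the specific logic; it's that code like this is an artifact you own and can hand to any developer, unlike a workflow locked inside a vendor's no-code account.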
The test is transferability. If you can't take the work and run it yourself (or hire someone else to maintain it), it's not custom. It's a service dependency. Real custom work produces artifacts you own: code repositories, documentation, API credentials in your name, system architecture you control.
Roughly 70% of "custom AI solutions" sold to mid-market companies are actually configuration of existing platforms. That configuration has value, but it shouldn't cost the same as writing software from scratch.
The $15K Integration That's Actually a $200 Workflow
This is the most common markup pattern. An agency demos a slick interface that connects your CRM to an AI assistant. They quote $15,000 for "custom integration development" plus $500/month hosting. You sign because you don't have technical staff to build it.
What you actually got: a Make.com or Zapier workflow using pre-built connectors, a $30/month ChatGPT API account, and a simple frontend built with a no-code tool like Softr or Bubble. Total actual cost to recreate: $200-400/month in subscriptions plus maybe 8-12 hours of configuration work.
The root cause isn't malice. It's misaligned incentives plus client technical illiteracy. The agency knows you can't audit their work, and they've packaged their configuration knowledge as development. They're not lying about doing work; they're misrepresenting the nature of that work.
The tell: ask to see the underlying architecture or a data flow diagram. If they show you a flowchart with app logos connected by arrows, that's a workflow automation tool. If they show you code, database schemas, or API endpoint documentation, that's actual development. The difference matters when you're paying development prices.
Questions to Ask Before Signing
- "Can you show me the data flow diagram and explain which parts are custom code vs. third-party platforms?"
- "If your company disappeared tomorrow, what would I need to keep this running?"
- "Can I export my workflows and run them on my own infrastructure?"
- "What happens to my data if I stop paying the monthly fee?"
The Proprietary AI Engine That's Just an API Wrapper
Agencies love talking about their "proprietary AI engine" or "custom-trained model." In 95% of mid-market cases, this is an OpenAI or Anthropic API call with a prompt wrapper and maybe some basic retrieval-augmented generation.
There's nothing wrong with using third-party LLMs. That's usually the right technical choice. The problem is charging custom model development prices for API integration work. Training an actual custom model costs $50,000-$500,000+ depending on scale. Calling someone else's API costs $0.50-$5 per thousand requests.
The root cause is conflating configuration with development. Writing a good prompt template and building a retrieval system requires skill, but it's not machine learning engineering. It's application development using ML as a component.
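To make the distinction concrete, here is roughly what a "proprietary AI engine" often amounts to under the hood: naive retrieval plus a prompt wrapper around someone else's hosted model. Everything here is a hypothetical toy, with the third-party API call stubbed out as a request payload, and keyword overlap standing in for real retrieval.

```python
# Toy illustration of a "proprietary engine": keyword retrieval plus a
# prompt wrapper. Documents and the model name are hypothetical; the
# actual network call to a hosted LLM is represented by the payload only.

KNOWLEDGE_BASE = [
    "Refunds are processed within 14 days of the return request.",
    "Enterprise plans include a dedicated account manager.",
    "API rate limits reset every 60 seconds.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Score documents by shared words with the question (toy RAG)."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_request(question: str) -> dict:
    """Assemble the payload that would be sent to a hosted LLM API."""
    context = "\n".join(retrieve(question))
    return {
        "model": "some-hosted-model",  # placeholder, not a real model name
        "messages": [
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    }

req = build_request("How fast are refunds processed?")
print(req["messages"][0]["content"])
```

Skilled work went into real versions of this pattern, but none of it is model training. There are no weights, no training runs, and nothing that survives the upstream vendor changing its terms.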
The tell: ask what training data was used, what the model architecture is, or whether it runs without internet access. A real custom model has training datasets, model weights stored somewhere, and can run offline if needed. An API wrapper stops working when the internet goes down or the vendor changes their pricing.
Technical Questions That Expose API Wrappers
- "What model architecture are you using, and where are the weights hosted?"
- "Can you show me the training data or explain the fine-tuning process?"
- "Does this work offline, or does it require internet access?"
- "What happens if OpenAI/Anthropic raises prices or deprecates the model you're using?"
Monthly SaaS Costs Billed as One-Time Development Fees
This markup pattern hides ongoing subscription costs inside "development" fees. You pay $25,000 for a "custom AI solution buildout," but buried in the contract is $800/month for "hosting and platform fees" that turn out to be $200/month in actual tool costs plus a 75% margin.
The math works out badly for you. Over three years, you'll pay $25,000 + ($800 × 36) = $53,800 for something that could have cost $7,200 in direct tool subscriptions ($200/month) plus maybe $5,000-10,000 in actual setup work.
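The three-year comparison runs like this, using only the figures from the example above:

```python
# Three-year cost comparison, using the figures from the example above.
agency_total = 25_000 + 800 * 36     # buildout fee + monthly "platform fees"
diy_subscriptions = 200 * 36         # direct tool subscriptions over 3 years
diy_setup_high = 10_000              # upper end of the setup-work estimate
diy_total_high = diy_subscriptions + diy_setup_high

print(agency_total)                   # 53800
print(diy_total_high)                 # 17200
print(agency_total - diy_total_high)  # 36600 overpaid, even at the high end
```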
The root cause is opacity in cost structure plus bundling. Agencies present a single price without separating one-time labor from recurring platform costs. This makes it impossible to comparison shop or understand what you're actually paying for.
The tell: request a line-item breakdown separating development labor hours, third-party platform subscriptions, API usage costs, and agency margin. Any vendor who won't provide this breakdown is hiding something. Honest vendors will show you exactly where your money goes.
The Demo That Only Shows UI, Never the Backend
Watch what agencies show you during demos. If they only demonstrate the user interface and avoid showing how data moves between systems, that's a red flag. The "custom" work is probably just a branded frontend on commodity infrastructure.
Real custom integrations have interesting backends: data transformation logic, error handling, API orchestration, database schemas designed for your specific use case. Agencies proud of their technical work will show you these components. Agencies hiding commodity tools will keep the demo surface-level.
The root cause is that the actual value is in the UI/UX design and branding, not the technical implementation. That's fine for some use cases, but you shouldn't pay software development rates for design work.
The tell: request to see how your data moves between systems, what happens when a third-party API is offline, or a walkthrough of the admin/configuration panel. If they're reluctant or the backend is just a settings page for a white-labeled SaaS tool, you're paying custom prices for commodity infrastructure.
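One backend detail worth probing is exactly that offline question. Real integration code handles upstream failures explicitly. Here is a minimal sketch of retry-with-backoff and a graceful fallback, with a simulated flaky API standing in for the real third-party service; the delays are kept tiny so the sketch runs instantly.

```python
import time

def call_with_retry(api_call, retries: int = 3, base_delay: float = 0.01):
    """Retry a flaky third-party call with exponential backoff, then fall back.

    `api_call` is any zero-argument function. A real version would log each
    failure and alert someone instead of silently degrading.
    """
    for attempt in range(retries):
        try:
            return api_call()
        except ConnectionError:
            time.sleep(base_delay * (2 ** attempt))
    return {"status": "degraded", "answer": None}  # graceful fallback

# Simulated third-party API that fails twice, then recovers.
attempts = {"n": 0}
def flaky_api():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("upstream offline")
    return {"status": "ok", "answer": "42"}

print(call_with_retry(flaky_api))  # {'status': 'ok', 'answer': '42'}
```

A vendor doing real development can show you where logic like this lives in their system. A white-labeled SaaS tool simply stops working, and the vendor shrugs.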
What Actual Custom AI Work Looks Like at Mid-Market Budgets
Real custom AI work at the $20,000-$100,000 budget level includes specific technical artifacts you can verify. Here's what you should expect to receive.
Real Data Integration, Not Just CSV Uploads
Custom work connects directly to your systems via APIs or database connections. You should see code that authenticates with your ERP, CRM, or data warehouse using credentials you control. CSV upload interfaces are fine for prototypes, but production systems need automated data pipelines.
A properly integrated system handles incremental updates, not full data refreshes. If the solution requires you to manually export and upload files regularly, it's not really integrated.
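Incremental updates are usually implemented with a "last synced" watermark. The sketch below shows the idea with an in-memory list; the record shape and `updated_at` field are hypothetical, and a real pipeline would push this filter into the source system's query rather than scanning records in memory.

```python
from datetime import datetime

# Sketch of incremental sync via a watermark: pull only records changed
# since the previous run. Record shape and `updated_at` are hypothetical.

def incremental_pull(records: list[dict], last_sync: datetime) -> list[dict]:
    """Return only records changed since the previous sync."""
    return [r for r in records if r["updated_at"] > last_sync]

source = [
    {"id": 1, "updated_at": datetime(2024, 5, 1)},
    {"id": 2, "updated_at": datetime(2024, 5, 20)},
    {"id": 3, "updated_at": datetime(2024, 5, 25)},
]

watermark = datetime(2024, 5, 15)
changed = incremental_pull(source, watermark)
print([r["id"] for r in changed])  # [2, 3]

# After a successful run, advance the watermark to the newest record seen.
watermark = max(r["updated_at"] for r in changed)
```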
Workflow Rewiring That Changes Internal Processes
Custom AI implementations change how your team works, which means the vendor needs to understand your current processes deeply. If they didn't spend significant time mapping your workflows before proposing a solution, they're selling a pre-built product.
You should receive documentation showing before/after process flows, training materials for your team, and clear handoff procedures. The implementation should reduce manual work by at least 30-50 hours per month to justify typical mid-market pricing.
Capability Transfer So Your Team Owns the Logic
The best custom work includes knowledge transfer. You should receive code repositories, architecture documentation, and training so your team (or a future vendor) can maintain and modify the system. Vendors who refuse to provide this are building dependency, not solutions.
Ask for read access to the code repository during the sales process. If they won't commit to providing it, walk away. You're paying for custom development; you should own the output.
Transparent Cost Breakdown
A legitimate custom AI project shows you exactly what you're paying for: labor hours by role and rate, third-party platform subscriptions, API usage estimates, hosting costs. The breakdown should separate one-time costs from recurring expenses.
For context, typical mid-market custom AI projects run 100-300 development hours at $150-300/hour depending on complexity, plus $200-2,000/month in ongoing platform and API costs. If the numbers don't roughly align with this, ask why.
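Those ranges make a quick sanity check possible. The bounds below come straight from the figures above (100-300 hours at $150-300/hour plus $200-2,000/month); treat it as a rough plausibility filter, not a pricing model.

```python
# Sanity-check a quote against the ranges quoted above: 100-300 development
# hours at $150-300/hour, plus $200-2,000/month in platform and API costs.

def plausible_range(months: int = 12) -> tuple[int, int]:
    """Rough low/high first-year bounds for a mid-market custom AI project."""
    low = 100 * 150 + 200 * months
    high = 300 * 300 + 2_000 * months
    return low, high

def quote_in_range(quote: int, months: int = 12) -> bool:
    lo, hi = plausible_range(months)
    return lo <= quote <= hi

low, high = plausible_range(12)
print(low, high)                # 17400 114000
print(quote_in_range(25_000))   # True
print(quote_in_range(250_000))  # False: ask for a line-item breakdown
```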
How to Vet AI Vendors Before You Sign
Start with technical due diligence, even if you're not technical yourself. Request a technical architecture document before signing any contract. It should show system components, data flows, third-party dependencies, and integration points in detail.
Ask for client references who have similar technical complexity, not just similar company size. Talk to their technical staff, not just the executive who signed the contract. Ask what surprised them during implementation and what they wish they'd known upfront.
Request a proof-of-concept with your actual data before committing to a full buildout. A two-week paid pilot project costing $5,000-$10,000 will reveal whether the vendor can actually deliver custom work or just configure existing tools. This is standard practice for purchases above $30,000.
Review the contract for lock-in provisions. You should be able to terminate with 30-60 days notice and receive all code, documentation, and data in usable formats. Contracts that don't include clear exit terms are designed to trap you in ongoing service fees.
For more guidance on what to watch for during vendor conversations, see AI vendor demo red flags for mid-market companies. The patterns are consistent across vendors.
AI Agency Pricing Transparency: What Fair Markup Looks Like
Some markup is reasonable. Agencies have overhead, they've invested in developing reusable components, and they take on project risk. A 30-50% margin on services is standard in professional services. The problem is 300-500% margins presented as custom development.
Fair pricing shows you the underlying costs. If they're using a $500/month platform, they should tell you that and charge $750-1,000/month including their margin and support. If they're calling the same platform a "$25,000 custom buildout," that's not margin. That's misrepresentation.
Transparent vendors will explain their pricing model upfront: "We charge $200/hour for development, we estimate 120 hours for your project, and you'll have ongoing costs of approximately $400/month for platforms and APIs." Opaque vendors give you a single number and resist breaking it down.
The mid-market AI consulting cost for legitimate custom work typically ranges from $30,000-$150,000 depending on complexity, as detailed in what AI consulting actually costs mid-market companies. Quotes significantly below this range are probably using mostly pre-built tools, which is fine if priced accordingly.
Build vs Buy: When Off-the-Shelf Tools Are the Right Answer
Look, sometimes the rebadged SaaS tool is actually the right solution. You just shouldn't pay custom development prices for it. Many mid-market AI use cases are well-served by existing platforms like Intercom's AI agent, HubSpot's AI tools, or specialized vertical SaaS.
The decision framework is simple: if your process matches what the tool was built for, buy the tool directly. If your process requires significant modification to fit the tool, that's technical debt you're taking on. If the tool can't handle your process at all, you need custom development.
A good consultant will tell you when to buy instead of build. An agency with a pre-built solution to sell will find reasons why their platform is the perfect fit, even when it isn't. This incentive difference is why the consultant model often serves mid-market buyers better for initial evaluation.
For companies trying to start small, AI pilot projects can help you test both build and buy approaches before committing significant budget.
The AI vendor market is full of markup patterns that exploit technical knowledge gaps. Your defense is asking specific questions about architecture, data flows, cost breakdowns, and ownership before signing contracts. Real custom work produces artifacts you can inspect, modify, and transfer. Rebadged commodity tools produce dependency on the vendor's ongoing service. Know which one you're buying, and pay accordingly. The vendors who resist transparency are the ones charging custom prices for commodity work.