AI in 90 Days: What Mid-Market Companies Should Actually Do About AI Right Now

Executive summary
A McKinsey survey of 1,491 organizations published in March 2025 found that 78% are now using AI in at least one business function — yet only 21% have meaningfully redesigned a workflow around it, and just 1% describe their AI rollouts as "mature." [1] In other words: nearly four out of five organizations have made an AI move, and roughly four out of five of those moves haven't redesigned a single workflow, which means nothing you could put on a board slide.
The companies winning right now aren't the ones spending the most or hiring fastest. They're the ones who sidestepped three specific traps, started small with a buy-before-build instinct, and treated AI as a layer across the business instead of a department inside it. This paper is the 90-day plan I give to every mid-market client who asks me where to start. It is built on the assumption that you already lost the "wait and see" window — that was 2023 — and the question now is execution sequence.
Why this matters now, not next year
There is a five-year-window argument and a six-month-window argument for AI adoption. They reach the same conclusion from opposite directions.
The five-year argument: this is the largest productivity shift since the internet, and the companies that figure out how to use it will compound the advantage. Goldman Sachs estimates generative AI could lift global GDP by roughly 7% over a decade. [2] You don't want to be the company that figured out email in 2003.
The six-month argument is sharper and the one I see hurt mid-market companies more often. Your competitors are running AI-augmented sales teams right now. Their reps are sending personalized outreach at 4× the volume of yours. Their support team is closing tickets in half the time. Their content team is publishing weekly instead of quarterly. None of this is theoretical. I've seen it inside companies in your category.
The cost of waiting isn't that AI gets harder to adopt. It's that your competitors finish the learning curve first and use the gap to take your customers. A 2024 BCG analysis of 1,400 executives found AI leaders are growing revenue 1.5× faster than peers and pulling ahead at an accelerating rate. [3] That gap doesn't close on its own.
The three traps
Almost every mid-market AI rollout I've cleaned up failed in one of three specific ways. None of them are about technology. All three are about decisions you make in the first 30 days.
Trap 1: Hiring the AI person before you know the AI work
The instinct, when AI feels urgent, is to hire someone to "own AI." So you post a job for an AI Lead at $180–250k base, you spend three months filling it, and then this person spends another three months trying to figure out what they were hired to do. By month nine you've spent roughly $200k and shipped two pilot projects that nobody uses.
The pattern that works: pick the first three workflows you want AI in before you hire anyone. Write them down. Now you can either bring in a fractional consultant for 90 days to deliver them, or hire a real person against a real job description instead of a vague one. Either path is faster and cheaper than hiring first and scoping later.
A mid-market financial services client of mine hired an AI Director in early 2024. By the end of Q2 2025, they had three slide decks and zero deployed automations. They paused the role, brought us in for a 60-day sprint, and shipped two production workflows in nine weeks for less than two months of that director's salary.
Trap 2: Building before buying
Engineers like to build. Founders like to build. If you have technical leadership in the room, the default conversation will be about what you can build with the OpenAI or Anthropic API. Resist this for at least 60 days.
The reason is not that custom is bad. The reason is that 80% of the AI use cases that move your numbers are already shippable as off-the-shelf SaaS at a tenth the cost. Sales engagement, customer support, meeting notes, document processing, internal search — these all have mature commercial tools you can wire in this month. Custom only earns its keep when the workflow is a genuine differentiator for your business and the off-the-shelf tools don't cover it.
A useful rule: if you can describe the use case to a vendor and the vendor says "yes, we do that," buy it. Save your custom budget for the workflow that doesn't have a name yet.
Trap 3: Treating AI as a department instead of a layer
The most expensive trap. A "Center of AI Excellence" or an "AI Strategy Team" sounds like leadership; it functions like a moat between AI and the people who actually do the work. Everyone outside the department waits for the department's blessing. Adoption stalls.
The pattern that works treats AI like the internet did in 2002: a layer that touches every team. Sales has AI tools. Support has AI tools. Finance has AI tools. Marketing has AI tools. The team running the AI rollout is small (sometimes one person), focused on enablement and procurement, and reports to the COO or the CEO — not to IT.
A finding from the 2025 Stanford AI Index worth keeping in mind: companies that embed AI across functions show meaningfully larger productivity gains than those that centralize it. Centralization is the natural instinct of a company that wants control. Layering is the instinct of a company that wants results.
What good looks like
Here's a real example, anonymized.
A 240-person professional services firm came to me in fall 2024 with the standard symptoms: a "Director of AI" role unfilled for six months, two stalled custom-build projects, leadership tension, and a board asking quarterly what they were doing. We ran a 90-day diagnostic and reset, and by Q2 2025 the firm had three production workflows live:
- AI-augmented proposal drafting. Average proposal turnaround dropped from 11 days to 3. Win rate stayed flat — the time savings are the win, freeing senior staff for higher-value work.
- Inbound lead qualification. A scoring layer in front of the existing CRM cut the SDR's manual triage time by roughly 60%, and the SDR moved fully to outbound. Qualified pipeline grew about 35% in a quarter without adding headcount.
- Client-meeting summaries with action-item extraction. A meeting-notes tool with a custom prompt produced same-day client recaps. Client NPS rose six points in two quarters.
None of these were custom builds. All three were commercial tools (a sales engagement platform, a CRM AI add-on, a meeting-notes service) wired into existing systems with about three weeks of light internal work each. Total annual run cost: under $35,000 across all three. Total productivity gain: estimated at roughly two FTEs of recovered time.
The firm did not hire an AI lead during that period. They cancelled the search.
The 90-day plan
This is the plan I run with mid-market clients. It assumes you have a real budget but limited internal AI expertise — which is roughly 90% of the companies that walk into my pipeline.
Days 1–14: Pick three workflows
In the first two weeks, you don't talk about tools, vendors, or build-versus-buy. You sit with each department head — sales, support, ops, marketing, finance — and you ask one question: where does someone on your team spend an hour a day on something a thoughtful intern could do?
Write the answers down. Pick the three with the clearest before-and-after picture. These become your first deployments. The rule is concreteness: "respond to inbound leads faster" is not a workflow. "Generate a 200-word draft response within 5 minutes of inbound form submission" is.
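The concreteness rule can be made mechanical. The sketch below (all field names and the qualifying checks are my own illustration, not a client artifact) treats each candidate workflow as a spec that only qualifies if every concrete field can be filled in:

```python
from dataclasses import dataclass

@dataclass
class WorkflowSpec:
    """One candidate workflow, forced into concrete terms."""
    name: str
    trigger: str                 # the concrete event that starts it
    output: str                  # the concrete artifact it produces
    sla_minutes: int             # how fast the output must appear
    baseline_hours_week: float   # human time currently spent on it

    def is_concrete(self) -> bool:
        # "Respond to inbound leads faster" fails: no trigger, no SLA.
        return all([self.trigger, self.output,
                    self.sla_minutes > 0, self.baseline_hours_week > 0])

# The example from the text, written as a spec rather than a wish:
lead_response = WorkflowSpec(
    name="Inbound lead draft response",
    trigger="inbound form submission",
    output="200-word draft response",
    sla_minutes=5,
    baseline_hours_week=5.0,
)
print(lead_response.is_concrete())
```

A workflow that can't fill in a trigger, an output, and an SLA isn't ready to be one of your first three.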
Days 15–45: Buy the boring stuff
For each of the three workflows, run a fast vendor scan (3–5 commercial tools per workflow), score them against your real-world inputs (real emails, real proposals, real tickets), and pick the one that wins. Negotiate annual deals with cancel clauses. Avoid 3-year contracts; AI vendor pricing is moving fast and you want the option to swap.
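The vendor scan is just a weighted scorecard. A minimal sketch, assuming hypothetical criteria, weights, and vendor names — the point is that every candidate tool is rated against the same real-world inputs and the weights reflect what actually matters to you:

```python
# Criteria and weights are illustrative assumptions, not a standard.
WEIGHTS = {
    "output_quality": 0.4,        # judged on your real emails/tickets
    "integration_fit": 0.3,       # wires into your existing CRM/helpdesk
    "price": 0.2,
    "contract_flexibility": 0.1,  # annual deal with a cancel clause
}

def score(ratings: dict) -> float:
    """Weighted score from 1-5 ratings per criterion."""
    return sum(WEIGHTS[c] * r for c, r in ratings.items())

candidates = {
    "Vendor A": {"output_quality": 4, "integration_fit": 5,
                 "price": 3, "contract_flexibility": 4},
    "Vendor B": {"output_quality": 5, "integration_fit": 2,
                 "price": 4, "contract_flexibility": 2},
}

ranked = sorted(candidates, key=lambda t: score(candidates[t]), reverse=True)
print(ranked[0])  # the tool that wins on your inputs, not on its demo
```

The scorecard also gives you a written record of why a vendor won, which helps when pricing moves and you revisit the decision at renewal.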
If at this stage one of the three workflows turns out to genuinely have no commercial option, defer it to Q2 and replace it with a fourth workflow that does. You're not building yet.
Days 46–75: Wire and train
This is the work. Connect the tools to your existing CRM, helpdesk, document store, or whatever the workflow lives in. Train the actual humans who will use the tool — and train them on what not to use it for, which is harder. Each workflow gets a written one-page runbook so the knowledge survives turnover.
The biggest mistake in this phase is treating tool deployment as an IT project. It is a change-management project. The tool is the easy part. The new working habit is the hard part.
Days 76–90: Measure honestly
Pick one before-and-after number per workflow and measure it for two weeks. Time-to-respond, win rate, ticket volume, hours-saved-per-rep — whatever maps to the original "where does someone spend an hour" question. Publish the result internally. Be willing to kill workflows that didn't move the number; this is rare but it does happen, and the credibility of killing one bad rollout buys you the budget for the next three.
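The measurement itself should be embarrassingly simple. A sketch using the proposal-turnaround figures from the case study above, plus a hypothetical kill threshold (the threshold value is an assumption you set per workflow):

```python
def pct_change(before: float, after: float) -> float:
    """Signed percent change; negative means the number went down."""
    return (after - before) / before * 100

# Proposal turnaround from the case study: 11 days -> 3 days.
delta = pct_change(11, 3)
print(f"{delta:.0f}%")  # publish this number internally

# A workflow that didn't move its number gets killed, not defended.
KILL_THRESHOLD = -10  # assumed: require at least a 10% improvement
                      # on a "lower is better" metric to keep it
keep = pct_change(48, 46) <= KILL_THRESHOLD  # hypothetical: 48h -> 46h
print(keep)
```

If the honest number doesn't clear the bar you set before deployment, that workflow goes on the kill list, and the next quarter's list gets its slot.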
By day 90 you should have three production AI workflows, three measured outcomes, and a draft list of the next three workflows for Q2. That's the rhythm. Three workflows per quarter, twelve per year, paid for out of measured productivity gains.
When to bring outside help
You can run this 90-day plan internally if you have a senior operator with bandwidth, an AI-curious leadership team, and someone who can credibly evaluate vendors without falling for demos.
If any of those is missing, an outside partner is faster and cheaper than the alternative. The math is direct: a 90-day engagement runs in the $30–60k range; a wrong AI hire runs $200k+ in salary and roughly 9 months of lost time before you even know it was wrong.
The job of an outside partner is not to be your AI department forever. It is to do the first 90 days correctly, document the playbook, and hand you a working layer plus the relationships to sustain it. Anyone proposing a 12-month "AI transformation engagement" is selling you the trap, not the plan.
If you want this plan mapped to your specific situation, book a free 30-minute audit call. I'll tell you what I'd do in your first 30 days, whether you hire us or not.
FAQ
How is this different from a standard digital transformation project? A digital transformation project is multi-year and led by IT. This is a quarter-by-quarter operating cadence led by the operator who owns the workflow. The unit of work is a single before-and-after number, not a strategy slide.
Should we wait for the AI hype to settle before investing? No. The hype cycle is largely about model capability headlines that don't affect your day-to-day. The capabilities that matter for mid-market workflows have been stable and shippable for 18+ months. Waiting costs you the head start, not the savings.
What's a realistic budget for the first 90 days? For a mid-market company (50–500 employees), a working first quarter typically runs $40–80k all-in: $15–30k in tooling (annualized), $20–40k in implementation help if you bring in a partner, and 10–15% of an internal operator's time. That's the order of magnitude. Spending materially more in the first quarter usually correlates with worse outcomes, not better.
How do we know we're picking the right workflows? Two tests. First, the operator running the workflow can describe the before-state in concrete time terms ("two hours every Tuesday"). Second, the after-state is measurable in the same units within 30 days. Workflows that fail either test should not be your first three.
What happens to jobs when AI workflows are deployed? The pattern across our client base: nobody loses a job, but some jobs change shape. The SDR who used to triage leads moves to outbound. The senior associate who used to draft proposals moves to client strategy. Productivity gains free senior time, which is where the revenue is. Companies that frame AI as a way to eliminate roles tend to get worse outcomes — adoption stalls because the people running the tools have a reason to want them to fail.
Do we need to worry about data security and compliance? Yes, and the answer is unglamorous: pick vendors with SOC 2 Type II reports, sign DPAs, restrict tools to non-PII workflows in the first quarter, and bring legal in early. Most mid-market AI security incidents I've seen weren't AI failures — they were employees pasting client data into a free consumer chatbot. Solve that with policy and a paid enterprise tool, not with delay.
When should we move from buying tools to building custom AI? When you have run three quarters of buy-first deployments, you have a clear picture of which workflows give your business a defensible advantage, and at least one of those workflows isn't covered by a commercial tool. Before that, custom AI usually costs more, ships slower, and produces something an off-the-shelf vendor will release six months later anyway.
Where can I read what your other clients have done? The case studies are at /case-studies. For deeper analysis on individual layers of the stack, see The Small Business AI Stack: What Earns Its Keep and The First Five AI Automations for a Service Business.
[1] McKinsey, "The state of AI: Global survey," March 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
[2] Goldman Sachs, "Generative AI could raise global GDP by 7%," April 2023. https://www.goldmansachs.com/insights/articles/generative-ai-could-raise-global-gdp-by-7-percent
[3] BCG, "AI Adoption in 2024: 74% of Companies Struggle to Achieve and Scale Value," October 2024. https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value