Most AI spend proposals die in finance, not because the idea is bad but because the numbers don't hold up to a CFO's first three questions. What assumptions did you make? How did you calculate the hours? Why do you think people will actually use it? If you can't answer those crisply, the answer is no, or worse, approval for a limited pilot that never expands.
This guide builds the one-page ROI defense that survives those questions. It's structured around the four discount factors your CFO will apply anyway, whether you account for them or not. Better to apply them yourself, with honest estimates, than to let finance apply them at the table in a way that kills the proposal entirely.
By the end of this walkthrough, you'll have a one-page ROI model you can hand to finance, a before-and-after baseline you measured yourself, and a 90-day reporting plan that tells finance what confirmation looks like. Before you start, the companion white paper AI ROI Defense: 6 Numbers Your Board Wants to See goes deeper on board-level framing if you need to build the case above your CFO.
Why this matters for ops and marketing directors specifically
The ops or marketing director is almost always caught between the enthusiasm for AI and the skepticism of finance. You've seen the time savings. You've watched a 45-minute task take 8. But your CFO lives in a world where productivity projections routinely overstate the realized benefit by 40 to 60 percent, because nobody accounts for the gap between theoretical savings and actual output gains.
Failure to make the case has a real cost. If the AI budget doesn't get approved, competitors move faster, your team stays on slower manual workflows, and you've spent credibility on a proposal that didn't land. Getting the ROI calculation right isn't just about this approval. It's about whether finance trusts your next one.
A disciplined AI ROI case is not hard to build. It's just not the case most people build, which leads with time savings and ignores everything that erodes them.
What an AI ROI model actually does
An AI ROI model is a structured estimate of net value, not a gross productivity number. The mistake most teams make is presenting the gross number: "AI saves our team 20 hours per week." Finance has heard that before, from ERP implementations, from new CRMs, from every productivity suite in the last decade. The gross number is not credible on its own.
A credible model does three things differently:
- Starts with a measured baseline, not an estimate of what the task should take.
- Applies honest discount factors before presenting any savings number.
- Defines what confirmation looks like before deployment, not after.
Think of it as the difference between telling your CFO "this investment will pay off" and showing them a model where the payoff survives their skepticism.
Before you start
Bring three things to this walkthrough:
- A specific recurring task your team does, something with clear boundaries (not "content creation" but "first draft of the weekly client status report"). The more specific the task, the more credible the baseline.
- Two weeks of time logs on that task, ideally real stopwatch numbers. If you don't have them yet, the right order is: read this guide first, then run the two-week measurement, then build the case.
- A fully-loaded hourly rate for the role doing the task. Your HR or finance team has this. It includes salary, benefits, and overhead. Don't use just base salary; that will understate the real cost and make your CFO trust the math less.
Before putting any of this into an AI tool for analysis, one thing to settle: what data you're feeding it. We have a dedicated section on the compliance non-negotiables below. It's short but important.
If you want to check your math with a calculator rather than build from scratch, the AI ROI Projection tool at /roi runs these same four discount factors automatically.
Task 1: The raw number nobody believes
The starting point of every AI ROI proposal is a raw time savings number. If AI cuts a 40-minute task to 8 minutes, that's 32 minutes saved. If the task happens 50 times per week across five employees, that's 26.7 hours per week of gross savings. At a fully-loaded rate of $70 per hour, that's $1,869 per week, or roughly $97,000 annualized.
That number is not what you present to finance. It's the starting point, before discounts. Presenting gross savings as realized savings is the most common way AI proposals die in CFO reviews.
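The gross arithmetic fits in a few lines. A minimal sketch (the function name and inputs are illustrative, not from any particular tool):

```python
def gross_annual_savings(before_min, after_min, runs_per_week, rate_per_hour, weeks=52):
    """Gross (undiscounted) annual savings from a per-task time reduction."""
    minutes_saved = before_min - after_min
    hours_per_week = minutes_saved * runs_per_week / 60
    return hours_per_week * rate_per_hour * weeks

# The example from the text: 40 min -> 8 min, 50 runs/week, $70/hour fully loaded
print(round(gross_annual_savings(40, 8, 50, 70)))  # 97067, roughly $97,000
```

Keeping the calculation in one place like this makes it easy to rerun when finance asks "what if the rate is $60, not $70?"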
What to ask AI to help you build:
I have a recurring task: [task name]. Today it takes [X minutes] and happens [Y times per week] across [Z employees]. The fully-loaded hourly rate for this role is [dollar amount]. After AI assistance, the task takes [X minus savings minutes]. Help me build a gross annual savings calculation and then flag every assumption in that calculation I should be prepared to defend in a CFO meeting.
The prompt forces you to list your assumptions explicitly. That list is what you'll take into the four discount steps that follow. The output isn't the ROI case. It's the first draft to be discounted.
Write the gross number down. Then plan to discount it before it becomes your actual claim. A $97,000 gross savings that survives four honest discount factors and lands at roughly $16,000 net is a stronger proposal than a $97,000 claim finance discounts to $20,000 at the table.
Task 2: The four discount factors
These are the adjustments your CFO will make to your gross savings number, either explicitly or implicitly. Apply them yourself first.
Factor 1: Utilization. AI tools get used at about 50 to 70 percent of theoretical capacity in real deployments. People revert to habits. The tool isn't available when someone needs it fast. The task gets done the old way because it's faster to skip the AI step than to craft the right prompt. A 60 percent utilization assumption is honest for most small business environments in the first year. If you claim 100 percent utilization, finance will discount to 50 percent. If you claim 60 percent, you've already shown you understand the real pattern.
Apply it: multiply gross savings by 0.60.
Factor 2: Redeployment. Saved time doesn't automatically become productive output. An employee who saves 30 minutes per day on a task will spend some of that time on the task more carefully, some on email, and some on nothing that generates revenue. The realistic redeployment rate for small businesses is 40 to 60 percent: roughly half of saved time converts to work that shows up in output metrics. Claiming 100 percent redeployment is the second most common way proposals fail in finance reviews.
Apply it: multiply the utilization-adjusted savings by 0.50.
Factor 3: Decay. Productivity gains from new tools decay over time. The first month is the fastest. By month four, the novelty has worn off, some workflows have reverted, and people are using the AI for a subset of what they tested in the pilot. An annual ROI calculation should weight the savings curve, not assume month-one pace for 12 months. A conservative approach: assume full savings in months 1 through 3, 80 percent in months 4 through 6, and 70 percent in months 7 through 12. Weighted by months, that blend works out to 80 percent of the original utilization-adjusted number.
Apply it: multiply the redeployment-adjusted savings by 0.80 as an annual blending factor.
Factor 4: Adoption. Not everyone on the team adopts at the same speed or the same depth. In a 10-person team, expect two or three early adopters who use the tool fully, four or five who use it selectively, and two or three who use it rarely or not at all until the workflow is mandated. An honest adoption factor for a team not yet trained is 65 to 75 percent of the team actually using the tool consistently enough to generate savings.
Apply it: multiply by 0.70.
Combined, the four factors (0.60 utilization, 0.50 redeployment, 0.80 decay, 0.70 adoption) produce a composite discount of approximately 0.17. A gross savings of $97,000 becomes a discounted projection of about $16,300 in year one.
That number is more defensible, not less impressive. When a CFO sees that you applied these factors, the conversation shifts from "I don't believe your numbers" to "which of these assumptions do we want to debate?"
What to ask AI to help you with:
Here are my four discount factors and their assumed rates: utilization [X%], redeployment [X%], decay [X%], adoption [X%]. Apply them to my gross savings estimate of [dollar amount]. Then show me what the outcome looks like under three scenarios: conservative (each factor at the pessimistic end), base (my current estimates), and optimistic (each factor at the high end). Format as a simple table.
Present all three scenarios to finance. A CFO who sees your base case alongside the conservative scenario knows you've thought through the downside. That's more convincing than a single number that looks cherry-picked.
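A quick way to generate the three-scenario table yourself. The utilization, redeployment, and adoption ranges below follow the ranges given earlier; the decay range is my own illustrative assumption:

```python
GROSS = 97_000  # gross annual savings from Task 1

# (utilization, redeployment, decay, adoption) per scenario
scenarios = {
    "conservative": (0.50, 0.40, 0.70, 0.65),
    "base":         (0.60, 0.50, 0.80, 0.70),
    "optimistic":   (0.70, 0.60, 0.90, 0.75),
}

for name, (util, redeploy, decay, adopt) in scenarios.items():
    net = GROSS * util * redeploy * decay * adopt
    print(f"{name:<12} ${net:,.0f}")
```

Even a rough script like this keeps the three rows internally consistent, which is exactly what a cherry-picked single number is not.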
Task 3: What to actually measure
Every ROI model needs a measurement plan, or finance has no way to verify the savings were real after deployment. Define this before you present, not after.
Three measurement categories worth tracking for AI productivity claims:
Task completion time. The most direct measure. For the specific task you identified, log the time from start to finish before and after AI use. Use a stopwatch, not an estimate. Forty observations before and forty after is enough for a defensible comparison. Log who did the task and when, so you can control for experience level and time-of-day variation.
Output volume. If the task produces a deliverable (reports, emails, documents, responses), count how many get produced per week or per month before and after AI use. Volume per unit time is a more business-meaningful number than time saved, and it survives the "but are they actually doing more?" question.
Error or revision rate. Track how often the output requires significant revision or rework before and after AI use. A task that goes faster but produces more rework is not a productivity gain. For most writing, communication, and reporting tasks, AI-assisted first drafts reduce revision cycles, but that's worth measuring rather than assuming.
What to ask AI to help you with:
I need a simple measurement tracking sheet for a 90-day AI productivity pilot. The task is [task name]. The metrics I want to track are: task completion time in minutes, output count per week, and major revision rate. Build me a Google Sheets template with columns for date, employee, task, time-to-complete, output count, revision needed (yes/no). Also suggest which week 4 and week 8 check-ins I should schedule to catch the adoption dip before it becomes a trend.
The week-four check-in matters. The adoption dip is real: by week four, the novelty has worn off, people who hit friction have often reverted to old habits, and the team hasn't yet built the prompt patterns that make AI fast. Catching this with a structured check-in lets you intervene before the dip becomes the story finance hears.
Task 4: The before-and-after baseline
The baseline is the most important part of the ROI case. Without it, every number is a projection. With it, you have evidence.
The standard failure pattern: teams present AI ROI cases built entirely on estimates because nobody thought to measure the baseline before the pilot started. By the time anyone asks "but how long did this take before?", the pre-AI data is gone and you're arguing from industry benchmarks that finance doesn't trust.
Building the baseline correctly takes two weeks and three rules:
Pick the right task. Specific, recurring, and measurable. "Customer email responses" is too broad. "First-response email to a new inbound inquiry" is specific enough to time and count.
Measure, don't estimate. Have the employees doing the task log the time from start to done. Collect 30 to 50 observations. Log who did it, when, and how long. Check for outliers before averaging.
Define "done" before you start. First draft written, or edited and sent? Pick one and stick with it for both the before and the after. Shifting the definition between baseline and post-deployment is the measurement error a skeptical CFO will find.
What to ask AI to help you structure:
I'm building a before-and-after baseline for a task-timing study. The task is [task name]. I need a one-page data collection protocol that tells my team: what to time, when to start the clock, when to stop it, what to log, and how often to submit their logs. Write it for a non-technical audience who has not done a time study before. Flag the three most common ways time studies go wrong.
Running that protocol for two weeks before you touch the ROI model means your baseline numbers are real, not estimated.
Task 5: Presenting it to a skeptic
The CFO conversation has a predictable shape. Four objections come up in almost every AI spend review. Preempt all four.
"These time savings never materialize." The response is the four discount factors you already applied. Your case is built on discounted numbers, not gross projections. Ask finance which factor they'd like to adjust and show them the scenario table. Move from "do we believe the number" to "which assumptions do we debate."
"How do we know people will actually use it?" Your two-week baseline required team participation to measure task times. The people who measured the baseline have already been introduced to the workflow. Adoption for the pilot participants starts ahead of the general rollout rate. Present the 70 percent adoption factor and explain the training plan that keeps adoption above it.
"What happens when the technology changes?" Your response: the ROI case is built on current pricing, and you'll revisit the model at the 90-day checkpoint. Committing to a report-back is not a weakness. It tells finance you're running this like a real investment.
"What's the downside if it doesn't work?" Name it directly. Subscription cost plus 15 to 20 hours of setup and training time. For most small business deployments, $200 to $2,000 total exposure depending on team size and tier. The downside is bounded. Frame the exit condition: if week-eight data shows adoption below 40 percent or savings below 25 percent of the projection, the pilot stops.
What to ask AI to help you prepare:
I'm presenting an AI ROI proposal to a skeptical CFO next week. The proposal covers [task name], [dollar amount] projected annual savings after discounting, and a 90-day measurement plan. Help me draft the two-page executive summary and anticipate the five hardest questions finance is likely to ask. For each, write the one-paragraph answer using the data I've described. Don't soften the objections. Write them as a skeptic would actually ask them.
Asking AI not to soften the objections is the key move. If you practice the harder version of the questions, the actual CFO conversation is easier.
The small-business prompts that actually work
Four prompt habits separate the ROI cases that get approved from the ones that come back for more work.
Specify the exact task, not the category. "Help me calculate ROI for our AI content tools" produces output too generic to present to finance. "Help me calculate ROI for AI-assisted first drafts of client status reports, currently 35 minutes, 18 times per week across three staff" produces output specific enough to defend.
Specify the audience's skepticism level. Tell AI the output will be reviewed by a CFO who has seen productivity claims not materialize. The model adjusts: it flags weak assumptions and structures output to anticipate objections rather than ignore them.
Specify what "conservative" means. If you give AI your four discount factors and ask for a conservative case, it needs to know which end of each range is conservative. Write it out: "For utilization, conservative is 50 percent. For redeployment, conservative is 40 percent." Otherwise you get a lower version of optimistic, not actually conservative.
Specify the format finance expects. Small business CFOs are not reading 12-page analyses. They want a one-page summary: annual subscription cost, year-one discounted savings, payback period in months, three-year projection. Ask for exactly that. The output is more useful than a detailed model nobody requested.
The compliance non-negotiables
This section is short because the rule is simple, but it is the most important section in this guide.
Do not put any of the following into the consumer tier of any AI tool when building your ROI case or running the underlying pilot:
- Employee names, titles, or salary data tied to specific individuals
- Customer or client records, communications, or personally identifiable information
- Financial projections or budget details marked confidential by your organization
- Proprietary process documentation that contains trade secrets or competitive information
- HR data, performance reviews, or personnel files
- Any data governed by a confidentiality agreement the organization has signed
The practical workflow that respects these rules: build your ROI model using anonymized inputs. "Employee A, Role: Marketing Coordinator, Blended Rate: $58/hour" is fine. A named employee's actual salary data pasted from your HRIS is not. The model doesn't need the real data to produce useful output. It needs the structure and the numbers.
For the task-timing pilot, log task times and output counts without logging customer names, order details, or case identifiers alongside. Build the measurement protocol to capture what AI needs for the model (time, count, rate) without capturing what it doesn't need (the underlying data).
If your organization has signed a Business or Enterprise AI agreement with a Data Processing Addendum, the rules on what data can flow into the AI tool are different. Ask your IT director or general counsel what is covered. Do not assume the Business tier means everything is permitted.
When NOT to use AI for the ROI calculation
AI is a useful calculation partner, but it's not the right tool for every part of this process.
- Anything requiring audited financial data. If your ROI case will be reviewed by auditors, presented in a formal board meeting, or used to support a capital expenditure decision requiring regulatory sign-off, the underlying financial inputs need to come from your finance system and be validated by a qualified person, not generated by AI.
- Situations where the ROI case is being used to justify a decision already made. If the decision is made and you're building the case after the fact to justify it, the numbers will be optimistic and finance will know it. The four discount factors only protect an honest estimate, not a backwards-engineered one.
- High-stakes vendor selection decisions. AI can help you structure a vendor comparison, but the final selection decision for AI tools that will touch regulated data, customer data, or core business processes should involve IT, legal, and finance directly. The ROI model is an input to that decision, not a replacement for the full evaluation.
- Any claim you can't defend with your own data. If you can't show the before-and-after baseline for a specific task, don't claim savings from that task. Industry benchmarks for "AI productivity gains" are not defensible in a CFO meeting. Your own stopwatch data is.
A simple rule: AI is an honest partner for building the structure and the scenario analysis of an ROI case. The data that goes into it needs to come from your own measurement, not the model's defaults.
The quick-start template
Here is the prompt scaffold that covers most AI ROI case-building situations for small business ops and marketing directors. Copy it, fill in the brackets, paste into whichever AI tool you use at the appropriate tier.
I need a one-page AI ROI case for a CFO review. Here is the situation:
Task: [specific task name, e.g., first draft of weekly client status report]
Baseline time per task: [X minutes, measured over Y observations]
Task frequency: [Z times per week, across N employees]
Fully-loaded hourly rate: [$X per hour, confirmed with HR]
AI-assisted time per task: [X minutes, measured in pilot]
Subscription cost: [$X per month, X seats]
Discount factors I'm applying: utilization [X%], redeployment [X%], decay [X%], adoption [X%]
Build me: (1) the gross savings calculation, (2) the discounted savings calculation applying my four factors, (3) a three-scenario table (conservative, base, optimistic), (4) the payback period in months, (5) a 90-day measurement plan with three check-in milestones. Format for a one-page CFO summary. Flag any assumption that a skeptical finance leader is likely to challenge.
For recurring use, store this scaffold in your team's shared drive and update the inputs as you run new pilots. The structure stays the same. Only the task-specific inputs change.
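Item (4), the payback period, is worth being able to compute yourself before asking AI to. A minimal sketch with illustrative inputs; the $800 setup figure and $150/month subscription are assumptions for the example, not numbers from this guide:

```python
def payback_months(setup_cost, monthly_subscription, monthly_net_savings):
    """Months until cumulative net savings cover the one-time setup cost."""
    net_per_month = monthly_net_savings - monthly_subscription
    if net_per_month <= 0:
        return None  # the tool never pays back at these rates
    return setup_cost / net_per_month

# Illustrative: $800 of setup/training time, $150/month subscription,
# discounted annual savings spread evenly across the year
months = payback_months(800, 150, 16_300 / 12)
print(f"payback in {months:.1f} months")
```

If the result comes out implausibly fast, treat that as a prompt to recheck the inputs before the CFO does.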
Bigger wins beyond the first approval
Once the first AI ROI proposal gets approved and the 90-day measurement discipline is in place, the work compounds.
A repeatable framework for every future AI proposal. The four discount factors, the baseline protocol, the three-scenario table, and the 90-day confirmation plan are not single-use. Every AI spend proposal from this point forward uses the same structure. Finance will recognize it. Trust builds faster because the format is consistent.
A measurement culture that catches the adoption dip. Organizations that build structured check-ins into AI pilots catch the week-four reversion patterns early, intervene with training adjustments, and sustain savings that other organizations lose by week six. The measurement habit is worth more than any single proposal.
A confirmed-results library that compresses future approval timelines. After two or three successful pilots with confirmed 90-day savings, those results become the evidence base for the next proposal. Finance approved the first on a projection. The second on a confirmed result. The third often gets fast-tracked.
A shift from approval-seeking to co-planning. The ops or marketing director who has delivered confirmed results three times is no longer presenting to a skeptical CFO. They're planning together. That relationship change is the long-term ROI of doing the measurement honestly once.
The small-business AI consulting connection
Building a credible AI ROI case is one skill in one category. The bigger question is structural: which AI investments are worth making, in which order, and with what governance so the savings are real and sustained rather than theoretical.
Most small business AI adoption fails not because the tools are bad but because the rollout has no measurement plan, no governance, and no one accountable for whether the savings appear. Finance rejects the second proposal because the first didn't deliver.
The AI Consulting for Small Business page covers what an AI advisory engagement looks like at the small business level: how to sequence investments, what governance prevents the adoption dip from becoming a culture problem, and how the financial modeling works when it's built on real measurement rather than vendor benchmarks.
The companion white paper AI ROI Defense: 6 Numbers Your Board Wants to See covers the six metrics that distinguish a defensible AI investment from a speculative one. Read it before any board-level presentation.
Closing
The CFO approval is not the win. The win is 90 days later, when you walk back into finance with actual data showing the savings were real, and use that result to get three more initiatives approved in the same meeting.
Build one ROI case this week. Time one real task before and after. Apply the four discount factors honestly. Present the conservative scenario alongside the base case. Set the 90-day checkpoint. That sequence is the difference between a proposal that lands and one that gets sent back.
If you want to talk about how AI fits into your organization at the program level, the AI Consulting for Small Business page lays out the full picture and how an engagement works.
Let's talk about your AI + SEO stack
If you'd rather skip the how-to and have it shipped for you, that's what I do. Start a conversation and we'll figure out the fastest path to results.
Let's Talk