How to build an AI business case: a template for UK SMEs

What is an AI business case?
An AI business case is a structured document that sets out the expected costs, benefits, risks, and assumptions of a proposed AI investment, in a form that leadership can approve or reject. The reason structured business cases matter in 2026 is straightforward: industry research from OneAdvanced reports that 56% of UK CEOs see zero measurable return on their AI investments, and reporting cited in Forbes in February 2026 puts the proportion of enterprise AI pilots that fail to deliver measurable business returns at around 95%. The most common cause is not that AI does not work. It is that the business case did not define the metric before the deployment, so there was nothing to measure against afterwards. A good AI business case fixes that by specifying success in writing before the first line of code or first SaaS contract.
This guide gives a structured template, a defined set of benefit categories, a worked example, and a one-page format suitable for board approval. It is designed for UK SMEs and mid-market firms making investments in the £10,000 to £250,000 range, where formal business-case discipline pays for itself but enterprise-grade governance is overkill.
What belongs in an AI business case
An AI business case has five sections, each with a clear purpose. Skipping any section reliably produces a case that does not survive scrutiny.
- Problem statement. What are we trying to solve, and what is the current cost of not solving it? Quantify the cost in pounds, hours, or error counts where possible. "We need AI" is not a problem statement; "we spend 28 hours a week reconciling supplier invoices and the error rate is 4%" is.
- Proposed solution. Which AI tool or approach, and why this one over the alternatives? Reference the build vs buy assessment that led to this option being recommended over off-the-shelf, configure, or do-nothing.
- Costs. One-off plus ongoing, over a defined period (three years is the standard horizon for SME AI investments). Include implementation, licensing, hosting, training, change management, and an explicit contingency.
- Benefits. Quantified in at least two of the four approved categories below. Pick the categories that fit the use case; do not inflate by claiming all four without evidence.
- Risks and sensitivity. What has to be true for the benefits to land? What happens to the case if a key assumption is wrong by 25%, 50%, or 100%? Naming the sensitivities up front is what separates a credible case from a marketing brief.
Industry research suggests 74% of UK organisations struggle to scale AI value beyond initial pilots, and the binding constraint is usually structural: no business case discipline, no measurement plan, no defined adoption target. The five sections above address each of those directly.
The four approved benefit categories
Use the four categories below as a closed set. They cover the genuine sources of AI business value and exclude the vague claims that make business cases unconvincing.
- Time saved. Hours per week freed up, costed at the average fully-loaded hourly rate of the affected role. Time saved counts only when it can be redeployed to higher-value work or when it is a measurable headcount avoidance, not as a vague productivity claim.
- Cost avoided. Spend that is eliminated or reduced. Examples: third-party services no longer needed, error-correction overhead reduced, customer support costs reduced, recruitment costs avoided through internal capacity gain. Each line item should reference a real current cost.
- Revenue unlocked. New revenue enabled by the AI deployment. Examples: faster sales cycle, higher conversion rate, new product capability that justifies a price increase, grant or contract funding secured because of the AI capability. Revenue cases are stronger when there is a comparable baseline.
- Payback period. Months or years until cumulative benefit exceeds cumulative cost. Not a benefit in its own right, but the summary metric that makes the other categories comparable across competing investments.
Pick at least two categories. Three is typical for a strong case. Claiming all four without evidence is a credibility liability, not an asset.
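Once the other categories are quantified, payback is mechanical. A minimal sketch of the calculation, using illustrative figures of the kind that appear in the worked example later in this guide:

```python
def payback_months(upfront_cost: float, monthly_benefit: float) -> float:
    """Months until cumulative benefit covers the upfront cost."""
    if monthly_benefit <= 0:
        raise ValueError("monthly benefit must be positive")
    return upfront_cost / monthly_benefit

# Illustrative: £25,000 of Year 1 cost recovered by £38,000/year of cost avoidance
print(round(payback_months(25_000, 38_000 / 12), 1))  # -> 7.9
```

The same function lets you compare competing investments on a like-for-like basis: whichever combination of time saved, cost avoided, and revenue unlocked you claim, it all collapses into one monthly benefit figure.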
Hard vs soft metrics (and why hard metrics should lead)
Hard metrics are quantified in pounds, hours, percentage points, or defect counts. Soft metrics are qualitative: employee satisfaction, decision speed, brand perception, employee retention. Both are real, and both have a place in an AI business case.
The reason hard metrics should lead is that business cases led by soft metrics do not get approved by finance functions and do not survive scrutiny after deployment. Six months in, when the CFO asks "did the £40,000 investment pay back?", a case built on "improved decision speed" cannot answer the question. A case built on "average ticket handle time reduced from 12 to 8 minutes, equivalent to one agent's annual cost avoided" can.
Use soft metrics as supporting evidence, not as primary justification. A typical structure: lead with the hard metric in the headline benefit; reference the soft metric in the narrative; capture both in the post-deployment measurement plan. Where the hard metric proves out, the soft metric usually does too. Where the hard metric does not, the soft metric on its own will not save the investment from being labelled a failure.
Worked example: customer service AI for a 40-person UK SME
The example below is illustrative. Every figure is indicative for a UK SME of the stated size and is not a quoted fee from The AI Consultancy.
Problem. A 40-person UK professional services firm has 4 customer service agents handling approximately 80 tickets per day. Average handle time is 12 minutes. First-time resolution rate is 68%. Ticket volume is growing 15% year on year, and the operations director has been asked to add a fifth agent in Q3 at a fully-loaded annual cost of £38,000.
Solution. Deploy an AI triage and drafting tool integrated with the existing helpdesk. The tool drafts initial responses, suggests knowledge-base articles, and routes complex tickets directly to the senior agent. Build vs buy assessment recommended an off-the-shelf enterprise SaaS deployment over a custom build because the workflow is standard and time to value matters more than ownership.
Costs (Year 1). Tool licence £15,000; implementation and integration £8,000 one-off; staff training and change management £2,000 one-off. Total Year 1 cost £25,000. Years 2 and 3 ongoing licence £15,000 per year.
Benefits (conservative estimates, flagged as such). Average handle time reduced from 12 to 8 minutes (a 33% improvement), raising effective ticket-handling capacity by 50% without adding headcount. The fifth-agent hire planned for Q3 is avoided, equivalent to £38,000 in fully-loaded cost avoidance for Year 1, recurring in Years 2 and 3 as the team continues to absorb growing volume.
Payback. Approximately 8 months on Year 1 cost avoidance alone (£25,000 cost against roughly £3,200 of monthly benefit). Three-year cumulative benefit £114,000 against £55,000 cumulative cost (Year 1 £25,000 plus Years 2 and 3 at £15,000), net £59,000.
Sensitivity. The case depends on agent adoption reaching 60% or higher by Week 8. If adoption is below 40%, benefits drop by approximately half and the fifth-agent hire becomes unavoidable in Q4. If first-time resolution rate falls (for example, because AI drafts are not edited carefully), customer satisfaction risk rises and the soft metric impact would re-open the case.
Run your own version of this calculation using the AI ROI calculator, and substitute your own role costs, ticket volumes, and adoption assumptions.
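The worked example's arithmetic fits in a few lines, which makes re-running it under different assumptions cheap. A sketch using the illustrative figures above (none of which are quoted fees or benchmarks):

```python
# Three-year view of the worked example (all figures illustrative)
costs = [25_000, 15_000, 15_000]     # Year 1 incl. one-off items, then licence only
benefits = [38_000, 38_000, 38_000]  # fifth-agent hire avoided each year

net = sum(benefits) - sum(costs)
print(f"3-year benefit £{sum(benefits):,}, cost £{sum(costs):,}, net £{net:,}")
# -> 3-year benefit £114,000, cost £55,000, net £59,000

# Sensitivity: benefits roughly halve if adoption stalls below 40%
low_adoption_net = sum(b * 0.5 for b in benefits) - sum(costs)
print(f"Low-adoption net: £{low_adoption_net:,.0f}")  # -> Low-adoption net: £2,000
```

The sensitivity line is the point of the exercise: at half the claimed benefit, the three-year net collapses from £59,000 to roughly break-even, which is exactly the kind of threshold the board should see before approving.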
Sensitivity analysis: what you must flag
Four sensitivities should be called out explicitly in every AI business case. A case that ignores any of these is incomplete, regardless of how strong the headline numbers look.
- Adoption rate. If staff do not use the tool, the benefits do not land. Define a minimum adoption threshold (typically 60% of the affected staff using the tool weekly by Week 8) and note what happens to the case if adoption is below that. Adoption is the single most common reason AI investments under-deliver. See our AI change management and employee adoption guide for the structural patterns that drive adoption rates.
- Data quality. If the AI is pointed at unstructured, outdated, or inconsistent data, outputs degrade and benefits shrink. Quantify the data preparation work in the cost section, and flag the case as dependent on the data work being completed before the AI goes live.
- Process change. If workflows are not redesigned to take advantage of the tool, the time saved is not real time. A 4-minute reduction in ticket handle time is only useful if those minutes are redeployed; left unredeployed, they vanish into longer breaks and slower starts. Flag what process redesign is in scope and who owns it.
- Vendor risk. If the vendor changes pricing, terms, or deprecates the product, the business case re-opens. Note the vendor's last pricing change, the contract length, and the deprecation notice period. For the diligence questions that should be answered before signing, see our AI vendor selection and due diligence checklist.
The one-page board format
Senior approval depends on the case fitting on one page. Anything longer is not read in the meeting. Use the structure below; anything that does not fit goes in an appendix.
- Problem (3 lines). What we are solving and the current cost of not solving it.
- Solution (3 lines). Which approach, why this one over alternatives, and the maturity classification (pilot, production).
- Costs (table). Year 1, Year 2, Year 3, including implementation, licensing, training, contingency.
- Benefits (table). Hard metrics only on the front page. Soft metrics in the appendix. Stated in pounds, hours, or percentage points per year over three years.
- Payback (1 line). Months or years to break-even, plus three-year net.
- Top 3 risks (3 bullets). The three sensitivities most likely to derail the case, with the threshold and the mitigation for each.
- Recommendation (1 line). "Approve a £X investment to achieve Y over Z, with the first checkpoint at month N."
If the case needs more than one page, it is not yet ready for the board. Decide what is essential, compress it, and move the rest to the appendix.
Common mistakes that sink AI business cases
Three patterns reliably produce business cases that get rejected at the board or fail post-deployment. The first is a benefits column built on category-fit ("we need AI to keep up") rather than on quantified outcomes. The second is a costs column that omits internal staff time, change management, and the data preparation work needed before the AI can produce useful outputs. The third is a missing or token sensitivity section, which signals to the board that the proposer has not stress-tested their own case. None of these are AI-specific failures; they are business-case failures applied to an AI investment.
Where to start
If you are about to make the case for an AI investment, start with the problem statement and quantify the current cost of inaction in pounds and hours. From there, the solution and costs follow naturally. Use the four approved benefit categories to structure the upside, pick at least two, and run sensitivities on adoption, data quality, process change, and vendor risk. Compress the result onto one page and walk it through with finance before you take it to the board. For broader strategy resources, see the AI strategy section of the Knowledge Hub. To run a structured business-case assessment for a specific use case, see our AI strategy service or run an internal first pass using the AI ROI calculator.
Frequently asked questions
- What is a realistic payback period for an AI deployment in a UK SME?
- For a well-scoped AI deployment with a clear hard-metric benefit (cost avoidance, time saved against a costed role, revenue unlocked against a baseline), a realistic payback period is 6 to 12 months in Year 1. Anything claiming sub-3-month payback should be treated as marketing copy rather than a credible business case. Anything with a payback longer than 24 months is unlikely to survive scrutiny in an SME context, where leadership horizons are typically shorter than in enterprise.
- How do I quantify time saved in an AI business case?
- Multiply the hours saved per week per affected staff member by the average fully-loaded hourly rate of those staff (salary plus employer NI plus pension plus overhead, divided by working hours per year). Time saved only counts as a benefit if it can be redeployed to higher-value work or if it is a measurable headcount avoidance. Vague productivity gains that do not show up in either redeployment or headcount avoidance should be treated as soft metrics, not hard benefits.
- Should I include soft metrics like employee satisfaction?
- Yes, but as supporting evidence rather than primary justification. Lead the business case with hard metrics (pounds, hours, percentage points), reference soft metrics in the narrative, and capture both in the post-deployment measurement plan. Cases built primarily on soft metrics rarely get approved and rarely survive a six-month review. Cases built primarily on hard metrics with soft metrics as backup are far more durable.
- Who should own the AI business case (IT, finance, or the department sponsoring the change)?
- The department whose workflow is changing should own the case, with finance providing review on the costs and benefits, and IT providing review on the technical feasibility and security posture. IT-owned business cases tend to over-index on technology choice and under-index on adoption and process change. Finance-owned cases tend to focus on cost discipline at the expense of upside framing. Department-owned cases force the right conversation about who actually has to use the tool.
- What is the minimum size of AI investment that needs a formal business case?
- Any AI investment above approximately £10,000 in total first-year cost (including implementation and internal time) is worth a formal one-page business case. Below that threshold, a paragraph describing the use case, the expected benefit, and the success measure is usually sufficient. The discipline matters more than the document length: even a £5,000 investment without a defined success measure is hard to evaluate at the six-month review.
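The time-saved arithmetic described in the FAQ above can be sketched directly. The salary, overhead, hours, and working-weeks figures below are placeholder assumptions for illustration, not benchmarks:

```python
def fully_loaded_hourly_rate(salary: float, employer_ni: float, pension: float,
                             overhead: float, annual_hours: float = 1_650) -> float:
    """Fully-loaded hourly cost of a role; 1,650 working hours/year is an assumption."""
    return (salary + employer_ni + pension + overhead) / annual_hours

def annual_time_saved_value(hours_per_week: float, staff: int,
                            hourly_rate: float, weeks_per_year: int = 46) -> float:
    """Hard-benefit value of redeployable time saved across the affected staff."""
    return hours_per_week * staff * hourly_rate * weeks_per_year

# Placeholder figures: £32k salary, £3.5k employer NI, £1.5k pension, £6k overhead
rate = fully_loaded_hourly_rate(32_000, 3_500, 1_500, 6_000)
print(f"£{rate:.2f}/hour")                                   # -> £26.06/hour
print(f"£{annual_time_saved_value(5, 4, rate):,.0f}/year")   # -> £23,976/year
```

Remember the rule from the time-saved category: this figure only counts as a hard benefit if the hours are redeployed or show up as headcount avoidance; otherwise treat it as a soft metric.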