Rolling out ChatGPT or Claude in a UK SME: the first decisions

Most UK SMEs roll out ChatGPT or Claude badly the first time. They buy the cheapest tier, hand it to staff, write a one-paragraph "use it responsibly" email, and find six months later that personal data has been pasted into consumer accounts, no Data Processing Addendum exists, leadership has no usage view, and the bill is rising without a measurement framework attached. The fix is not more sophisticated AI; it is four decisions made up-front. This guide sets out the tier choice, the data residency choice, the governance choice, and the exit plan, with a UK GDPR baseline, a worked cost example for a 25-person London professional services firm, the rollout failure modes that recur in our advisory work, and a 30/60/90-day plan to deploy AI properly the first time.
The four decisions to make before buying any seats
Four decisions sit upstream of every other ChatGPT or Claude rollout step in a UK SME. Most failed deployments trace back to one of them being skipped or made implicitly.
The tier decision. Claude Pro and ChatGPT Plus are individual consumer tiers with no Data Processing Addendum and no admin controls; they are categorically wrong for any business workflow involving personal data, regardless of how convenient they are. Team and Business tiers add a DPA, basic admin controls, and explicit non-training defaults on business inputs. Enterprise tiers add SSO, SCIM, finer admin controls, audit logs, and a higher support ceiling. The right tier is decided by whether the firm processes personal data, whether SSO is mandatory, and whether audit logs are required by sector regulation, not by the seat price.
The data residency decision. ChatGPT Enterprise added native UK data residency at rest on 24 October 2025, which is the cleanest UK residency story on the OpenAI side. Claude Opus 4.7 on AWS Bedrock is available at launch in the Europe (Ireland) and Europe (Stockholm) regions, and on Google Cloud Vertex AI through an EU multi-region endpoint; both are EU residency rather than strict UK residency at the time of writing. The direct claude.ai surface still processes primarily in the United States. The right residency answer depends on the sector and the data sensitivity, not on the platform branding.
The governance decision. Every UK SME deploying ChatGPT or Claude needs a written AI acceptable use policy, a defined data classification scheme, a vendor approval list, and a measurement framework before the first seat is provisioned. Without these, the rollout becomes a shadow IT problem within months. With them, the rollout becomes a managed change programme.
The exit decision. Vendor lock-in is real; both providers create switching costs through Custom GPTs, Claude Projects, and bespoke connector configurations. The exit plan is not a contract appendix to be written at end of life. It is the documented policy the firm follows during the rollout: keep prompts in source control, keep retrieval layers on owned infrastructure, document the model behind every workflow, and review the platform annually rather than at end of contract.
Tier choice grid
The grid below maps tier to typical fit and UK GDPR posture across both providers. The decision rule that follows falls straight out of the rows.
| Tier | Best fit | UK GDPR posture | Notes |
|---|---|---|---|
| Claude Pro | Individual researcher, evaluation only | No DPA on consumer tier | Wrong tier for any business workflow involving personal data |
| Claude Team | 5 to 50 person teams, knowledge work, light agents | DPA, non-training default | 20 US dollars per seat per month annual; minimum 5 users |
| Claude Enterprise | 50+, regulated sectors, agentic workloads | DPA, SSO, audit logs | 20 US dollars per seat plus usage at API rates; usage projection required |
| ChatGPT Plus | Individual researcher, evaluation only | No DPA on consumer tier | Wrong tier for any business workflow involving personal data |
| ChatGPT Business | 2 to 150 person teams, broad knowledge work | DPA, non-training default | 20 US dollars per seat per month annual; new Codex seat available alongside |
| ChatGPT Enterprise | 150+, regulated sectors, omnimodal workloads | DPA, SSO, SCIM, UK residency at rest, audit logs | Custom-priced, typically 45 to 75 US dollars per seat; 150-seat minimum |
| API via AWS Bedrock (EU regions) | Automation-heavy workloads, residency-strict | DPA via AWS; Opus 4.7 in Ireland and Stockholm at launch | Different unit economics; engineering ownership required |
| API via Google Vertex AI EU | Automation-heavy workloads, residency-strict | DPA via Google; EU multi-region endpoint | Different unit economics; engineering ownership required |
The decision rule that surfaces from the grid is straightforward. If the team is under 5 people and processing only synthetic or non-personal data, a paid individual tier may be defensible. If the team is over 5 people or processing any personal data, Team or Business is the floor. If the team is over 150 people, in a regulated sector, or under contractual obligations to maintain SSO, Enterprise is usually the right tier on whichever platform is preferred. If the workload is automation-heavy and seats would be a wasteful price model, the API via Bedrock or Vertex AI is the correct architecture. For a model-level deployment guide that pairs with this tier framework, see our Claude Opus 4.7 and GPT-5.5 deployment guide.
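The decision rule above can be condensed into a short function. This is an illustrative sketch, not a procurement tool: the thresholds and labels are lifted straight from the grid, and the function and parameter names are our own.

```python
def recommend_tier(headcount: int, personal_data: bool, sso_required: bool,
                   regulated_sector: bool, automation_heavy: bool) -> str:
    """Sketch of the tier decision rule described above (illustrative only)."""
    if automation_heavy:
        # Per-seat pricing is wasteful for automation-heavy workloads.
        return "API (Bedrock or Vertex AI, EU regions)"
    if headcount > 150 or regulated_sector or sso_required:
        return "Enterprise"
    if headcount >= 5 or personal_data:
        # Team/Business is the floor once personal data or 5+ staff are involved.
        return "Team/Business"
    # Under 5 people, non-personal data only: a paid individual tier may be defensible.
    return "Individual paid tier (evaluation only)"
```

A 25-person firm processing client data lands on Team/Business; a regulated 200-person firm lands on Enterprise, as the grid indicates.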
UK GDPR essentials
UK GDPR applies in full to any AI deployment that touches personal data, with no exception for "AI is just a tool". Five rules carry most of the weight.
Article 28 DPA. A signed Data Processing Addendum compliant with Article 28 must be in place before the firm sends personal data to the AI vendor. Both Anthropic and OpenAI publish standard DPAs on business and enterprise tiers; consumer tiers do not have a DPA, which is why consumer-tier use for any business workflow involving personal data is an Article 28 violation in itself, regardless of any other controls.
Lawful basis. A lawful basis under Article 6 (and where applicable Article 9 for special category data) must be established for the processing. For internal employee use cases, this is usually legitimate interest with a documented Legitimate Interests Assessment. For client-facing workflows, the lawful basis is normally either contract necessity or legitimate interest, with explicit privacy notice updates.
The consumer-tier prohibition. Even one instance of an employee pasting client data into Claude Pro or ChatGPT Plus can be a notifiable breach in the wrong circumstances. Most UK SMEs choose to ban consumer-tier accounts on the corporate device estate rather than try to police use case by use case.
Residency and transfers. ChatGPT Enterprise added native UK data residency at rest from 24 October 2025. Claude Opus 4.7 is available via AWS Bedrock in the Europe (Ireland) and Europe (Stockholm) regions or via Google Cloud Vertex AI's EU multi-region endpoint; the direct claude.ai surface still processes primarily in the United States. Transfers to the United States are permitted under the UK-US Data Bridge, in force since 12 October 2023, but the firm's risk appetite still drives the residency choice. Zero data retention (ZDR) clauses are available on enterprise contracts and are the cleanest backstop for sensitive workflows.
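For firms taking the Bedrock route, the residency choice reduces to one line at client construction time. The sketch below builds an Anthropic Messages request body and shows the client call; the model identifier is a placeholder, not a confirmed ID, so check the Bedrock console for the real string before use.

```python
import json

# Placeholder model ID: look up the real identifier in the Bedrock console.
MODEL_ID = "anthropic.claude-opus-4-7"  # assumption, not a confirmed ID

def build_request_body(prompt: str, max_tokens: int = 1024) -> str:
    """Build an Anthropic Messages API body for Bedrock's InvokeModel."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# Pinning region_name to an EU region keeps inference in EU infrastructure:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="eu-west-1")  # Ireland
# resp = client.invoke_model(modelId=MODEL_ID, body=build_request_body("..."))
```

The same body works against the Stockholm region (`eu-north-1`) by changing only `region_name`.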
ICO guidance and DPIA. ICO guidance on AI and data protection continues to set the regulatory bar, including the requirement to complete a Data Protection Impact Assessment for higher-risk processing. The DPIA threshold is reached for most customer-facing AI use cases and many HR ones, regardless of platform. For a wider 2026 compliance picture, see our 2026 UK AI compliance checklist.
Internal AI policy: a one-page checklist
A workable AI acceptable use policy fits on a single page and answers six questions. UK SMEs routinely fail at AI rollout because the policy either does not exist or runs to twenty pages and is therefore unread.
Data classification. Data must be classified into at least three tiers before AI is used on it: green (already public), amber (internal but not personal), and red (personal data, client data, regulated data). Each tier has a defined permitted-use scope. Red data may go only to the approved enterprise platform with a DPA in place; amber to the same plus approved internal tools; green anywhere.
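The three-tier scheme is simple enough to enforce in any internal tooling with a few lines. A minimal sketch, assuming the firm labels destinations as "enterprise", "internal", or "consumer" (the labels are our assumption, not a standard):

```python
APPROVED = {
    "green": {"any"},                      # public data: any tool
    "amber": {"enterprise", "internal"},   # internal but not personal
    "red":   {"enterprise"},               # personal/client/regulated: DPA'd platform only
}

def may_send(classification: str, destination: str) -> bool:
    """Gate a prompt against the three-tier scheme described above."""
    allowed = APPROVED.get(classification, set())
    return "any" in allowed or destination in allowed
```

Unknown classifications default to deny, which is the safe failure mode for a governance gate.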
Approved tool list. The policy names the specific platforms approved for business use, the tier on each, and the SSO and admin controls in force. New tools require security review and approval before staff can sign up. The list is reviewed quarterly.
Prompt logging. Whether prompts are logged, where, for how long, and who can access them. Most UK SMEs choose to log prompts and outputs for at least 90 days for audit, governance, and quality purposes, with explicit notice to staff.
Prohibited use cases. The policy lists what AI must not be used for: making hiring or termination decisions without human review, generating final legal advice or medical advice without practitioner review, processing health data outside the approved tier, and any output that is shared externally without verification. Prohibitions are specific, not aspirational.
Vendor approval list. The list of approved vendors and the renewal cadence. Out-of-list tools require security review before use.
Escalation route. The named person to contact for AI-related security concerns, suspected breach, or a use case the policy does not cover. The escalation route is mandatory; an AI policy without a named owner is a document, not a control.
Cost worked example: a 25-person London professional services firm
A worked example helps anchor the cost framing. Consider a 25-person professional services firm in London (a small accountancy practice, mid-sized law firm, or boutique consultancy) buying ChatGPT Business or Claude Team for the first time.
Year-one all-in cost has five components: seats, training, admin overhead, integrations, and a contingency. Seat cost at 25 users on either ChatGPT Business or Claude Team at 20 US dollars per seat per month annual is roughly 6,000 US dollars per year, or about 4,800 GBP at indicative exchange rates. Training cost for an SME first-time rollout is typically 2,000 to 4,000 GBP for a one-day all-staff session plus follow-up clinics, depending on whether the work is internal or commissioned. Admin overhead, including the time of the IT lead or operations manager owning the rollout, is typically 1,500 to 3,000 GBP at small-firm internal cost, recognising that someone has to write the policy, run the security checks, and stand up SSO. Integrations, including SSO, the connector layer to the firm's document management system, and any RAG retrieval setup, can run anywhere from 1,000 GBP for a configuration-only deployment to 10,000+ GBP for a custom retrieval layer; for a 25-person firm, 3,000 to 6,000 GBP is typical. A 10 to 15 per cent contingency on the year-one total covers measurement gaps and unforeseen platform changes.
The honest year-one total for a 25-person London professional services firm is therefore roughly 12,000 to 20,000 GBP, depending on the depth of the integration work and the appetite for training. This is several times the seat cost alone, which is the figure most CFOs initially anchor on. Year-two cost falls materially, because the rollout overhead is one-off; ongoing cost is dominated by seats and a smaller training and governance budget. The cost discipline is to model the total, not the seat price. For sectoral context, see our professional services industry page.
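The arithmetic above is easy to hold in a spreadsheet or a few lines of code. A sketch using the midpoints of the ranges quoted; the default values are our assumptions drawn from those ranges, not fixed prices:

```python
def year_one_cost_gbp(seats: int, seat_usd_per_month: float = 20.0,
                      usd_to_gbp: float = 0.80,
                      training: float = 3000, admin: float = 2250,
                      integrations: float = 4500,
                      contingency_rate: float = 0.125) -> dict:
    """Illustrative year-one model using midpoints of the ranges quoted above."""
    seats_gbp = seats * seat_usd_per_month * 12 * usd_to_gbp
    subtotal = seats_gbp + training + admin + integrations
    return {
        "seats": round(seats_gbp),
        "one_off": training + admin + integrations,
        "contingency": round(subtotal * contingency_rate),
        "total": round(subtotal * (1 + contingency_rate)),
    }
```

At the defaults a 25-seat firm lands at roughly 16,000 GBP, inside the 12,000 to 20,000 GBP range above, with seats under a third of the total.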
Common rollout failure modes
Five failure modes recur across UK SME AI rollouts in our advisory work. Each is preventable with up-front decisions.
Shadow IT. Staff sign up for personal Claude Pro or ChatGPT Plus accounts because the corporate rollout is too slow, too restrictive, or unclear. The firm ends up paying twice (corporate seats plus reimbursed personal subscriptions) while losing visibility into which client data is being pasted into which consumer surface. Mitigation: a clear approved tool list, an active block on consumer-tier sign-up from corporate email domains, and a fast track to provision new seats.
Prompt logging gaps. The firm enables AI but does not log prompts or outputs, so when a complaint or suspected breach arises there is no evidence trail. Mitigation: enable prompt logging at the platform level, define retention, and document the staff notice.
Lack of data minimisation. Staff paste entire documents (engagement letters, client files, payroll exports) into AI tools when they need the answer to a single question. The data exposure is dramatically larger than the question requires. Mitigation: train staff on the principle of minimum necessary input, and where possible use a RAG layer that retrieves only relevant passages.
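The minimum-necessary-input principle can be approximated even without a full RAG stack. A deliberately naive sketch that scores paragraphs by keyword overlap and forwards only the best matches; a production system would use embeddings rather than word overlap:

```python
def top_passages(document: str, question: str, k: int = 3) -> list[str]:
    """Return the k paragraphs most relevant to the question, so only
    those passages (not the whole document) are sent to the model."""
    terms = set(question.lower().split())
    paragraphs = [p.strip() for p in document.split("\n\n") if p.strip()]
    scored = sorted(paragraphs,
                    key=lambda p: len(terms & set(p.lower().split())),
                    reverse=True)
    return scored[:k]
```

Even this crude filter sends one paragraph instead of an entire client file, which is the exposure reduction the mitigation is after.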
No measurement framework. The firm cannot answer whether AI has produced any measurable benefit at the end of year one. The contract auto-renews on inertia or is cancelled on a vibe. Mitigation: define three to five outcome metrics on day one (time-saved per task, error rate, client response time) and instrument them at rollout, not at renewal.
No exit plan. The firm builds Custom GPTs, Claude Projects, and connectors against one platform without recording the prompts, retrieval logic, or workflow definitions outside the platform. Migration cost at year three is therefore disproportionate. Mitigation: keep prompts in source control, document workflows in writing outside the platform, and treat the platform as one component of an architecture rather than the architecture itself.
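Keeping prompts in source control works best with a small registry file that a CI check validates on every commit. A sketch, assuming each workflow entry records the model behind it and a pointer to its written documentation; the field names are illustrative, not a standard schema:

```python
REQUIRED_FIELDS = {"name", "model", "prompt", "owner", "workflow_doc"}

def validate_registry(entries: list[dict]) -> list[str]:
    """Return problems found in a version-controlled prompt registry.
    Recording the model behind each workflow keeps migration cost visible."""
    problems = []
    for i, entry in enumerate(entries):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            problems.append(f"entry {i}: missing {sorted(missing)}")
    return problems
```

Run as a CI gate, the check fails any commit that adds a workflow without naming its model and owner, which is the year-three migration insurance the mitigation describes.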
30/60/90-day rollout plan
A 30/60/90-day plan compresses a UK SME ChatGPT or Claude rollout into the time it would otherwise drift across. Each phase has a specific deliverable.
Days 1 to 30. Decide tier, residency, governance, and exit. Sign the DPA. Stand up SSO and SCIM where the tier supports them. Write the AI acceptable use policy and circulate for review. Provision seats for an initial pilot group of 5 to 15 users covering finance, operations, and a client-facing function. Define three to five outcome metrics. Run a one-day all-staff training session for the pilot group. Block consumer-tier sign-up at the email domain level. End of phase: a documented baseline ready for measurement.
Days 31 to 60. Roll out to the rest of the firm in a controlled wave. Hold weekly office hours for staff questions. Review prompt logs weekly for policy violations and use-case patterns. Capture the first round of measurement against the baseline. Tighten the AUP based on what staff are actually doing. Stand up the integrations that the pilot identified as load-bearing (typically a document management connector and a CRM connector). End of phase: firm-wide deployment with a measurement signal.
Days 61 to 90. Capture the formal year-one outcome metrics. Run a structured review of the rollout: which use cases delivered measured value, which did not, what governance gaps surfaced, and what the next-quarter priorities are. Decide whether to expand seats, add a second platform for specific functions, or move automation-heavy workloads to the API. Document the decision. End of phase: a written one-page review covering measured outcomes, governance lessons, and next-quarter priorities.
For implementation support across this rollout, see our services page and our financial services industry page. For a deeper financial baseline on AI ROI, see our 90-day AI ROI framework.
Frequently asked questions
- Which tier do I actually need: Plus, Business or Team, or Enterprise?
- Plus and Pro are individual consumer tiers with no Data Processing Addendum and are wrong for any business workflow involving personal data. Business or Team is the floor for any UK SME with more than 5 staff or any processing of personal data. Enterprise is the right tier for firms over 150 people, regulated sectors, or any contractual obligation to maintain SSO and audit logs. The decision is driven by data sensitivity and SSO requirements, not by the seat price.
- Is Claude or ChatGPT a better fit for a 25-person UK SME?
- Both fit at this scale, and the seat-cost difference is essentially flat at 20 US dollars per seat per month annual on either platform. Pick based on the dominant use case: Claude tilts toward finance, operations, contract review, and long-document work; ChatGPT tilts toward sales, marketing, omnimodal customer interaction, and broad knowledge work. Most 25-person firms standardise on one and let specialists request the other through a small number of additional seats or via the API.
- What is the difference between consumer-tier and business-tier use under UK GDPR?
- Consumer tiers (Claude Pro, ChatGPT Plus, ChatGPT Pro for individuals) do not include a Data Processing Addendum. Sending any personal data through them is an Article 28 violation regardless of any other control the firm has in place. Business and enterprise tiers include a DPA, an explicit non-training default for business inputs, and admin controls. The practical implication is a hard ban on consumer accounts for any business workflow involving personal data, enforced at the corporate device level rather than left to staff judgement.
- Do I need a DPIA for an internal AI rollout?
- If the AI is used to process personal data and the use case is novel, large-scale, involves automated decision-making, or affects vulnerable individuals, a Data Protection Impact Assessment is required under UK GDPR Article 35. Most customer-facing AI use cases and many HR ones reach the threshold. The ICO publishes a DPIA template for AI deployments. Internal-only use cases on synthetic or non-personal data may not need a formal DPIA, but a documented assessment of the data flow is still good practice.
- What does the new Codex seat in ChatGPT Business mean for a non-engineering SME?
- Nothing material if the firm has no engineers using Codex. Standard ChatGPT Business seats remain the right purchase for typical knowledge-work users at 20 US dollars per seat per month on annual billing (a 5 US dollar reduction from 2 April 2026). Codex seats are a separate, usage-based seat type designed for engineering teams driving Codex agent loops; mixing them with standard seats is supported in the same workspace, so a non-engineering firm simply does not buy them.
- How long should a UK SME AI rollout actually take?
- A 30/60/90-day plan is the right cadence for a 10 to 250 person firm. Days 1 to 30 cover the four up-front decisions, the AUP, SSO, and a 5 to 15 person pilot. Days 31 to 60 cover the firm-wide wave and the first round of measurement. Days 61 to 90 cover the year-one review and the platform decision for year two. Firms that try to compress the timeline below 30 days routinely skip the policy and measurement work, which is the source of most year-one regret. Firms that drift past 120 days lose momentum and visibility.
- What goes wrong most often in the first six months of an AI rollout?
- Five failure modes account for most of it: shadow IT (staff using consumer accounts because the corporate rollout is too slow), prompt logging gaps (no audit trail when an incident arises), lack of data minimisation (whole documents pasted in when only a question is needed), no measurement framework (year-one renewal decided on inertia), and no exit plan (Custom GPTs and Claude Projects creating disproportionate switching costs). Each is preventable with up-front decisions and a written AUP.