GDPR and AI assistants for UK private practitioners: a practical compliance view

At a glance
- UK GDPR applies to any AI assistant that processes personal data, regardless of whether the data is "merely a name" or detailed sensitive information. The threshold for being in scope is low.
- A hosted cloud AI provider is almost always a processor when the buyer routes prompts containing personal data to it, and a Data Processing Agreement is required. Some configurations involve joint controller analysis.
- The third-country transfer question is the single biggest compliance friction. UK personal data routed to a US-based AI provider needs a transfer mechanism: the UK-US Data Bridge, Standard Contractual Clauses with supplementary measures, or a route via a UK or EU regional cloud endpoint.
- A DPIA is required where AI processing is high-risk. For most private-practice AI deployments, the precautionary read is to run a DPIA regardless.
- On-premises deployment is the lowest-friction compliance route. No third-party processor, no third-country transfer, no cross-border data flow, and a much shorter DPIA.
- This article is technical and architectural, not legal advice. The DPO or external counsel signs off the legal analysis; we provide the technical input that supports it.
What UK GDPR actually requires for AI tooling
UK GDPR is the post-Brexit UK equivalent of the EU GDPR, supplemented by the Data Protection Act 2018. The framework is largely unchanged from the EU position in substance, with a small number of UK-specific divergences and the UK ICO as the supervisory authority.
For a UK private practitioner deploying an AI assistant, the obligations that come up most often are:
- Lawful basis for processing. The practice needs a lawful basis under Article 6 for any personal data the AI assistant touches. For most professional services, this is "performance of a contract" with the client or "legitimate interests" of the practice, with the appropriate legitimate interest assessment where relevant. Special category data (health, racial or ethnic origin, religion, biometrics) needs an Article 9 condition in addition.
- Transparency. The practice's privacy notice has to describe the categories of recipient that personal data is shared with. An AI assistant routed to a US-based hosted provider is a category of recipient that needs to appear in the privacy notice.
- Data minimisation. Only the personal data necessary for the specific purpose should be processed. Sending a full client file to an AI for a question that only requires a subset of fields is a data minimisation question, not just a workflow question.
- Accountability. The practice has to be able to demonstrate compliance, not just be compliant. The DPIA, the AI policy, the records of processing, and the supplier due diligence file are the documents that demonstrate it.
- Security. Article 32 requires appropriate technical and organisational measures. For AI tooling this typically means access controls, encryption in transit and at rest, audit logging, and a documented incident response process.
- International transfers. Personal data transferred outside the UK needs a transfer mechanism. The Schrems II analysis applies post-2020.
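The data minimisation obligation above is the one most naturally enforced in code: the assistant should see only the fields a given task needs, not the full client record. The sketch below illustrates one way to do that with a per-workflow field allowlist. The field names, record shape, and workflow names are hypothetical, not taken from any specific practice management system.

```python
"""Data-minimisation sketch: expose only the fields a named task needs.

Illustrative assumptions throughout: the record fields, the workflow
names, and the field allowlists are all hypothetical.
"""

# Hypothetical full client record as it might sit in a practice database.
client_record = {
    "name": "Jane Doe",
    "date_of_birth": "1980-04-12",
    "ni_number": "QQ123456C",
    "health_notes": "...",  # special category data (Article 9)
    "matter_summary": "Lease renewal for commercial premises",
    "correspondence": ["..."],
}

# Map each approved workflow to the minimum field set it requires.
TASK_FIELDS = {
    "draft_engagement_letter": {"name", "matter_summary"},
    "summarise_matter": {"matter_summary", "correspondence"},
}


def minimised_view(record: dict, task: str) -> dict:
    """Return only the fields the named task is approved to see."""
    allowed = TASK_FIELDS.get(task)
    if allowed is None:
        raise ValueError(f"Task {task!r} is not an approved workflow")
    return {k: v for k, v in record.items() if k in allowed}


# The prompt is built from the minimised view, never the full record.
view = minimised_view(client_record, "draft_engagement_letter")
# view == {"name": "Jane Doe",
#          "matter_summary": "Lease renewal for commercial premises"}
```

The design point is that an unknown task fails closed: a workflow that has not been assessed has no field allowlist and therefore sees nothing.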
Is the AI provider a processor?
The answer in almost every commercial AI deployment is yes. A processor under UK GDPR is any third party that processes personal data on the controller's behalf. When the practice sends a prompt containing personal data to a hosted LLM, the provider is processing that personal data on the practice's behalf and is therefore a processor.
The implications:
- A Data Processing Agreement is required. Hosted AI providers, including Anthropic and OpenAI, publish standard DPAs at enterprise tiers. Consumer-tier subscriptions typically do not provide a DPA, which is one of the principal reasons consumer subscriptions are usually not appropriate for personal data processing in a practice context.
- The DPA has to address Article 28's required terms: subject-matter, duration, nature and purpose of processing, types of personal data, categories of data subject, sub-processors, security measures, audit rights, and so on.
- Some configurations involve joint controller analysis. Where the AI provider has independent purposes for processing the data (training, product improvement), the boundary between processor and joint controller is sometimes contested.
In a Private AI Concierge deployment, by contrast, the local AI agent is not a third-party processor. The agent runs on hardware owned and operated by the practice, on the practice's network. The practice is the sole controller and there is no processor in the chain; there is no DPA to negotiate because there is no third party to negotiate with.
Hybrid mode reintroduces the processor question for the workloads that are routed to the cloud LLM. The DPA, the supplier due diligence, and the transparency obligations all apply for that subset of processing.
The third-country transfer question
The single biggest compliance friction in UK GDPR for AI tooling is the third-country transfer analysis. Most major AI providers are US-based companies with US-based core processing, even where they operate UK or EU regional endpoints.
The Schrems II judgment in 2020 invalidated the EU-US Privacy Shield and held that transfers to the US relying on Standard Contractual Clauses may need supplementary measures. The post-Brexit UK position broadly follows the EU framework with UK-specific transfer mechanisms.
The transfer mechanisms available in 2026:
- The UK-US Data Bridge. A UK-specific extension of the EU-US Data Privacy Framework, in force since October 2023. Allows transfers to certified US recipients without further safeguards. Anthropic's coverage status is what the practice should confirm at the time of deployment.
- Standard Contractual Clauses (UK Addendum). The UK ICO's adapted SCCs work alongside the EU SCCs for UK-relevant transfers. Often used in conjunction with supplementary technical and contractual measures.
- UK or EU regional endpoints. The Claude API delivered through AWS Bedrock in the Europe (London) region (eu-west-2), or through Google Cloud Vertex AI at EU regions, can keep the data within UK or EU residency. This avoids the third-country question for the inference workload, though the contractual relationship with the provider remains a separate consideration.
- Adequacy. Not currently available for general-purpose US transfers; the UK relies on the Data Bridge for the equivalent regime.
For a Private AI Concierge deployment in local-only mode, the third-country transfer question does not arise. No personal data leaves the UK or even the buyer's network. The DPIA section on international transfers reduces to "not applicable" for the local-only scope.
For Hybrid mode, the third-country transfer analysis applies for the workloads that route to the cloud LLM. The default routing endpoint in our Hybrid configuration is AWS Bedrock in Europe (London) or Europe (Ireland), which keeps the inference within UK or EU residency. Where the practice prefers the direct Anthropic API for capability or commercial reasons, the UK-US Data Bridge is the relevant transfer mechanism and the DPIA documents the reliance.
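A hybrid routing policy of this kind reduces to a small, auditable decision function: which data classes may leave the local stack at all, and which regional endpoint they may use when they do. The sketch below illustrates the shape of such a policy; the data class names, endpoint identifiers, and the policy itself are illustrative assumptions, not any product's actual configuration.

```python
"""Hybrid routing sketch: per-request decision between the local stack
and a UK-residency cloud endpoint.

The data classes, endpoint names, and permitted-class set are
illustrative assumptions.
"""

from dataclasses import dataclass

LOCAL = "local-inference"
BEDROCK_UK = "aws-bedrock-eu-west-2"  # AWS Europe (London)

# Data classes the hybrid policy permits to be routed off-premises.
PERMITTED_CLOUD = {"public", "client_personal"}


@dataclass
class Request:
    data_class: str       # e.g. "special_category", "client_personal", "public"
    needs_frontier: bool  # does the task need the larger cloud model?


def route(req: Request) -> str:
    """Apply the hybrid policy for one request."""
    if req.data_class not in PERMITTED_CLOUD:
        return LOCAL      # special category / privileged data never leaves the stack
    if not req.needs_frontier:
        return LOCAL      # default local whenever local capability suffices
    return BEDROCK_UK     # permitted classes go to the UK-residency endpoint
```

The deny-by-default shape matters for the DPIA: the international transfer section can point at one function and one permitted-class set rather than a diffuse description of staff behaviour.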
When is a DPIA required?
UK GDPR Article 35 requires a DPIA where processing is "likely to result in a high risk" to the rights and freedoms of data subjects. The ICO publishes a list of indicators that point to high risk. For AI tooling in a private practice, the indicators that most often apply include:
- Innovative use of new technology.
- Processing of special category data (health, in clinical practice; potentially religious or ethnic data in immigration practice).
- Processing on a large scale, where the practice handles many client matters.
- Processing that involves automated decision-making or profiling, even where the human-in-the-loop pattern is followed.
For most private-practice AI deployments, the precautionary read is to run a DPIA regardless of whether one is strictly required. The DPIA is the document that demonstrates accountability if the ICO ever asks, and it is a document of professional record that survives changes of staff, changes of provider, and changes of practice ownership.
A DPIA structure for a Private AI Concierge deployment
The structure below is a starting template. It is the technical and architectural form of the DPIA; the lawful-basis analysis and the data subject rights impact assessment remain the work of the DPO or external counsel.
- Description of the processing. The on-premises hardware, the agent framework, the inference engine, the model, and the channels and tools the agent has access to. State the deployment mode (local-only or Hybrid).
- Necessity and proportionality. Why the AI assistant is necessary for the practice and why the on-premises route was chosen over alternatives. The minimisation analysis: which data classes the agent has access to and why.
- Risk identification. The risks specific to AI deployment, including hallucination affecting client work, prompt injection from external content, accidental disclosure through channels, and the operational risk of a drifted local stack.
- Mitigations. The technical and organisational measures applied to each risk. Includes the agent's tool surface, the channels' access controls, the device security posture, the retainer's CVE response process, and the human-in-the-loop pattern for any client-facing output.
- Hybrid policy (where Hybrid mode is enabled). The data classes permitted to be routed, the routing rules, the cloud endpoint, the transfer mechanism, and the supplier DPA reference.
- Data subject rights. The practice's process for handling subject access requests, rectification, erasure, and the Article 22 safeguards around solely automated decision-making.
- Review schedule. Quarterly review of the DPIA against the current technical and operational state. Trigger conditions for an out-of-cycle review (model upgrade, hybrid policy change, channel addition, major CVE).
The DPIA document does not need to be long to be defensible. Most well-structured private-practice DPIAs run 8 to 15 pages.
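The quarterly-plus-trigger review schedule above can be expressed as a simple check, which is one way to keep the review from being forgotten between staff changes. The trigger event names and the 91-day quarter are illustrative assumptions.

```python
"""DPIA review-schedule sketch: a review is due on the quarterly cycle
or when a trigger event has occurred since the last review.

The trigger event names and the 91-day quarter are illustrative.
"""

from datetime import date, timedelta

TRIGGER_EVENTS = {
    "model_upgrade",
    "hybrid_policy_change",
    "channel_addition",
    "major_cve",
}


def review_due(last_review: date, today: date, events_since: set) -> bool:
    """True when a quarterly or trigger-driven DPIA review is due."""
    quarterly = today - last_review >= timedelta(days=91)
    triggered = bool(events_since & TRIGGER_EVENTS)
    return quarterly or triggered
```

A check like this can sit in the practice's existing task scheduler; the point is that the trigger conditions named in the DPIA are machine-checkable, not folklore.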
A defensible AI policy for a small UK private practice
The DPIA describes the deployment. The AI policy describes how staff use it. For a 5-person practice, a defensible AI policy typically covers:
- Permitted use. The named workflows the assistant is configured for, and the implicit boundary that anything outside those workflows is not approved.
- Prohibited use. Categories of data that must not be entered (for example, third-party privileged material received under undertaking, material covered by court orders, material from clients who have not been informed of AI use).
- Human-in-the-loop. The requirement that any output reaching a client, a court, or a regulator has been reviewed by a competent human. The assistant produces drafts, not finished work.
- Logging and audit. The expectation that interactions with the assistant are logged and may be reviewed.
- Incident reporting. The internal process for reporting suspected AI errors, accidental disclosure, or any other incident.
- Training. The mandatory induction for new staff and the periodic refresh requirement.
The policy should be short, readable, and signed by every staff member. A 4-page policy that everyone has read is more defensible than a 30-page policy that nobody finished.
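The logging-and-audit expectation in the policy above can be met with something as simple as an append-only log of assistant interactions. The sketch below records a hash of the prompt rather than its content, so the log itself does not become a second store of personal data; the field names and file layout are illustrative assumptions.

```python
"""Audit-log sketch: append-only JSON-lines record of assistant use.

Field names and the JSON-lines layout are illustrative assumptions.
Logging the prompt's hash rather than its text keeps the audit trail
itself out of scope for most of the data it describes.
"""

import hashlib
import json
from datetime import datetime, timezone


def log_interaction(path: str, user: str, workflow: str, prompt: str) -> None:
    """Append one interaction record to the JSON-lines audit log."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "workflow": workflow,
        # Hash, not content: the log proves an interaction happened
        # without duplicating the personal data it involved.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

One line per interaction, append-only, reviewable with standard tools; that is usually enough to make the policy's "interactions are logged and may be reviewed" sentence true in practice.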
Why on-premises is the lowest-friction compliance route
Working through the above, the on-premises route is materially easier to document for UK GDPR than the cloud route. The reasons:
- No third-party processor in the chain (in local-only mode).
- No third-country transfer (in local-only mode).
- No DPA to negotiate, no supplier due diligence file to maintain.
- The transparency obligation in the practice's privacy notice is shorter and simpler.
- The DPIA's international transfer section reduces to not applicable.
- The supervisory authority risk profile is lower because the deployment topology is more conservative.
This is not the same as saying the cloud route is non-compliant. Cloud LLMs under proper DPAs at UK or EU regional endpoints are workable for most UK businesses, and the major hosted providers have invested heavily in the compliance infrastructure to make that route defensible.
The honest framing is that for UK private practitioners, particularly solo solicitors, IFAs, family offices, and clinicians, the cloud-route compliance overhead is high enough that the on-premises route is sometimes the simpler answer commercially as well as legally. The decision is not about which route is "compliant"; it is about which route best matches the data sensitivity, workflow mix, and operational tolerance of the specific practice.
Where to start
If you are considering an AI assistant in a UK private practice and need to work through the GDPR analysis as part of the procurement, the starting point is a free 30-minute scoping call. We work through the data classes the assistant would touch, the workflows in scope, the transfer position you are willing to accept, and the deployment topology that fits.
The relevant service page for the on-premises route is Private AI Concierge. The companion articles on local AI vs cloud AI and Private AI for solicitors cover related angles.
This article is a technical and architectural view. The legal sign-off remains with the practice's DPO or external counsel.
Frequently asked questions
- Is a hosted AI provider always a processor under UK GDPR?
- Almost always, when the buyer routes prompts containing personal data to it. The provider processes the personal data on the buyer's behalf, which is the definition of a processor under UK GDPR Article 4(8). Some configurations involve joint controller analysis where the provider has independent purposes for processing the data. A Data Processing Agreement is required in either case.
- What is the UK-US Data Bridge and how does it apply to AI tools?
- The UK-US Data Bridge, in force since October 2023, is a UK extension of the EU-US Data Privacy Framework. It allows transfers of UK personal data to certified US recipients without further safeguards. The relevance to AI tooling depends on whether the specific AI provider is a certified recipient. Buyers should confirm the certification status at the time of deployment, not assume it.
- Is a DPIA always required for an AI assistant deployment?
- Required where processing is likely to result in high risk to data subjects. The ICO's high-risk indicators include innovative use of new technology, special category data processing, processing on a large scale, and processing involving automated decision-making. For most private-practice AI deployments, the precautionary read is to run a DPIA regardless of whether one is strictly required, because the DPIA is the accountability document that survives staff and provider changes.
- Does on-premises AI eliminate the need for a DPIA?
- No. UK GDPR still applies to any personal data processing, including processing on the buyer's own hardware. What on-premises does eliminate is the third-party processor analysis, the third-country transfer question, and the supplier due diligence file. The DPIA is shorter and more straightforward, but it still exists.
- Can the AI policy be a single document for the whole practice?
- Yes, and we recommend it. A 4-page AI policy that every staff member has read is more defensible than a 30-page policy that nobody finished. The policy covers permitted use, prohibited use, human-in-the-loop, logging, incident reporting, and training. It complements the DPIA but addresses staff behaviour rather than technical architecture.