AI in HR and recruitment 2026: what UK employment law allows

AI in UK HR and recruitment is now widespread, with use cases spanning candidate sourcing, CV screening, video interview assessment, internal mobility, performance analytics, and workforce planning. The legal overlay in the UK is the Equality Act 2010, the ICO's published guidance on AI and data protection (with specific employment guidance), the UK GDPR rules on automated decision-making, and where firms hire into the EU, the EU AI Act's high-risk classification of AI in employment. This guide sets out where AI is being used in 2026, what the Equality Act requires of screening tools, the ICO position, the extraterritorial reach of the EU AI Act, and a practical control set HR functions can deploy.
Where is AI being used in UK HR and recruitment in 2026?
AI in UK HR in 2026 is concentrated in five workflow areas: candidate sourcing and outreach, CV and application screening, video interview assessment, internal talent mobility and succession planning, and people analytics and workforce planning. Adoption is highest at large employers and in high-volume hiring functions; SME adoption is rising as embedded AI features in standard HR platforms (Workday, SAP SuccessFactors, BambooHR, HiBob, Personio) bring AI capability to firms that have not made a separate procurement.
Candidate sourcing AI scans LinkedIn and other professional networks, drafts outreach messages, and prioritises prospects against role profiles. The legal exposure here is mostly in the messaging and data handling rather than in the decision itself. CV and application screening is where the legal exposure rises sharply: an AI that filters or ranks applicants is making (or materially contributing to) a recruitment decision and is captured by both the Equality Act and ICO guidance on automated decision-making.
Video interview assessment AI (originally most associated with HireVue and similar vendors) is the most scrutinised category. The combination of facial expression analysis, voice analysis, and structured scoring has attracted regulator attention internationally and growing criticism from UK academics and trade unions. Some major employers have stepped back from facial analysis features specifically; the underlying scoring of structured answers continues to be used. Internal mobility and succession AI raises fairness questions analogous to external screening, with the added wrinkle that protected characteristic data is more often known internally than externally.
People analytics and workforce planning AI sits at the low end of the spectrum for individual impact but the high end for aggregate impact. Models that forecast attrition, identify flight risks, or recommend organisational restructuring affect groups of employees rather than individuals, so the controls focus on accuracy, transparency, and the appropriate role of the model in management decisions. For broader strategy support across the spectrum, see our AI strategy service.
What does the Equality Act 2010 require of AI screening tools?
The Equality Act 2010 prohibits direct and indirect discrimination, harassment, and victimisation against people with the nine protected characteristics (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, sexual orientation). AI screening tools that produce different outcomes for different protected groups are a credible source of indirect discrimination claims, regardless of whether the discrimination was intended by the employer or by the AI vendor.
Three concepts do most of the work in applying the Act to AI. First, indirect discrimination: a provision, criterion, or practice that applies to everyone but puts people with a protected characteristic at a particular disadvantage, unless the practice can be objectively justified as a proportionate means of achieving a legitimate aim. AI screening that filters disproportionately against (for example) women, older candidates, or disabled applicants can amount to indirect discrimination even if the model has no explicit feature for the protected characteristic, because the Act looks at the disadvantage produced, not the mechanism that produced it; the employer must then objectively justify the practice.
Second, the duty to make reasonable adjustments for disabled candidates. AI screening tools must accommodate candidates who request adjustments (an alternative format, more time, an alternative assessment route), and the employer carries the obligation regardless of whether the vendor's tool has been built to support it. Third, the burden of proof: once a candidate puts forward facts from which a tribunal could conclude that discrimination has occurred, the burden shifts to the employer to prove that it has not. Without bias testing evidence, fairness audits, or a documented control framework, that burden is hard to discharge.
The practical implication is that UK employers using AI in recruitment should commission and retain regular bias and disparate impact testing on their screening tools, document the methodology and results, and act on findings that show disparate impact. Equality and Human Rights Commission (EHRC) guidance on algorithmic systems in employment, and the EHRC's wider work on AI and equality, set out the regulator's expectations; the EHRC has been clear that employers cannot delegate equality obligations to AI vendors. For the wider compliance framing, see our 2026 UK AI compliance checklist and the professional services industry page.
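The core of disparate impact testing is a comparison of selection rates across groups. A minimal sketch, assuming anonymised screening outcomes with a group label per candidate (the function names are illustrative, and the four-fifths ratio used below is a US-derived heuristic rather than a UK statutory threshold):

```python
# Minimal disparate impact check: compare selection rates across groups.
# Assumes anonymised outcomes as (group, passed) pairs; names are illustrative.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, passed) tuples -> {group: pass rate}."""
    totals, passes = Counter(), Counter()
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            passes[group] += 1
    return {g: passes[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's rate to the highest-rate group.
    Ratios below ~0.8 (the 'four-fifths' heuristic) warrant investigation."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Worked example: group A passes 60 of 100, group B passes 40 of 100.
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 40 + [("B", False)] * 60)
rates = selection_rates(outcomes)   # {"A": 0.6, "B": 0.4}
ratios = impact_ratios(rates)       # B's ratio is 0.4/0.6 ≈ 0.67, below 0.8
```

A real audit would add sample-size checks and a significance test before acting on a ratio, and the methodology should be written down before the audit runs.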
What is the ICO position on AI in employment decisions?
The Information Commissioner's Office (ICO) treats AI in employment as a high-risk area under UK GDPR and has published specific guidance covering AI and data protection (most recently updated in 2023, with subsequent supplementary guidance), and direct guidance on solely automated decision-making and profiling. The ICO position is that AI in recruitment and HR is permitted but is subject to enhanced obligations on lawful basis, transparency, fairness, accuracy, candidate rights, and Data Protection Impact Assessment (DPIA).
Four ICO obligations shape day-to-day deployment. First, lawful basis: a clearly documented basis for each processing activity, with consent rarely the right basis for employment decisions because of the imbalance of power between employer and candidate. Legitimate interests, contract performance, or legal obligation are usually more appropriate, with the test applied separately for each AI workflow. Second, DPIA: a DPIA is required for AI systems making automated or significantly automated decisions about candidates, and should cover the necessity and proportionality test, the risks to candidates, and the mitigations.
Third, candidate rights under Article 22 UK GDPR: where a decision is solely automated and produces a legal or similarly significant effect (declining a candidate is the canonical example), candidates have the right not to be subject to it without specific safeguards, and to obtain human intervention, express their view, and contest the decision. The practical answer for most employers is to ensure a human is genuinely in the loop on adverse decisions, not just nominally. Fourth, transparency: candidates should be told that AI is being used, what it is doing, and what the routes are to ask questions, request a human review, or object.
The ICO has been explicit that complaints about AI in recruitment are a meaningful share of its employment-related casework, and that employers without DPIAs, fairness testing, or clear candidate transparency are exposed. Enforcement action by the ICO against an employer for an AI-driven recruitment failure would be a high-profile event, and the controls in this section are the practical answer to avoiding it. For implementation help across the wider AI control set, see our AI implementation service.
Does the EU AI Act apply if you hire in the EU from the UK?
The EU AI Act applies extraterritorially to UK firms hiring into the EU where the AI system's output is used in the EU, and AI used in employment decisions is classified as high-risk under Annex III of the Act. The Act entered into force on 1 August 2024, with the high-risk obligations for employment AI taking effect on 2 August 2026. UK employers with EU hiring activity should treat the Act as an active compliance workstream rather than a future obligation.
Three high-risk obligations are particularly relevant for UK employers. First, providers and deployers of high-risk AI must implement a risk management system, with documented risk assessments, mitigations, and ongoing monitoring across the AI lifecycle. Second, data governance requirements apply to training and testing data, including representativeness, bias mitigation, and quality controls. Third, transparency, human oversight, and accuracy and robustness requirements apply, with deployers (the employer using the AI) carrying meaningful duties alongside providers (the AI vendor).
The practical position for a UK employer hiring into the EU in 2026 is to treat any AI screening or decisioning tool used on EU-based candidates as a high-risk AI system under the Act. This means insisting on EU AI Act compliance evidence from the vendor, adding an Act-specific section to the firm's DPIA and AI governance documentation, and ensuring the human oversight, accuracy, and transparency obligations are met regardless of how the vendor positions the tool. UK-only hiring sits outside the Act, but the practical floor for UK employers is rising as the EU framework matures and the ICO and EHRC reference EU practice in their own guidance. For a deeper treatment of the Act and UK SME implications, see our EU AI Act guide for UK SMEs and the strategy section of the Knowledge Hub.
What is a practical control set for AI in HR?
A practical control set for AI in UK HR and recruitment in 2026 has six elements that work together. The table below summarises them; each is straightforward to implement individually, and together they meet the Equality Act, UK GDPR, ICO, and (where relevant) EU AI Act expectations.
| Control | What it does |
|---|---|
| DPIA per AI workflow | Documents the necessity and proportionality test, risks, and mitigations for each AI use case |
| Bias and disparate impact testing | Recurring audit of model outputs against protected characteristics, with documented action when disparate impact is found |
| Human-in-the-loop on adverse decisions | Genuine human review of any AI-driven adverse outcome, with the review evidenced in the candidate file |
| Candidate disclosure and objection route | Clear notice that AI is being used and a working route to request human review or raise concerns |
| Data minimisation and retention limits | Only the data needed for the decision is processed, and candidate data is retained for the minimum period |
| Vendor diligence and DPA | Signed Data Processing Agreement with no-training guarantees, security evidence, and EU AI Act position where relevant |
Treat the control set as the operating baseline, not as best practice for advanced firms. Each control answers an obligation that already exists under UK law and ICO guidance; the EU AI Act adds further structure where EU candidates are involved.
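One way to operationalise the table is a per-workflow control register that flags gaps before a tool goes live. A minimal sketch, with all field names hypothetical:

```python
# Hypothetical control register: track the six controls per AI workflow.
# Field names are illustrative, not from any compliance product.
CONTROLS = [
    "dpia",                  # DPIA per AI workflow
    "bias_testing",          # bias and disparate impact testing
    "human_in_the_loop",     # human review of adverse decisions
    "candidate_disclosure",  # notice and objection route
    "data_minimisation",     # minimisation and retention limits
    "vendor_dpa",            # signed DPA with no-training guarantees
]

def gaps(workflow: dict) -> list[str]:
    """Return the controls not yet evidenced for a workflow."""
    return [c for c in CONTROLS if not workflow.get(c)]

cv_screening = {"name": "CV screening", "dpia": True, "bias_testing": True,
                "human_in_the_loop": True, "candidate_disclosure": False,
                "data_minimisation": True, "vendor_dpa": False}
gaps(cv_screening)  # -> ['candidate_disclosure', 'vendor_dpa']
```

In practice each entry would point to the evidence (the DPIA document, the latest audit report, the signed DPA) rather than a bare boolean, so the register doubles as the accountability record.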
Frequently asked questions
- Is AI screening of CVs legal in the UK?
- Yes, AI screening of CVs is legal in the UK, but it is subject to the Equality Act 2010, UK GDPR, and ICO guidance on automated decision-making. Employers must avoid indirect discrimination against protected groups, conduct a Data Protection Impact Assessment for the workflow, ensure a meaningful human role on adverse decisions, provide candidate transparency, and (where solely automated decisions with significant effect are made) respect Article 22 UK GDPR rights. Bias and disparate impact testing on the screening tool is a practical baseline that materially reduces tribunal and ICO risk.
- Do I have to tell candidates that AI was used in their assessment?
- Yes. ICO guidance and UK GDPR transparency obligations require candidates to be told that AI is being used, in plain language, with information about what the AI is doing and what the candidate's rights are. The disclosure can sit in the privacy notice on the application page, in the application confirmation, or both. Where solely automated decisions with significant effect are made, the disclosure obligation is heightened and Article 22 UK GDPR rights apply.
- How should bias auditing actually be done in practice?
- Bias auditing in HR AI is typically done by sampling decisions or scores produced by the tool, comparing outcome rates across protected characteristic groups (age, sex, race, disability where data permits), and testing whether disparate impact appears at statistically meaningful levels. The methodology should be documented before the audit runs, the results retained, and material findings acted on (model adjustment, vendor escalation, withdrawal of the tool from the workflow). Independent third-party audits add credibility for higher-risk or high-volume deployments and are increasingly common in regulated sectors.
- What is the ICO complaint risk for an HR team using AI?
- The ICO has been clear that AI in employment is a focus area, and complaints about recruitment AI are a meaningful share of its employment-related casework. The risk is highest where a DPIA has not been completed, where there is no meaningful human role in adverse decisions, where candidate transparency is weak or missing, or where bias testing is absent. Each of these is a credible trigger for ICO engagement, supervisory action, and in serious cases enforcement. The practical answer is the control set in the section above, deployed consistently across the AI workflows in scope.
- What if I am a UK employer hiring candidates based in the EU?
- The EU AI Act applies extraterritorially where the AI system's output is used in the EU, and AI in employment decisions is high-risk under Annex III. The Act entered into force on 1 August 2024, with the high-risk obligations taking effect on 2 August 2026. UK employers hiring EU-based candidates with AI tools should treat the workflow as in scope for EU AI Act compliance, insist on Act-specific evidence from the vendor, and ensure the deployer obligations on human oversight, transparency, accuracy, and risk management are met for the EU candidate cohort.