AI Act Comes Into Force In EU: How Will It Affect HR?
The European Union’s AI Act came into force on 1 August, marking the first step in establishing a regulatory and legal framework for the use of AI systems within Europe.
The act categorises AI systems according to their potential impact on safety, human rights and societal wellbeing. Systems will be classed as prohibited, high-risk or low-risk. Most AI systems embedded in HR will come under scrutiny from August 2026, experts say.
Many UK firms will have to follow the EU law despite the UK not being part of the bloc; as a result, any future UK legislation is expected to track the EU’s rules closely.
High-risk systems, which will be subject to the strictest requirements, are those with a significant impact on people’s safety, wellbeing and rights.
Low-risk applications include AI-enabled video games and spam filters. The vast majority – about 85% – of AI systems currently used in the EU fall into this category, but this proportion may fall as more AI enters the workplace.
Under the Act, failure to comply could trigger fines of up to 7% of a company’s annual global turnover.
Implementation will be staggered, so firms will have time to weigh up the use of systems and establish their own monitoring and compliance procedures.
Six months from now, AI practices posing unacceptable risks to health and safety or fundamental human rights will be banned; in nine months, the AI Office will finalise the codes of conduct covering the obligations of developers and deployers; and in one year, the rules for providers of general-purpose AI (GPAI) – such as ChatGPT – will come into effect, and organisations must align their practices with them.
AI in Employment
AI used in employment will have to comply from August 2026: such systems are included among those with the potential to cause significant harm to health, safety, fundamental rights, the environment, democracy and the rule of law.
Thomas Regnier, a spokesperson for the European Commission, said: “What you hear everywhere is that what the EU does is purely regulation and that this will block innovation. This is not correct. The legislation is not there to push companies back from launching their systems – it’s the opposite. We want them to operate in the EU but want to protect our citizens and protect our businesses.”
EU competition chief Margrethe Vestager added: “The European approach to technology puts people first and ensures that everyone’s rights are preserved.”
Among the AI practices to be banned as posing unacceptable risk, from February 2025, are biometric categorisation systems that claim to sort people into groups based on politics, religion, sexual orientation and race. Also banned will be the untargeted scraping of facial images from the internet or CCTV, emotion recognition in the workplace and educational institutions, and social scoring based on behaviour or personal characteristics.
Generative AI
Generative AI models such as ChatGPT will be subject to the law from August 2025: developers will need to evaluate models, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on their energy efficiency.
Law firm Clifford Chance advises that the Act “specifically classifies certain AI systems in employment, such as AI tools intended to be used in recruitment, selection, and decision-making processes related to work-related relationships, as high-risk – and hence subject to strict obligations.”
Nils Rauer, Pinsent Masons partner and AI specialist, said companies looking to be compliant will face a major administrative challenge: “You need to document what you did in terms of [AI model] training. You need to document to some extent how the processing works and … for instance in an HR surrounding, on what basis the decision is taken by the AI to recommend candidate A instead of candidate B. That transparency obligation is new.”
Experts have warned that companies that buy and modify AI tech for HR purposes could be caught in both the “provider” and “deployer” categories.
Transparency
Companies could also fall foul of the law unintentionally if HR employees use tools such as ChatGPT for recruitment purposes without management’s knowledge.
Agur Jõgi, chief technology officer of global cloud software firm Pipedrive, told Personnel Today that under the Act, “Transparency is now a critical guiding principle. Companies cannot afford to gloss over the finer details around AI operations. AI can’t be ‘left to its own devices’, it requires human supervision.
“There is a degree of trepidation around runaway dangers from unchecked AI, and it’s up to leaders to reassure the wider community that emerging technology is in safe hands.”
Originally published on Personnel Today, https://www.personneltoday.com/hr/ai-act-comes-into-force-in-eu-how-will-it-affect-hr/