How can HR get ahead of AI regulation
A government report into the impact of AI on the workplace has recommended greater regulation. What are the risks of the use of AI in people processes and how can HR audit its use in the organisation?
The printing press, the typewriter, the word processor and the mobile phone – new technology transforming how we work is nothing new. But historical advances in technology have, for the most part, been tangible.
However, the increasing use of artificial intelligence (AI) is different. Not so visible to the world at large, its use is on the rise – but how is it affecting our work practices?
AI sits behind many programmes designed to help organisations streamline processes, cut time and costs, and reduce human bias. The increase in homeworking has only served to accelerate its use.
From software that sifts job applications to programmes that monitor productivity and technology that automatically schedules work rotas, AI is constantly evolving.
Should AI be regulated?
With this accelerated growth come questions – not least around how the use of AI is (or is not) subject to regulation.
The All Party Parliamentary Group on the Future of Work was formed to consider the opportunities and challenges offered by technology in the workplace.
As part of its recent inquiry into AI and surveillance in the workplace, the group published its final report, The New Frontier: Artificial Intelligence at Work.
This revealed that AI in the workplace has increased dramatically in recent years, but meaningful regulation has struggled to keep up.
Evidence from the inquiry shows that this lack of regulation leaves workers exposed to systems of which they have limited understanding and the potential impact of which has not always been fully assessed prior to implementation.
AI in HR
So how is AI being used in an HR context and what developments can the profession expect in light of this report?
For many, AI is perceived to be all about robots. Robots, however, are only the tip of a very large iceberg. AI sits behind a wide variety of technologies that affect our working lives. Here are a few examples:
Monitoring software: Most monitoring software – designed to track time (active and idle), monitor movement (keystroke, mouse or physical), or monitor webcam activity – is powered by AI. Many organisations have used this to monitor productivity while employees have been working from home.
Resourcing software: Elements of the resourcing process will often be powered by AI. For example, AI is behind software used to sift application forms or to carry out online assessments. These programmes streamline recruitment processes, freeing up time for more meaningful engagement with candidates.
Data Subject Access Requests (DSARs): As many HR practitioners know, DSARs can be time-consuming. To reduce this burden, programmes powered by AI can identify the relevant personal data across data sources. While additional checks will usually be needed, automating the first “cut” can dramatically reduce the time and cost of these exercises compared with traditional manual review.
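To illustrate the idea of an automated first “cut”, here is a minimal sketch in Python. It assumes the documents are already available as plain text and simply flags those mentioning any of the data subject’s identifiers; real DSAR tools use far more sophisticated matching, and all names and file labels below are purely illustrative.

```python
# Hypothetical sketch: flag documents that mention a data subject,
# leaving a much smaller set for manual review. Illustrative only.

def first_cut(documents, identifiers):
    """Return the IDs of documents mentioning any of the identifiers.

    documents   -- dict mapping a document ID to its plain-text content
    identifiers -- list of strings identifying the data subject
                   (e.g. name, email address)
    """
    hits = []
    for doc_id, text in documents.items():
        lowered = text.lower()
        # Case-insensitive substring match against each identifier
        if any(ident.lower() in lowered for ident in identifiers):
            hits.append(doc_id)
    return hits

# Example corpus (invented for illustration)
docs = {
    "email_001.txt": "Hi team, Jane Doe asked about her annual leave.",
    "email_002.txt": "Reminder: fire drill on Friday.",
    "note_003.txt": "Payroll query from jane.doe@example.com resolved.",
}

print(first_cut(docs, ["Jane Doe", "jane.doe@example.com"]))
# → ['email_001.txt', 'note_003.txt']
```

Even this crude filter shows why the manual burden shrinks: reviewers need only check the flagged subset, with the usual human verification pass applied before disclosure.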
These are just a few examples. Our appetite for innovation continues to grow and the government’s National AI Strategy shows its determination to harness the capacity of AI to transform business.
The benefits of using AI in the workplace are clear. But, as highlighted in the report, the use of AI is not without risk.
What are the risks?
As part of its inquiry, the group found that where AI was used in electronic shift allocation systems, workers reported feeling constantly “on call”, with a knock-on effect on their wellbeing. These systems remove an element of human autonomy in shift allocation, and the resulting lack of choice led to fears about, for example, the ability to meet caring responsibilities.
AI technologies also have the potential to store vast amounts of personal data and with that comes data protection risks.
Equally, under the GDPR, there is a right not to be subject to decisions based solely on automated processing where they affect an individual’s legal rights. Compliance with data protection legislation always needs to be addressed when implementing AI technologies in the workplace.
We are also starting to see equality law challenges. While AI is often marketed on the basis that it removes human bias, that does not mean its outputs are free from discrimination.
For example, legal challenges have been raised over the use of facial recognition software, with arguments that it can produce inaccurate results for workers from black and minority ethnic backgrounds.
Similarly, criticism has been raised over how algorithms target job adverts on social media. Nor is the creation of AI a human-free process – AI is built by human coders, whose own biases can stray into their algorithms. A challenge inherent in the increased use of AI is how to identify and address potential bias at the programming stage.
Is regulation on the horizon?
While the benefits of AI are not to be underestimated, we could be in for a more regulated landscape, and the report makes a number of recommendations.
These include new legislation that would require employers to carry out an AI-specific impact assessment before implementing new AI technologies, as well as a recommendation that workers be entitled to a full explanation of the significant impacts of AI.
Workers would also have to be involved in the design and development of algorithms likely to have a significant impact on them, with trade unions being given a collective right to enforce this, as well as having the right to be consulted when employers look to implement high-risk AI tools.
The timeframe for the introduction of regulation is currently uncertain, however. In the meantime, employers keen to adopt best practice may want to take the following steps:
Audit: Where you already use AI, consider an audit to assess what you have in place and any risks that may need to be addressed.
Due diligence: Consider carrying out an AI impact assessment before implementing any new technologies – covering how the AI accesses data, the impact on employee wellbeing, any potential discrimination risk, and how AI programmes might affect employment terms (for example, productivity monitoring may affect pay).
Communicate: If your business is considering implementing AI technology that may affect employees, involving them in the conversation early is more likely to encourage buy-in, or at least give you early warning of where challenges may arise.
Originally published on Personnel Today, https://www.personneltoday.com/hr/how-hr-can-get-ahead-of-ai-regulation/