How HR can mitigate the risks and reap the rewards of AI at work

Personnel Today

AI is the science of making machines smart. To avoid confusion over the terms used when discussing this technology, it’s useful to remember that “machine learning” is not quite the same thing: it’s a branch of AI in which a program identifies patterns, learns from data and makes decisions (or reaches outputs). The computer then continues to gain knowledge so that it can improve processes and run tasks more efficiently (think of your Amazon Alexa at home learning your preferences more and more accurately).

Using AI in recruitment

AI tools can help HR teams during the recruitment process by shortlisting candidates and even conducting video interviews. However, these tools are only as good as their human inputs: if the data fed into them is skewed or carries a particular bias, the tool may not know any better than to penalise traits in candidates’ applications that the employer had not anticipated or intended.

For example, an employer might gather copies of past successful CVs and use this data to predict which candidates are likely to be successful. If past hiring practice was to hire mostly men, the AI tool is likely to flag the CV of anyone who is not a man as undesirable. Indeed, Amazon had this exact problem and had to scrap the AI tool in question in 2018.
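
To see how this happens mechanically, here is a minimal sketch (in Python with scikit-learn; the data is synthetic and the feature names are hypothetical) of a screening model trained on biased historical hiring decisions. Because the training labels reflect the past bias, the model learns to penalise a gender-correlated feature even though nobody programmed it to:

```python
# Hypothetical illustration: a screening model trained on biased
# historical hiring data reproduces that bias in its scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Synthetic CV features: years of experience, and a marker that
# correlates with gender (e.g. "attended a women's college").
experience = rng.uniform(0, 10, n)
womens_college = rng.integers(0, 2, n)

# Historical "hired" labels: ability matters, but past practice also
# systematically disadvantaged one group - the bias baked into the data.
hired = (experience + rng.normal(0, 1, n) - 2.5 * womens_college) > 5

X = np.column_stack([experience, womens_college])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on the gender-correlated feature is strongly
# negative: the tool now penalises it, much as in the Amazon case.
print(model.coef_)
```

No employer chooses this outcome; it falls straight out of training on historical decisions, which is why the questions below about data sets and testing matter.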

To mitigate the risk of discriminatory recruitment practices, employers should:

  • Ask lots of questions – before buying any AI software, employers should ask the developer what is in the data set, how the tool works, whether any measures have been taken to prevent discrimination (for example, consultation with a diversity consultant or a diverse team of developers) and whether the developer can provide references from similar employers who have bought the tool.
  • Test – test the product alongside implementing it; for example, run a sample of the live data through the process as it would have been carried out manually and compare the results (see the sketch after this list). Showing that the results would have been the same or similar even if the AI tool had not been used could help when defending allegations of unfairness and discrimination.
  • Carry out a pre-emptive equality impact assessment – no legislation requires this of private sector employers, but doing so would involve identifying any negative impact the proposed AI may have on certain groups and either taking action to redress it or explaining why the employer has chosen to go ahead regardless.
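
As an illustration of the testing point above, here is a minimal sketch (again in Python; the sample data and the groups are hypothetical) of one common heuristic: comparing each group’s selection rate against the most-selected group’s, along the lines of the US EEOC “four-fifths” adverse impact guideline:

```python
# Hypothetical adverse-impact check on a sample of live data: compare
# each group's selection rate against the most-selected group's rate.
from collections import Counter

# (group, selected?) pairs - in practice drawn from a sample of real
# applications, scored by the AI tool and/or the manual process.
sample = [("men", True), ("men", True), ("men", False),
          ("women", True), ("women", False), ("women", False)]

totals, selected = Counter(), Counter()
for group, was_selected in sample:
    totals[group] += 1
    selected[group] += was_selected

rates = {group: selected[group] / totals[group] for group in totals}
benchmark = max(rates.values())

for group, rate in rates.items():
    # The "four-fifths" heuristic flags a selection rate below 80% of
    # the highest group's rate as potential adverse impact.
    status = "REVIEW" if rate < 0.8 * benchmark else "ok"
    print(f"{group}: selected {rate:.0%} vs benchmark {benchmark:.0%} -> {status}")
```

A failed check is not proof of discrimination, but it tells the employer where to look before a tribunal asks the same question.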

Monitoring employees

With remote working now the norm, employers are increasingly using AI tools to monitor employees during the working day: to measure productivity and performance, to track use of social media, or to prevent the loss of confidential information. However, employees have a reasonable expectation of privacy in the workplace, so monitoring them in this way can create both legal and employee relations risks if the employer’s behaviour is not reasonable and proportionate. To mitigate these risks, employers could:

  • Tell employees what is happening and set expectations – make sure any data privacy and/or contractual documentation explains to employees how they will be monitored, what exactly will be monitored (for example, internet or telephone use), what information relating to them the employer will hold, and for what purpose.
  • Don’t monitor for the sake of it – employers shouldn’t collect more information than is reasonably necessary. For example, a tool that logs an employee’s every keystroke and mouse movement or switches on webcams is very likely to be excessive.
  • Carry out an impact assessment – if employers are going to monitor on a large scale, they will need to carry out and document a data protection impact assessment.
  • Question the data – employers should consider individual employees’ circumstances and not take the AI tool’s recommendations at face value. There may be a reason why someone has been less productive on a certain day or at certain times of day – childcare responsibilities, for example – and employers should be careful not to indirectly discriminate against employees.

Redundancy

If an employer is carrying out a large-scale redundancy process that involves interviewing large numbers of employees, there are AI tools that can conduct those interviews as part of the redundancy selection process.

As when using AI for recruitment, a tool can be programmed to pick up on certain words and rate them more highly. Issues arise if employers let the AI tool make decisions without a human element or an understanding of how the tool reached those decisions, and then have to defend unfair dismissal claims, where the burden is on the employer to show that it dismissed for a fair reason, followed a fair process and that dismissal was a reasonable response in the circumstances.

Indeed, Estée Lauder had this problem recently when two make-up artists brought a claim and the employer was not able to adequately explain how the AI tool had come to the decision to make them redundant.

To best protect themselves, employers could:

  • Keep a human element – managers need to be able to explain why a particular employee from a pool is being made redundant and should be involved at all stages of the process. Treat the AI tool’s outputs as a suggestion rather than a definitive answer.
  • Keep a paper trail – this is key. Document the results of any testing carried out and keep a record of any information gleaned from the AI developer about how the tool comes to its decisions and the data set it was trained on.