AI is the science of making machines smart. To avoid confusion over the terms used when discussing this technology, it’s useful to remember that “machine learning” is not quite the same thing: it’s a branch of AI in which a programme identifies patterns, learns from data and makes decisions (or reaches outputs). The computer then continues to gain knowledge to improve processes and run tasks more efficiently (think of your Amazon Alexa at home getting to know your preferences more and more accurately).
AI tools can help HR teams during the recruitment process by shortlisting candidates and even conducting video interviews. However, these tools are only as good as their human inputs: if the data fed in is skewed or carries a particular bias, the tool may not know any better than to penalise traits in candidates’ applications which the employer had not anticipated or intended.
For example, an employer might gather copies of past successful CVs and use this data to identify which candidates are likely to be successful. If past hiring practice was to hire mostly men, it’s likely the AI tool will flag the CVs of anyone who is not a man as undesirable. Indeed, Amazon had this exact problem and had to scrap the AI tool in question in 2018.
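To see how this happens mechanically, consider a minimal sketch (all CV text and the frequency-based scoring approach are invented for illustration, not Amazon’s actual system): a naive tool “trained” only on past successful applications will score down any word, however irrelevant to merit, that rarely appeared in those applications.

```python
from collections import Counter

# Hypothetical illustration: past successful hires were mostly men,
# so gendered words like "men's" dominate the training data.
past_successful_cvs = [
    "captain of the men's football team, software engineer",
    "men's chess club president, data analyst",
    "software engineer, men's rowing squad",
]

# "Training": learn word frequencies from past successful CVs.
weights = Counter(word for cv in past_successful_cvs for word in cv.split())

def score(cv: str) -> int:
    """Score a CV by how often its words appeared in past successful CVs."""
    return sum(weights[word] for word in cv.split())

# Two otherwise identical CVs differ only in one gendered word.
cv_a = "software engineer, captain of the men's football team"
cv_b = "software engineer, captain of the women's football team"

print(score(cv_a) > score(cv_b))  # True: the skewed history favours cv_a
```

The bias here is entirely inherited from the historical data; nothing in the code mentions gender, which is precisely why such skew can go unnoticed.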
To mitigate the risk of discriminatory recruitment practices, employers should:
With remote working now the norm, employers are increasingly using AI tools to monitor employees during the working day: to measure productivity and performance, to track use of social media, or to prevent the loss of confidential information. However, employees have a reasonable expectation of privacy in the workplace, so monitoring them in this way can create both legal and employee relations risks if an employer’s behaviour is not reasonable and proportionate. To mitigate these risks, employers could:
If an employer is carrying out a large-scale redundancy process in which large numbers of employees need to be interviewed, there are AI tools that can conduct those interviews and assess employees as part of the selection exercise.
As with recruitment, a tool can be programmed to pick up on certain words and rate them more highly. Issues arise if employers let the AI tool make decisions without any human element, or without understanding how the tool has reached its conclusions, and then have to defend unfair dismissal claims. In those claims the burden is on the employer to show that it dismissed for a fair reason, followed a fair process, and that dismissal was a reasonable response in the circumstances.
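A short sketch of the keyword-rating mechanism described above (the weights, function names and transcript are invented for illustration) shows what retaining that understanding can look like in practice: a tool that records a per-keyword breakdown gives the employer something it can later explain, whereas a bare score does not.

```python
# Hypothetical interview-scoring tool that rates answers by keyword weights.
KEYWORD_WEIGHTS = {"leadership": 3, "stakeholder": 2, "delivery": 2}

def rate_answer(answer: str) -> tuple[int, dict]:
    """Return a score plus a per-keyword breakdown.

    Keeping the breakdown is what lets a human later explain the outcome;
    a tool that discards it leaves the employer unable to justify the score.
    """
    words = answer.lower().split()
    breakdown = {
        kw: KEYWORD_WEIGHTS[kw] * words.count(kw) for kw in KEYWORD_WEIGHTS
    }
    return sum(breakdown.values()), breakdown

score, why = rate_answer("I focused on stakeholder needs and on-time delivery")
print(score, why)
```

A decision audit trail of this kind is exactly what was missing in the tribunal example that follows.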
Indeed, Estée Lauder had this problem recently when two make-up artists brought a claim and the employer was not able to adequately explain how the AI tool had come to the decision to make them redundant.
To best protect themselves, employers could: