AI in the workplace

The prospect of introducing Artificial Intelligence or “AI” in the workplace is an exciting one, but for those of us who are still getting to grips with Excel, it can seem risky at best and terrifying at worst! However, AI is already here, is already widely used in workplaces and is here to stay. When used and monitored properly, it’s an innovative and effective tool for workplace management and can save HR departments significant time and cost. In fact, the vast majority of employers and employees likely use AI systems in one form or another in their day-to-day working lives: AI can be something as seemingly simple as the spam filter on an email inbox.

What is AI and how may it be used in an employment context?

Simply put, AI is the science of making machines smart. At its core is an algorithm (a computer program) created by a programmer, which tells the computer what to do and what decision to make. A particular branch of AI is machine learning, where a program identifies patterns, learns from data and makes decisions (or produces outputs). For example, in an interview scenario the algorithm might mark answers containing certain buzz words more highly. The program will continue to learn and refine what ‘good’ looks like based on the data fed into it over time. Remember, AI is based on human input and the quality of the output ultimately depends on what is fed in!

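As a rough illustration of the interview example above, keyword-based scoring can be sketched in a few lines of Python. This is a minimal, hypothetical sketch - the buzz words, weights and function names are invented for illustration, not taken from any real product:

```python
# Hypothetical sketch of keyword-based scoring, as in the interview
# example above. The buzz words and weights are invented for illustration.
BUZZWORD_WEIGHTS = {"leadership": 3, "collaborative": 2, "delivered": 2}

def score_answer(answer: str) -> int:
    """Score a free-text answer by summing the weights of any buzz words it contains."""
    words = answer.lower().split()
    return sum(weight for word, weight in BUZZWORD_WEIGHTS.items() if word in words)

print(score_answer("I delivered a collaborative project"))  # 2 + 2 = 4
```

Even a toy example like this shows why the quality of the output depends on what is fed in: if the chosen buzz words reflect one group's way of speaking, the scores will too.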
AI can be used across the whole lifecycle of employment from hiring to firing, for example in the following circumstances:

  • When recruiting or screening candidates to sift through CVs to search for key words.
  • During employment, to monitor or measure performance e.g. to set productivity targets or to monitor for the loss of confidential information.
  • When considering dismissing employees e.g. in a large-scale redundancy process, a program can be used to conduct interviews and assess employees as part of the redundancy selection process.

What are the risks of implementing AI systems in the workplace?

Implementing AI systems can be very helpful for employers, but they should be alive to the accompanying risks so that these can be minimised. The main risks of implementing these systems with little moderation are:

  • Indirect Discrimination: where applying the AI system to everyone places groups sharing a protected characteristic at a particular disadvantage. An employer will need to show that use of the system was justified, i.e. that it was a proportionate means of achieving a legitimate aim. For example, employees flagged as unproductive at work could claim this was unfair if their reduced productivity was due to childcare responsibilities, potentially amounting to indirect sex discrimination.
  • Failure to make reasonable adjustments: employers have a duty to make adjustments if disabled workers or candidates are placed at a disadvantage by one of the employer’s requirements or procedures. Employers should check upfront whether their processes could disadvantage disabled candidates or employees and consider what adjustments could be made to the AI system to help remove that disadvantage. Employers should also ask candidates or employees whether they require adjustments before requiring them to use AI.
  • Direct Discrimination: where someone is treated less favourably on the grounds of a protected characteristic. This will normally be harder for an employee or worker to show than indirect discrimination. However, employers should check the AI algorithm and data set to check it isn’t evidently discriminatory and that it does not become directly discriminatory over time with machine learning.
  • Unfair Dismissal: employees who have been employed for at least two years continuously have the right not to be unfairly dismissed. If an employee is dismissed, the burden is on the employer to show that they dismissed for a fair reason, followed a fair process and that dismissal was a reasonable response in the circumstances. The issue here is that if the AI system effectively makes the decision (for example, it decides that a trigger point justifying dismissal has been reached), the dismissal is risky and a human will still need to be able to explain the reasons for it.
  • Data Protection: if using AI to monitor keystrokes and social media activity, employers should consider whether there is a legitimate reason for doing so and whether it is necessary and proportionate. There is also a right not to have a decision with legal effect made about you based solely on automated processing - human involvement is needed except in very limited circumstances, and employees also have the right to have the decision reconsidered if they request it. As a general point, personal data will go into the AI process and the output is also likely to amount to personal data, so employers will still need a lawful ground for processing and must comply with their other data protection obligations.

What should employers do if they do use AI in their business?

If employers do choose to go ahead with AI, below are some things they will want to put in place to help minimise the risk of claims and to defend any that do arise:

  • Consider carrying out a pre-emptive equality impact assessment at the design stage: this involves identifying any negative impact the proposed AI may have on certain groups and taking action to redress it, or alternatively explaining why (having carried out a balancing exercise weighing the impact on affected individuals against the business needs) they have chosen to go ahead.
  • Testing: testing the product alongside implementing it (for example, testing a sample of the live data to check what the results would be if the process had been carried out manually). This could help employers defend allegations of unfairness and discrimination if it can be shown that the results would have been the same or similar even if the AI program had not been used.
  • Keep a human element: if a complaint is raised, a human will still need to be able to understand and explain what AI process was used and why. A human will also need to explain why particular employment-related decisions were made.
  • Regular auditing: regular auditing and spot checks going forward will help employers identify if machine bias is creeping in so proactive steps can be taken to address this rather than reacting to complaints.
  • Continual improvement: software developers will typically be continually improving their products.  Employers will want to ensure those improvements will be reflected in the product they’re buying.
  • Eyes open: there are murmurs that the UK will look to legislate in this area sooner rather than later, so employers will want to keep their ear to the ground to keep abreast of future changes.
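
The regular auditing point above can be made concrete with a very simple spot check: comparing selection rates between groups. Below is a minimal, illustrative sketch in Python - the 80% (“four-fifths”) threshold comes from US regulatory guidance on adverse impact, not from UK law, and is used here purely as an example trigger for closer human review:

```python
# Illustrative spot check for machine bias: compare selection rates between
# two groups. The 0.8 ("four-fifths") threshold is a rough benchmark from US
# guidance, not a UK legal test; it serves here only as an example trigger.
def selection_rate(selected: int, total: int) -> float:
    return selected / total

def flag_for_review(group_a: tuple[int, int], group_b: tuple[int, int],
                    threshold: float = 0.8) -> bool:
    """Return True if the lower group's selection rate falls below
    `threshold` times the higher group's rate."""
    rate_a = selection_rate(*group_a)
    rate_b = selection_rate(*group_b)
    lower, higher = sorted([rate_a, rate_b])
    return lower < threshold * higher

# Example: 30 of 100 candidates selected in group A, 18 of 100 in group B.
print(flag_for_review((30, 100), (18, 100)))  # 0.18 < 0.8 * 0.30 -> True
```

In practice an audit would look at far more than raw selection rates, but a lightweight check like this can flag when outcomes are drifting so that proactive steps can be taken before complaints arise.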