AI is all around us, and nowhere more so than in the HR department. AI-driven recruitment tools, for example, are becoming common. They ‘learn’ the selection criteria and CV scoring of human recruiters and then apply those same principles across a wider and deeper pool of talent than any human team could hope to cover.
These systems also aim to eliminate any unconscious bias humans might have (for example, in relation to school background, nationality or gender). However, while such tools have a place, they also create a new risk: if unconscious bias is present in the source material (i.e. the human decisions that the AI ‘learns’ from), the technology may simply apply that bias more efficiently and at greater scale than any human could. Unlike a human, the AI cannot make a conscious effort to overcome those biases. Microsoft’s ‘Tay’ chatbot illustrated the problem: after Twitter users fed it racist messages, the bot absorbed and repeated those prejudices.
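The mechanism is easy to see in miniature. The sketch below uses entirely hypothetical data and a deliberately naive ‘model’ (a per-group hire rate learned from past human decisions) to show how bias in the training data reappears in the scores:

```python
# Minimal sketch with hypothetical data: a naive model "learns" CV scoring
# from past human decisions. If those decisions favoured one school
# background, the learned scores reproduce that bias automatically.

# Hypothetical historical decisions: (school_background, hired_by_human)
history = [
    ("private", True), ("private", True), ("private", True), ("private", False),
    ("state", True), ("state", False), ("state", False), ("state", False),
]

def learn_scores(records):
    """Learn a per-group hire rate from past human decisions."""
    totals, hires = {}, {}
    for school, hired in records:
        totals[school] = totals.get(school, 0) + 1
        hires[school] = hires.get(school, 0) + int(hired)
    return {school: hires[school] / totals[school] for school in totals}

scores = learn_scores(history)
# Two otherwise identical candidates who differ only in school background
# now receive different scores -- the human bias, applied at scale.
print(scores["private"])  # 0.75
print(scores["state"])    # 0.25
```

Nothing in the code ‘intends’ to discriminate; the skew comes entirely from the decisions it was trained on, which is exactly the risk described above.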