If you’ve applied for a job recently, you’ve almost certainly encountered some kind of online screening test or tool, which means your application was reviewed by a piece of software before a human saw it. In fact, your application may never have got anywhere near a human being if you were rejected in the early rounds of your job search.
More and more companies use recruitment algorithms
While AI has been used in recruitment for around a decade, ‘hiretech’ has boomed in recent years. Demand for AI-based recruitment has risen particularly since the pandemic, thanks to its convenience and fast results at a time when HR staff may be off sick with Covid-19.
What types of algorithms are used in recruitment?
AI provided by New York-based Pymetrics is used in the initial recruitment processes of a number of global companies, such as McDonald’s, JP Morgan and accountancy firm PWC. It consists of a series of game-like tests whose answers are designed to evaluate aspects of an applicant’s personality and intelligence, such as risk tolerance and responsiveness. The AI assesses the jobseeker’s personality on the basis of their answers, matches them to the desired profile for the job or organisation, then passes or fails them. Often no human oversees the early stages of recruitment in which this type of AI is used.
Another provider of AI recruitment software, Utah-based HireVue, records videos of job applicants answering interview questions via their laptop’s webcam and microphone. The audio is then converted into text, and the AI analyses it for keywords, such as the use of “I” rather than “we” in response to teamwork-focused questions. The recruiter can let the AI reject candidates without any human checking the result, or have the candidate move on to a video interview with an actual recruiter.
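To see why this kind of keyword analysis can feel crude, here is a minimal sketch of the general idea — this is an illustration, not HireVue’s actual method, and the function name and word lists are invented for the example:

```python
import re

def pronoun_ratio(transcript: str) -> float:
    """Toy score: the share of first-person-singular pronouns ("I", "me",
    "my") among all first-person pronouns in an answer to a teamwork
    question. A high ratio might be flagged as less team-oriented."""
    words = re.findall(r"[a-z']+", transcript.lower())
    singular = sum(w in {"i", "me", "my", "mine"} for w in words)
    plural = sum(w in {"we", "us", "our", "ours"} for w in words)
    total = singular + plural
    return singular / total if total else 0.0

answer = "We split the work, and I handled the testing while we kept our deadlines."
print(pronoun_ratio(answer))  # 0.25 - one "I" against three "we"/"our"
```

A rule this blunt cannot tell a modest team player from someone deflecting credit, which is exactly why critics worry about letting it reject candidates unsupervised.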
With a recruiter apparently spending an average of just 34 seconds reading each CV, it’s easy to see the appeal of recruitment-based AI that could assess and evaluate thousands of applications in a fraction of the time. According to a 2018 report from LinkedIn, 67 per cent of hiring managers and recruiters worldwide said AI was saving them time, 43 per cent said it removed human bias, and 31 per cent said it generated the best candidate matches.
Those behind AI in recruitment say it’s fairer and more impartial than any human recruiter can possibly be. So could AI in recruitment be an example of technology being used for good, applying a consistent evaluation process across all candidates regardless of, say, background, gender, or race? Unfortunately, several high-profile examples have shown that machines used in recruitment are far from neutral.
Bias found in AI in recruitment
In 2018, Amazon was widely reported to have scrapped its own internal AI recruitment tool because it showed bias against female applicants. A year into developing the project, the programmers realised that the algorithmic system wasn’t good at identifying potential: it was merely good at identifying men.
Amazon’s AI was trained on the previous 10 years of successful CVs sent to Amazon, most of which came from men. Unsurprisingly, the algorithm trained on that data replicated the bias towards men that Amazon had shown in the past. It scored words like ‘women’s’ negatively – as in a women’s club or sport – and marked any CV that mentioned a women’s university as ‘less preferable’. Amazon did try to adjust the algorithm to make it less biased, but the company eventually withdrew its AI recruitment system altogether.
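The mechanism behind this failure is simple enough to demonstrate in a few lines. Here is a toy sketch (the dataset and scoring rule are invented for illustration, not Amazon’s system): if past hires skew male, words that merely correlate with gender end up correlating with the hiring outcome, and a naive model scores them accordingly.

```python
from collections import Counter

# Tiny, deliberately skewed "historical hiring" data: past hires are
# mostly men, so gendered words correlate with the outcome even though
# they say nothing about ability.
hired = ["captain chess club", "mens soccer team lead",
         "mens debate winner", "chess club member"]
rejected = ["womens chess club captain", "womens soccer team lead"]

def word_scores(hired, rejected):
    """Score each word by how much more often it appears in hired CVs
    than in rejected ones (with add-one smoothing). > 1 means the word
    is favoured by the 'model'; < 1 means it is penalised."""
    h = Counter(" ".join(hired).split())
    r = Counter(" ".join(rejected).split())
    vocab = set(h) | set(r)
    return {w: (h[w] + 1) / (r[w] + 1) for w in vocab}

scores = word_scores(hired, rejected)
# "womens" scores below 1 and "mens" above 1: the model has simply
# memorised the historical bias, not anything about candidate quality.
print(scores["womens"], scores["mens"])
```

Nothing in this code mentions gender as a feature; the bias arrives entirely through the training data, which is why removing the obvious words rarely fixes the underlying problem.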
In 2019, the nonprofit Electronic Privacy Information Center filed a complaint against HireVue with the Federal Trade Commission, alleging that its use of AI to assess job candidates’ video interviews constituted “unfair and deceptive trade practices.” Partly in response to the criticism, HireVue announced last year that it had stopped using candidates’ facial expressions as a factor its AI considered in video interviews. A third-party audit of HireVue’s algorithms recommended several other areas where the company could do more to eliminate potential bias, for instance by investigating how its system assesses candidates with different accents.
Human bias becomes recruitment AI’s bias
All this illustrates, of course, that AI is vulnerable to human biases baked into its design. Machine learning is a mirror that reflects the behaviours and attitudes of its ‘teachers’ – the datasets it has been trained on. If one specific section of the population serves as its model during training, the AI will learn to discriminate against, and exclude, other parts of the population – in exactly the same way that the humans doing the job before it did.
The answer, as with so much technology, lies in slowing the rush to outsource our time-consuming but important decisions to machines until we can be sure they aren’t replicating, or even rewarding, the very biases and behaviours we are trying to avoid. Given that algorithmic bias has been shown to be widespread in AI across a wide range of applications, from financial services to criminal justice, we are currently a long way off. Bad news for anyone applying for a job right now.
For more about how technology is changing the way we live, learn and love, and what we can all do about it, pick up a copy of my new book.