In the last few years, the growth of artificial intelligence (AI) tools has been promoted as a solution for the lack of diversity in the workforce, from using CV scrapers and chatbots to schedule potential candidates, to software for analyzing video interviews.
Those who designed the technology claim it avoids human biases against ethnicity and gender during recruitment, instead using algorithms that read speech patterns, vocabulary, and even facial micro-expressions to evaluate massive pools of job applicants for the appropriate personality type and “culture fit.”
Nevertheless, in a new study published in Philosophy & Technology, researchers from Cambridge’s Centre for Gender Studies contend that these claims make some uses of AI in hiring little better than an “automated pseudoscience” reminiscent of phrenology or physiognomy: the discredited beliefs that personality can be inferred from skull shape and facial features.
They call it a dangerous example of “technosolutionism”: turning to technology to provide quick fixes for deep-rooted discrimination problems that require investment and changes to company culture.
The researchers have partnered with a group of Cambridge computer science undergraduates to expose these new hiring methods by building an AI tool modeled on the technology, available at: https://personal-ambiguator-frontend.vercel.app/.
The “Personality Machine” shows how arbitrary variations in facial expression, lighting, clothing, and background can deliver radically different personality readings, which could mean the difference between progression and rejection for a generation of job hunters competing for graduate positions.
The Cambridge team argue that using AI to narrow candidate pools may ultimately increase uniformity rather than diversity in the workforce, as the technology is calibrated to search for the employer’s fantasy “ideal candidate.”
This could see those with the right training and background “win over the algorithms” by reproducing the behaviors the AI is trained to identify, and carrying those attitudes into the workplace, the researchers say.
Furthermore, because the algorithms are honed on past data, they argue, the candidates judged the best fit are likely to be those who most closely resemble the existing workforce.
“We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers,” said co-author Dr. Eleanor Drage.
By claiming that racism, sexism, and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world.
Dr. Eleanor Drage, Study Co-Author, University of Cambridge
The team points out that these AI recruitment tools are frequently proprietary—or “black box”—so how they function is unknown.
“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested,” said Drage. “As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be ‘de-biased’ and made fairer.”
Despite some pushback (the EU’s proposed AI Act classifies AI-driven hiring software as “high risk,” for example), the researchers say that tools made by companies such as HireVue and Retorio are deployed with little regulation, and they point to surveys indicating that the use of AI in hiring is growing.
A 2020 study of 500 organizations across several industries in five countries found that 24% of businesses had implemented AI for recruitment purposes and 56% of hiring managers planned to adopt it within the next year.
Another survey of 334 leaders in human resources, conducted in April 2020 as the pandemic took hold, found that 86% of organizations were incorporating new virtual technology into their hiring practices.
This trend was already in place as the pandemic began, and the accelerated shift to online working caused by COVID-19 is likely to see greater deployment of AI tools by HR departments in the future.
Dr. Kerry Mackereth, Study Co-Author, University of Cambridge
Dr. Kerry Mackereth co-hosts the Good Robot podcast with Drage, on which the two experts investigate the ethics of technology.
According to HR professionals interviewed by the researchers, COVID-19 is not the only factor.
Volume recruitment is increasingly untenable for human resources teams that are desperate for software to cut costs as well as the number of applicants needing personal attention.
Dr. Kerry Mackereth, Study Co-Author, University of Cambridge
Mackereth and Drage explain that several companies currently use AI to examine videos of candidates, inferring personality from regions of the face, much as lie-detection AI does, and scoring candidates for the “big five” personality traits: conscientiousness, agreeableness, extroversion, openness, and neuroticism.
The undergraduates behind the “Personality Machine,” which uses a similar method to uncover its defects, say that although their tool may not assist users in beating the algorithm, it will offer job hunters a flavor of the types of AI scrutiny they might experience—possibly even without their awareness.
All too often, the hiring process is opaque and confusing. We want to give people a visceral demonstration of the sorts of judgments that are now being made about them automatically.
Euan Ong, Student Developer, University of Cambridge
“These tools are trained to predict personality based on common patterns in images of people they’ve previously seen, and often end up finding spurious correlations between personality and apparently unrelated properties of the image, like brightness. We made a toy version of the sorts of models we believe are used in practice, in order to experiment with it ourselves,” Ong stated.
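Ong’s point about spurious correlations can be illustrated with a minimal sketch (a hypothetical toy, not the team’s actual model or any vendor’s product): if a classifier is trained on images where a personality label happens to correlate with lighting, it will latch onto brightness itself as the predictive feature, and the same face then scores differently under different lighting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "interview frames": 100 images of 8x8 pixels. By construction,
# frames labelled "conscientious" (1) were spuriously shot in brighter light.
n = 100
labels = rng.integers(0, 2, n)
images = rng.normal(0.5, 0.1, (n, 8, 8))
images[labels == 1] += 0.2          # brighter lighting, otherwise same faces

# A naive one-feature model: score candidates by mean pixel brightness.
brightness = images.mean(axis=(1, 2))
threshold = brightness.mean()
predictions = (brightness > threshold).astype(int)
accuracy = (predictions == labels).mean()
print(f"training accuracy from brightness alone: {accuracy:.0%}")

# The same "candidate" is judged differently depending on the room:
face = rng.normal(0.5, 0.1, (8, 8))
dim, bright = face - 0.1, face + 0.3
print("dim room:   ", int(dim.mean() > threshold))
print("bright room:", int(bright.mean() > threshold))
```

The model never sees anything about personality; it merely recovers the lighting artifact baked into the training data, which is exactly the kind of correlation the students report such tools finding in practice.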
Drage, E., & Mackereth, K. (2022). Does AI De-Bias Recruitment? Race, Gender, and AI's ‘Eradication of Difference’. Philosophy & Technology. https://doi.org/10.1007/s13347-022-00543-1