AI Facial Recognition Is Being Used in Job Interviews, and It Is Likely to Increase Inequality

Artificial intelligence and facial recognition technologies are becoming popular in job interviews. The technology, developed by the U.S. company HireVue, analyses the language and tone of a candidate's voice and records their facial expressions on video as they answer a standard set of questions.

The system was first used in the UK in September, though it has been in use elsewhere in the world for many years. About 700 firms, including Vodafone, Hilton, and Urban Outfitters, have used it.

There are real benefits to be gained from this. HireVue claims that, because candidates' responses can be processed so quickly, the technology speeds up the recruitment process by 90 percent. But there are significant risks we should be wary of before outsourcing job interviews to AI.

The AI relies on algorithms that evaluate applicants against a database of some 25,000 facial and linguistic data points. These are gathered from previous interviews with "successful hires" – candidates who went on to perform well in the job. The 350 linguistic features include criteria such as the tone of a candidate's voice, their use of passive or active words, sentence length, and speaking speed. The thousands of facial features analysed include brow furrowing, brow raising, eye widening or closing, lip tightening, chin raising, and smiling.
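
To make the mechanism concrete, here is a minimal, hypothetical sketch of how a scoring model of this kind might combine extracted features into a single number. The feature names and weights below are invented purely for illustration; HireVue's actual model, features, and weights are proprietary and not public.

```python
# Toy linear scoring model: a weighted sum of extracted interview features.
# All feature names and weights are hypothetical.

def score_candidate(features, weights, bias=0.0):
    """Return a weighted sum of extracted interview features."""
    return bias + sum(weights.get(name, 0.0) * value
                      for name, value in features.items())

# Hypothetical features extracted from one candidate's video interview.
candidate = {
    "speech_rate_wpm": 150,      # words per minute
    "active_voice_ratio": 0.8,   # share of sentences in the active voice
    "smile_frequency": 0.3,      # smiles per minute, normalized
}

# Weights such a model might have learned from past "successful hires".
weights = {
    "speech_rate_wpm": 0.002,
    "active_voice_ratio": 1.5,
    "smile_frequency": 2.0,
}

print(round(score_candidate(candidate, weights), 3))  # → 2.1
```

The key point is that every input – speech rate, word choice, smiling – is reduced to a number and weighted by patterns found in historical data, which is exactly where the biases discussed below enter.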

As critics of AI frequently point out, the central problem is that this technology is not born into a perfect society. It is created within our current culture, with its wide variety of biases, prejudices, disparities, and discrimination. The data from which the algorithms "learn" to judge candidates embody these existing beliefs.

In practice, algorithms trained on such data "learn" that professors and managers are predominantly white men, while those who do housekeeping are women. By generating results from this data, the algorithms inevitably aggregate, perpetuate, and potentially even amplify existing prejudices and biases. It is for this very reason that we should be sceptical of AI's supposed intelligence. The solutions it offers are inherently conservative, leaving little room for creativity and social change.

‘Symbolic Capital’

As the French sociologist Pierre Bourdieu emphasized in his work on how inequalities are reproduced, we all have very different amounts of economic and cultural capital. The environment in which we grow up, the nature of our schooling, the presence or absence of extra-curricular activities, and various other factors have a decisive effect on our abilities and strengths. They also have a considerable impact on how we view ourselves – our degree of self-confidence, the expectations we set for ourselves, and our life chances.

Another prominent sociologist, Erving Goffman, called this a "sense of one's place." This embedded sense of how we should behave leads people with less cultural capital (usually from less affluent backgrounds) to remain in their "ordinary" position. It is also expressed in our body language and the way we speak. So there are those who, from an early age, have great faith in their abilities and knowledge, while others, who have not been exposed to the same teaching and cultural practices, may be more timid and reserved. They may even develop an inferiority complex.

All of this comes into play in job interviews. Ease, confidence, self-assurance, and linguistic skill make up what Bourdieu called "symbolic capital." Those who possess it will perform better – whether or not their abilities are actually the strongest, or whether others might bring something different to the job.

Of course, this has always been the case in society. But artificial intelligence can only reinforce it – particularly when the AI is trained on data from candidates who were successful in the past. This means that companies are likely to hire the same types of people they have always employed.
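
This feedback loop can be illustrated with a toy example. Suppose past "successful hires" are heavily skewed toward one background, and a naive model scores new candidates simply by how often their background appears in that history – all names and figures here are invented for illustration:

```python
# Toy illustration of the feedback loop: a model that learns only from
# past successful hires reproduces their profile. Data is hypothetical.
from collections import Counter

# Historical "successful hires" — skewed toward one background.
past_hires = ["elite_school"] * 9 + ["state_school"] * 1

def predict_success(background, history):
    """Score a candidate by how often their background appears in past hires."""
    counts = Counter(history)
    return counts[background] / len(history)

print(predict_success("elite_school", past_hires))  # 0.9
print(predict_success("state_school", past_hires))  # 0.1
```

Nothing in the model measures actual ability: the historical skew alone guarantees that candidates resembling past hires are rated nine times higher, and each new hire made this way deepens the skew in the next round of training data.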

The big risk here is that these people all come from a similar set of backgrounds. Algorithms leave little room for subjective judgement, risk-taking, or acting on the feeling that a person deserves a chance.

Moreover, this technology can lead to the rejection of talented and creative people simply because they do not match the profile of those who smile at the right moment or have the appropriate tone of voice. This could be bad for companies in the long run, as they risk losing talent that presents itself in unconventional ways.

More worrying still, this technology could unintentionally exclude people from diverse backgrounds and give more chances to those from privileged ones. As a rule, the privileged possess greater economic and social capital, enabling them to acquire the skills that become symbolic capital in an interview setting.

What we see here is another manifestation of a more general problem with AI: technology developed using data from our current society, with its various disparities and prejudices, is likely to reproduce them in the solutions and decisions it proposes.
