Is AI the enemy of diversity?

This article was published in People Management

Each week seems to bring a new horror story about biased algorithms at work. But the evidence suggests automation isn’t always a bad idea.

Faced with 5,500 job applications a week during its peak delivery season, parcel carrier Yodel needed to create a process that handled them quickly and consistently to avoid a festive recruitment meltdown. “Hiring at that sort of volume means it’s impossible to run a smooth process delivered solely by humans,” says head of resourcing Ben Gledhill. 

Working with a supplier, the business developed a bot that helps candidates understand which role is right for them and matches them to an opportunity based on location. Another bot looks at the original application and asks additional questions. An impressive 92 per cent of candidates now complete the process, compared to the 58 per cent that used to get through manual telephone screening, and 97 per cent say they are happy with it. 

 “We still believe that human involvement is essential to the recruitment process – for instance, in driver assessments – but that initial filter through the bots now ensures we’re speaking to the right people about the right roles,” adds Gledhill.  

With bots such as this offering to lower costs, speed up hiring and even smooth the candidate journey, it’s no surprise artificial intelligence (AI) investments in recruitment are big business. A well-constructed algorithm can relieve recruiters of many time-consuming and repetitive tasks: from screening CVs to answering frequently asked questions and even covering initial selection stages. “AI can now hold the conversation a recruiter might have. A good chatbot is not just answering FAQs, it sees something in your résumé and finds a role that relates,” says Ben Eubanks, author of Artificial Intelligence for HR.

Often the companies selling these tools make even greater promises: that they’ll reduce or even eradicate the role of unconscious bias or human ‘gut feel’. After all, a machine can’t make judgements about a name or choice of university or make a guess at an individual’s age. “As humans, we’re not good at judging people without bias – that’s human nature – so assessments and other tools, or blind CVs, can take out that bias,” adds Eubanks.  

Yodel, like many others, is satisfied it runs a bias-free business. But the received wisdom on AI’s role in encouraging diversity is beginning to change – and as the application of algorithms spreads from hiring into performance management, reward and beyond, that makes understanding the real, unintended side effects of the technology crucially important. 

AI sceptics frequently roll out a 2018 story about Amazon, which scrapped an AI-driven hiring tool because it was making choices in favour of male applicants for software developer positions. Because the algorithm was based on data from past success stories – the majority of whom were white and male – it gave lower scores to candidates with female attributes, such as those who had attended an all-women’s college. 

And earlier this year, a report by New York University research group AI Now raised further questions about fairness, pointing to the lack of diversity among the teams building the systems – 80 per cent of university professors who specialise in AI are male, for example. 

Kim Nilsson, founder of Pivigo, which matches data scientists to businesses, believes there are two major factors at play here: “First, we have a saying in data science that dirty data in [means] dirty results,” she says. “The second is diversity in the industry itself. If the data science team is made up of 30-something white males, they may not be introducing bias but they could be missing it. Wherever there is an element of group-think, there’s a risk.” 

Sujay Rao, vice president and general manager at recruitment and consulting company Korn Ferry, sums up the issue: “Amid all the benefits these tools have provided, recruiters ran into a major problem,” he says. “In many cases, AI developed a bias against members of underserved groups, leaving many well-qualified candidates in the dust.” Rao argues that because many women, LGBTQ employees, older workers or those from BAME backgrounds have traditionally not filled certain roles, when AI matches past CVs or job profiles to new ones, it can mistakenly teach itself to look at a narrower group of candidates. 

He adds: “As it turned out, for many roles AI was favouring CVs that contained masculine language – such as ‘executed’, ‘takes charge’, and ‘competitive’ – over feminine language such as ‘collaborative’ or ‘supportive’.” Others have pointed out that tools that detect speech patterns or ‘micro-expressions’ in video interviews could have an adverse impact on certain groups if they start showing a preference for certain vocal patterns or facial characteristics. 
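To see how this sort of bias creeps in, consider a minimal sketch in Python using scikit-learn. The CVs, labels and word lists below are invented for illustration only; the point is that if past hires skewed towards CVs written in ‘masculine’ language, a model trained on those decisions will learn to reward that wording in future candidates.

```python
# Minimal, illustrative sketch -- NOT a real screening tool.
# The toy CVs and hire/no-hire labels are invented; they mimic a history
# in which CVs using "masculine" wording were hired more often.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

past_cvs = [
    "executed sales strategy, takes charge of competitive targets",    # hired
    "took charge and executed a competitive turnaround plan",          # hired
    "collaborative team player, supportive mentor to colleagues",      # rejected
    "built a supportive, collaborative culture across departments",    # rejected
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(past_cvs)
model = LogisticRegression().fit(X, hired)

# Inspect which words the model has learned to reward or penalise.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for word in ["executed", "competitive", "collaborative", "supportive"]:
    print(f"{word:>15}: {weights[word]:+.2f}")

# Words associated with past (male-dominated) hires get positive weights,
# so a new CV written in "feminine" language scores lower: the bias in
# the historical data is reproduced rather than removed.
```

The model is not told anything about gender; it simply learns whichever words separated past hires from past rejections, which is exactly how proxies for protected characteristics slip in.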

For all the benefits it can bring in terms of speed and efficiency, has AI become the enemy of diversity? Megan Marie Butler, an analyst at AI research company CognitionX, believes this over-simplifies the issue. “Bias in hiring is just one small subset of what AI can do in recruitment. A bot to answer recruitment questions is great for engagement, for example, and would never introduce bias,” she says. 

Problems such as those experienced by Amazon will always arise where organisations use narrow data based on past performance, she adds: “The algorithm itself is not inherently biased, it’s about how you use it strategically. It’s not making a decision for you, it’s just throwing up suggestions.” And AI is invaluable in automating manual tasks where a human recruiter might introduce error – a hiring manager may, for example, assess a CV differently at 3pm than they would first thing in the morning. 

But at the latter stages of the hiring process, its role is to support decisions rather than make them. Nor should it be forgotten that the creators of these algorithms are themselves human, bringing their own preconceptions and priorities into the process. 

For organisations looking to invest in AI, what does it all mean? “You can’t just throw your hands up in the air and give up,” says Nilsson. “As an employer, you need to ensure your teams are diverse, so make sure you have external individuals looking at the data who will challenge if they see inherent bias. It’s crucial to have checks and balances in place.” 

Nilsson suggests we will see the emergence of ethics boards, where if an individual feels they have been treated unfairly due to an algorithm-driven decision, they can lodge a complaint. “Or we could conduct regular, independent audits of algorithms in the same way companies are audited by accountants – they could be given a stamp of approval,” she adds. Video recruiter HireVue, for example, set up an advisory board made up of AI, privacy and ethics experts to ensure its developers were committed to promoting diversity and fairness. It now says it can strip out data points from video interviews that could introduce bias.

For those using the tools, it’s important to challenge suppliers on how algorithms are built and to review them regularly, says Eubanks. “When I advise HR professionals on buying tech, I tell them to ask about the types of signals the algorithm will be considering when it makes recommendations. If it’s performance data of existing employees, that will likely be biased. They need to think about objective measures such as assessment data.” 

And with hundreds of start-ups in this space, conducting due diligence on the teams behind the system is crucial, he adds. “Some of the companies in this space have no experience in talent acquisition – so they might be basing their algorithms on limited data such as choice of college or experience. These should be red flags for HR. Don’t take their word for it, ask the tough questions.” Once the system is in place it should produce consistent results, but that shouldn’t be taken as a given: organisations may need to monitor it continually for signs of bias. 
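One way to put that monitoring on a routine footing is a simple adverse-impact check comparing selection rates across candidate groups, along the lines of the ‘four-fifths rule’ used in US hiring guidance. The sketch below is hypothetical; the group names and figures are invented.

```python
# Hypothetical sketch of a periodic adverse-impact check on an
# algorithm's shortlisting decisions. All figures are invented.
def selection_rate(shortlisted: int, applicants: int) -> float:
    return shortlisted / applicants

# Counts of applicants and algorithm-shortlisted candidates per group.
outcomes = {
    "group_a": {"applicants": 400, "shortlisted": 120},
    "group_b": {"applicants": 250, "shortlisted": 45},
}

rates = {g: selection_rate(o["shortlisted"], o["applicants"])
         for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths rule threshold
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A check like this won’t explain why a disparity exists, but run regularly it flags when the algorithm’s output drifts far enough from parity to warrant a human investigation.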

Rob McCargow, director of artificial intelligence at PwC, argues that bias is not the only issue HR needs to be alive to. “Bias gets a lot of attention, but there are other risks. One of the issues is around the explainability of the technology – so at the more powerful ‘deep learning’ end, the tech is often so complex that even the people who built it can’t explain it,” he says. 

“If it comes up with a decision that’s legally challenged, there is no way of opening up the ‘black box’, and that’s problematic. It’s hard to see where the fault lies.” This is why there should be discussions around the point at which human decisions take over. McCargow adds: “This will depend on the criticality of the use case. So outside of hiring, you might compare the consequences of an algorithm choosing you a movie you don’t like versus a cancer diagnosis. With the latter, you have to be sure of how that conclusion was reached, while you won’t lose much sleep over the former. With automation happening in a range of areas, it’s about considering a spectrum of risk – that could be reputational damage, getting the hire wrong. What level of risk are you comfortable with?” 

While many recruitment processes lend themselves naturally to automation, other aspects of HR are increasingly opening up to AI. In learning and development, machine learning can gather data on how employees access training content and make suggestions, Netflix-style, for how they could build on this. 

In reward, there is potential for algorithms to mine thousands of performance management data points to offer on-the-spot bonuses, or model wage levels in real time, meaning reward professionals can make dynamic decisions around how people are compensated. But the same levels of caution should be exercised when it comes to avoiding bias. 

“You could have data showing people with high performance ratings but that doesn’t show why they perform better,” says Professor Binna Kandola, senior partner at Pearn Kandola and author of Racism at Work. “It also doesn’t show how someone’s performance might have been limited when their ideas were ignored. How should you respond if an algorithm decides you can’t be promoted?” 

Kandola argues that organisations should approach the topic with an idea of where they see themselves: the ambition of the organisation they want to be. “If you’re serious about promoting diversity, this vision needs to be fed into the data analysis. How does getting to a certain point help you achieve that, and how can this be built into the algorithm?” he asks. 

Already, some businesses are using the predictive modelling capabilities of AI to think about workforce planning. PwC’s resource management team uses a work allocation algorithm, for example, that simulates how hundreds of consultants might be deployed on a project and how existing work can be reallocated. It has reduced travel time, improved work-life balance and is less biased in how work is allocated than the human approach, according to McCargow.

Once organisations recognise that the influence of AI is limited by the data we humans feed into it – and address that – it can absolutely help HR make better decisions, he adds, and it can open doors for those who might never have made it through a traditional interview process. Getting to that point of machine-led objectivity, however, will still require us to negotiate a very flawed and human process first.

 
