Charles Hipps navigates the tricky world of diversity.
One of the most frequent questions I am asked when discussing the appeal of bringing artificial intelligence into recruitment to help boost diversity is: what happens when machines discriminate?
It is an interesting debate, and I imagine many people in training will be wondering the same thing. The reality is that, as with any traditional talent acquisition methodology, balancing the risks and opportunities of artificial intelligence is imperative.
There’s little doubt about the benefit of hiring a diverse workforce. McKinsey found in one UK study that greater gender diversity on the senior-executive team could positively affect performance. For every 10% diversity increase, they saw profits rise by up to 3.5%.
Organisations know it’s needed, but often lack the means to ensure diversity hiring is actually happening within their own talent acquisition programmes. Big Data adds that crucial piece and has huge potential to reshape the role of the modern HR professional.
In recruiting, Big Data reveals which applicants are a better fit for positions within the company, correlating skills and work values to numbers and percentages. It’s not about the name on the CV or cultural background of an interviewee.
Instead, companies can focus on the candidates with the right expertise, experience and potential to be productive within their already established teams, provided the humans in the equation eschew their own biases.
To take just one example, in an increasingly competitive job market, an organisation may receive applications from hundreds of highly qualified, hopeful graduates for just a few vacancies. Often, it will take a disproportionate amount of human effort to sift through them.
Crucial experience, context or personal attributes may be lost in the morass of information. Application forms may be divided amongst several people who each take a slightly different approach. Some may not be given due attention, simply because they are considered at the end of a long day. As a result, gifted candidates may be overlooked due to human fallibility or unintentional bias.
To meet these challenges, AI systems might be put to effective use in conducting an initial review of applications, to produce a shortlist of candidates for interview.
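To make the idea concrete, here is a deliberately minimal sketch of such a first-pass sift. The scoring is a simple keyword-overlap heuristic invented for illustration; production systems use far richer models, and the field name `cv_text` is an assumption, not a standard schema.

```python
# Minimal sketch of an automated first-pass sift (illustrative only).
# Assumes each application is a dict with a "cv_text" field.

def score_applicant(application, required_skills):
    """Fraction of the required skills mentioned in the application text."""
    text = application["cv_text"].lower()
    matched = [skill for skill in required_skills if skill.lower() in text]
    return len(matched) / len(required_skills)

def shortlist(applications, required_skills, top_n=10):
    """Rank applications by skill match and return the strongest few."""
    ranked = sorted(
        applications,
        key=lambda app: score_applicant(app, required_skills),
        reverse=True,
    )
    return ranked[:top_n]
```

The point of the sketch is not the scoring itself but the shape of the process: every application is assessed against the same criteria, at the same level of attention, whether it arrives first or last in the pile.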
Similar automated decision-making processes could be deployed effectively in other scenarios – for example, in making bonus assessments or recommending promotions. This would, in turn, free up both HR and management to focus on other things.
However, in recognising the opportunities that AI brings, we must also be mindful of the possible pitfalls. In particular, workers and job candidates are protected from discrimination related to certain protected characteristics (such as age, disability, sex, race, sexual orientation and religion or belief).
When asking machines to make decisions for us, there remains a risk that they will throw up potential discrimination issues. Used well, blind recruitment can help: protected data is never supplied to the machine as a criterion for its decisions, and the outcomes can be audited to demonstrate that differences are treated as a strength.
But it is crucial to keep in mind that while blind recruitment limits the impact of unconscious bias on sifting, by removing information that has nothing to do with past success or experience, such as a candidate's name, nationality or area of residence, it cannot change a culture on its own. A degree of personal responsibility is still required.
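In practice, the blind-recruitment step described above amounts to stripping identifying fields before anything, human or machine, scores the application. A minimal sketch, assuming hypothetical field names rather than any standard schema:

```python
# Illustrative blind-recruitment pre-processing step: remove fields that
# have nothing to do with past success or experience before scoring.
# The field names here are assumptions chosen for the example.
PROTECTED_FIELDS = {"name", "nationality", "address", "date_of_birth", "gender"}

def anonymise(application):
    """Return a copy of the application with protected fields removed."""
    return {
        field: value
        for field, value in application.items()
        if field not in PROTECTED_FIELDS
    }
```

Keeping this as a separate, auditable step makes it easy to demonstrate later exactly which information the decision process did and did not see.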
Yes, blind recruiting does allow businesses to visibly demonstrate a commitment to a fair and inclusive environment, one in which the unique insights, perspectives and backgrounds of individuals are valued. But beyond that, the company must live up to the commitment in practice.
The recognition of such a need is clearly out there already. In its most recent Global CEO Survey, 77% of CEOs told PwC they already have a diversity and inclusion strategy or plan to adopt one within the next 12 months.
And the talent they want to recruit supports this view: other PwC research shows that 86% of female and 74% of male millennials consider employers’ policies on diversity, equality and inclusion when deciding which company to work for.
In practice, very few employers are likely to use AI to make decisions that they know will result in less favourable treatment because of a protected characteristic (known as ‘direct discrimination’). However, what about unforeseen discriminatory outcomes arising from the use of AI?
For example, a machine may make automated decisions (or influence humans in making non-automated decisions) across a large population with a roughly equal gender split, but which inadvertently place women at a particular disadvantage. Unless the approach can be objectively justified as a proportionate means of achieving a legitimate aim, it will constitute unlawful ‘indirect discrimination’.
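One simple way to monitor for exactly this kind of unintended disadvantage is to compare selection rates between groups. The threshold below is borrowed from the "four-fifths rule" used in US employment practice; it is a screening heuristic for flagging results that merit review, not a legal test under UK law.

```python
# Sketch of an adverse-impact monitor using the four-fifths heuristic:
# flag for review when one group's selection rate falls below 80% of
# the other group's rate.

def selection_rate(selected, applied):
    """Proportion of applicants from a group who were selected."""
    return selected / applied if applied else 0.0

def adverse_impact_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one."""
    low, high = sorted([rate_a, rate_b])
    return low / high if high else 1.0

def flag_for_review(selected_women, applied_women, selected_men, applied_men):
    """True when the gap in selection rates breaches the 0.8 threshold."""
    ratio = adverse_impact_ratio(
        selection_rate(selected_women, applied_women),
        selection_rate(selected_men, applied_men),
    )
    return ratio < 0.8
```

A check like this, run routinely over an AI-assisted process, turns "unforeseen discriminatory outcomes" into something that is noticed early rather than discovered in a tribunal.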
Similarly, where employers have a duty to make reasonable adjustments to level the playing field for disabled workers, this would need to be factored into any machine learning processes. There are also questions over who might be liable for any discriminatory conduct or reputational damage.
Therefore, it makes sense to adopt a collaborative approach: one aimed at spotting issues early, agreeing who is responsible for putting them right, and refining automated processes to avoid repeat mistakes. It is also worth adopting internal guidance for employees who use (or, as the case may be, develop) AI tools, together with an external policy or agreement that sets out clearly how discrimination issues will be managed.
About the author
Charles Hipps is CEO & Founder of WCN