Tackling the AI bias issue

Think AI will eliminate biases? Think again, says Tara O’Sullivan.

The UK government recently announced that the Centre for Data Ethics and Innovation (CDEI) is to investigate the artificial intelligence (AI) algorithms used in justice and financial systems, as these could have gender or racial biases.

As part of its investigation, the CDEI will also look at the potential bias in algorithms used in finance to make decisions as to whether to grant individuals loans, and those used in recruitment to screen CVs and influence the shortlisting of candidates.

It’s a stark reminder of the uncomfortable realities of a digital era in which our lives are increasingly shaped by the rapid expansion of machine learning and AI technologies, powered by opaque algorithmic tools that can exacerbate racial or gender bias.

When AI goes wrong

Machines now routinely make decisions about whether we’re invited to job interviews, whether we’re eligible for a mortgage, and whether we’re subject to surveillance by law enforcement agencies or insurance companies seeking to crack down on fraud.

Their reach even extends to deciding which adverts you see when you’re online, including whether or not you get to see that advert for a highly paid role.

Problem is, there’s a growing recognition that we’re now at risk of so-called ‘biased AI’ that can amplify human biases and create ‘feedback loops’ that skew outcomes in an unacceptable way.
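
To make the idea of a ‘feedback loop’ concrete, here is a minimal, purely hypothetical Python sketch (every name and number is invented for illustration): a system that sends resources wherever past data shows the most activity, and only records new data where it looks, will turn a tiny initial imbalance into a runaway skew.

```python
# Hypothetical sketch of a biased feedback loop (illustration only, not from
# the article). Attention goes to the group the data flags most often, and
# new observations accrue only where attention goes.
counts = {"group_a": 11, "group_b": 10}  # near-equal starting data

for _ in range(20):
    # Direct resources (patrols, audits, ad impressions...) to the group
    # with the most recorded activity so far.
    watched = max(counts, key=counts.get)
    # We only observe where we looked, so the record grows there alone.
    counts[watched] += 5

print(counts)  # {'group_a': 111, 'group_b': 10}: a 1-point gap became a chasm
```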

Big brand names have already paid the price for unconscious bias in AI. Think of Google’s speech recognition software that struggled to recognise female voices, or Microsoft’s facial recognition software that was found to be less accurate for women and people of colour.

As more and more AI-powered systems come online, debiasing these solutions is becoming a top priority, especially when you consider that, in the not-too-distant future, AI will power machines, cars, customer service interactions and even surgical procedures.

When machines can discriminate in harmful ways, it’s time for organisations to sit up, pay attention and act — and it all begins with having a broader representation in the workplace when it comes to the design, development, deployment, and governance of AI.

The AI bias problem – it’s all about people and data

Automation tools are only ever as good as the data that’s fed into them and the people who build and use them.

The problem with Big Data, the building block of AI, is that no matter how large the dataset, it will be fundamentally flawed if the data is incomplete, omits certain groups, or unintentionally reinforces stereotypes or flawed policies.
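
The article doesn’t prescribe a method, but as a rough illustration of the point, one of the simplest sanity checks on a dataset is to count how well each group is represented before any model is trained on it. The records and the 30% threshold below are invented for the example:

```python
from collections import Counter

# Hypothetical training records: (applicant_group, outcome)
records = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "denied"),
    ("group_a", "approved"), ("group_a", "approved"), ("group_b", "denied"),
]

group_counts = Counter(group for group, _ in records)
total = sum(group_counts.values())

for group, n in group_counts.items():
    share = n / total
    print(f"{group}: {n} records ({share:.0%} of the dataset)")
    if share < 0.3:  # illustrative threshold, not an industry standard
        print(f"  warning: {group} is under-represented; a model trained on "
              "this data may perform poorly for this group")
```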

These biases can manifest in very public ways. A recent example was the arrest of two African-American men, Rashon Nelson and Donte Robinson, at a Starbucks for behaviour that, had it been exhibited by other individuals, would almost certainly have had a very different outcome.

So, what can companies do to ensure that their AI solutions don’t architect a brave new world in which certain people are excluded from participation, and no one is evaluated on their individual merits?

Addressing unconscious bias in the workplace

The workers creating AI algorithms are predominantly white and male, and are prone to hard-coding their own subconscious biases about race, gender and class into the algorithms designed to mirror human decision-making.

In the near future, companies of every size plan to use deep learning (AI)-powered software to mine their data to better predict outcomes, automate claims and application processing, determine premiums and so forth. Many of these outcomes will represent legally binding decisions that risk being unfair if the algorithms powering the underlying AI technologies don’t have ‘fairness’ hardcoded in.
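
The article doesn’t say what hardcoding ‘fairness’ would look like in practice. One widely cited heuristic is the ‘four-fifths rule’ for disparate impact, drawn from US employment-selection guidelines: if any group’s favourable-outcome rate falls below 80% of the most-favoured group’s, the system deserves scrutiny. A minimal sketch, with invented decisions standing in for a model’s output:

```python
def approval_rate(decisions):
    """Share of favourable outcomes (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

# Hypothetical per-group decisions standing in for a model's output.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

rates = {group: approval_rate(d) for group, d in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best  # compare against the most-favoured group
    verdict = "ok" if ratio >= 0.8 else "potential disparate impact"
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {verdict}")
```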

Indeed, Gartner predicts that by 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them. Addressing the unconscious bias of programmers is not easy. Individuals and teams are often completely unaware of the prejudices they may hold. 

But unconscious bias training can help people recognise, observe and overcome their prejudices, raising awareness that how we treat customers, colleagues and others must remain consistent regardless of an individual’s gender, ethnicity, age, religion or sexual orientation.

However, education can only go so far when it comes to tackling the unconscious bias problem. What’s really needed is a wholesale overhaul of the organisation’s entire culture.

Workforce diversification

According to the World Economic Forum’s latest Global Gender Gap Report, only 22% of AI professionals globally are female, compared to 78% who are male. Representation among other minorities is also startlingly low.

If organisations want to get serious about eradicating unconscious AI bias, they will have to start looking to their recruitment policies to redress the balance in the makeup of their coding and leadership teams. That includes representation on the ethics committees that oversee the design of AI systems, to ensure those systems don’t contain algorithmic bias.

Key ways of addressing diversity in the workforce include introducing skills tests that can help eliminate bias in the recruitment process, and broadening the recruitment net to consider people who don’t come from a coding background per se but who have the required attributes and an interest in learning to code.

Creating a diversity task force that thinks outside the box, rather than focusing narrowly on technical certifications alone, has also proved an effective way of encouraging different types of people into the field. Similarly, engaging with school and college leavers and offering training and mentoring programmes that grow talent internally can help widen the demographics of the workforce.

Diverse teams make better decisions, not least because they are far less prone to groupthink. But changing behaviours depends on transforming the entire culture of the organisation. Doing so will help prevent unconscious bias and maximise AI’s potential to transform people’s lives for the better.

Without a doubt, failure to address how humans build AI represents a very real risk of perpetuating and exacerbating gender and racial bias. With coding becoming such a high-demand skill set, organisations need to work hard to attract the widest possible base of people to work in STEM (science, technology, engineering and mathematics) fields.

That includes considering people from different backgrounds with the widest possible exposure to life experiences and reskilling them to work alongside AI-augmentation tools and technologies.

About the author

Tara O’Sullivan is chief marketing officer of Skillsoft
