Stop building racist technology

Raafi-Karim Alidina takes on the biases in technology.

Recent events, particularly the growing global prominence of the Black Lives Matter movement, have led many organisations to realise they need to change.

They have recognised the need to be more actively anti-racist in the way they operate if they want to attract the best talent, get the most out of their employees, and make the best possible decisions. This desire has led to many organisational leaders being tasked with adapting their processes to mitigate implicit and explicit biases.

Many have concluded that the best way forward is to automate as much of their work as possible. The premise is simple: since humans are prone to biased decision-making, remove humans from the equation and leave as many decisions as possible up to machines.

The problem with this theory, though, is that humans still have to create the machines or algorithms, and we are realising that this is leading to biased technology.

Coded racism

If you’re a person of colour, you are probably familiar with the fact that most automatic taps, hand dryers, and soap dispensers in public bathrooms don’t work as well for you as they do for your white friends.

This is because the object detection systems in those products are built on a training set that just isn’t that diverse. The pictures of hands the machine uses to learn when to dispense soap or water are made up mostly of light-skinned hands.

One study found these training sets contain 3.5 times more pictures of light-skinned hands than dark-skinned ones. As a result, the machine learns that it should really only dispense soap for light-skinned people.
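
As a rough illustration of what checking for this kind of skew could look like, here is a minimal Python sketch that counts how a hand-image training set is distributed across skin-tone groups. The metadata file, column name, and ‘light’/‘dark’ labels are hypothetical placeholders for whatever annotation scheme a team actually uses.

    from collections import Counter
    import csv

    def audit_skin_tone_balance(metadata_path):
        # Hypothetical metadata file: one row per training image, with a
        # 'skin_tone' column holding a coarse grouping such as 'light' or 'dark'.
        with open(metadata_path, newline="") as f:
            counts = Counter(row["skin_tone"] for row in csv.DictReader(f))
        total = sum(counts.values())
        for group, n in counts.most_common():
            print(f"{group}: {n} images ({n / total:.1%} of the training set)")
        light, dark = counts.get("light", 0), counts.get("dark", 0)
        if dark and light / dark >= 2:
            print(f"Warning: {light / dark:.1f}x more light-skinned than dark-skinned images")

    # e.g. audit_skin_tone_balance("hand_images_metadata.csv")

A ratio check this crude won’t catch every problem, but it would surface the kind of 3.5-to-1 imbalance described above before a model is ever trained on the data.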

This is annoying in soap dispensers, but what happens when we put those same object-detection systems in self-driving cars? A study at the Georgia Institute of Technology in the US found that the object-detection systems used in many autonomous vehicles under development were significantly worse at detecting – and so stopping for – dark-skinned pedestrians.

This could lead to more Black and brown people dying. 

This is just one example, but there are dozens more, from Amazon’s biased hiring algorithm, to Google Photos’ image-labelling feature (which tagged Black people as ‘gorillas’), to voice recognition software that can’t accurately detect the voices of Black people (or of many accents).

Why does this happen?

Two main issues lie behind these mistakes in the way we develop our technology:

  1. Our teams aren’t diverse enough. When a coding team is made up of people who are all of a similar ethnic background, it’s much more likely they’ll have a blind spot when it comes to race. Non-diverse teams are less likely to notice when a training set of images or voices or names isn’t representative of users.
  2. Our teams aren’t inclusive enough. Even when the coding team is very diverse and representative, often those from marginalised or non-dominant groups don’t feel psychologically safe. They don’t feel comfortable speaking up, voicing dissent, or expressing a different point of view. 

What can we do?

Machine learning technology can be used to help mitigate our biases – but only as part of a conscious overall approach. The key is not to rely on machine learning alone. If we are careful and deliberate in the way we build our products, we can reduce the biases coded into our algorithms. Here are three things you can do:

  1. Add an inclusion ticket to your product development process, to be completed before a product goes to market. This could be a check from someone who isn’t on the development team to ensure any inclusion blind spots are covered. We often do this with legal tickets, and we can do it with inclusion as well; a sketch of what one automated element of such a check might look like follows this list.
  2. Leverage the diversity of your teams by adding ‘inclusion checks’ at different points of the development process. In this scenario, teams actively ask everyone to comment on whether they can see any potential bias in the code.
  3. Add questions at the beginning and end of a product’s lifecycle to notice and reflect on one’s own identities and biases (and those of the team as a whole). By doing this at the beginning, developers and designers can put themselves in an inclusive mindset as they create their products. At the end, this acts as a retrospective learning exercise to improve future design.
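
To make the inclusion-ticket idea concrete, here is a minimal Python sketch of one automated element such a check could include: comparing a model’s detection rate across demographic groups on a labelled test set and failing the check if the gap is too wide. The group names, data format, and 5% threshold are illustrative assumptions, not a prescription.

    def inclusion_check(results, max_gap=0.05):
        # results: a list of (group, was_detected) pairs from a labelled test set.
        # Both the grouping and the 5% threshold are illustrative assumptions.
        by_group = {}
        for group, detected in results:
            by_group.setdefault(group, []).append(detected)
        rates = {group: sum(outcomes) / len(outcomes) for group, outcomes in by_group.items()}
        for group, rate in sorted(rates.items()):
            print(f"{group}: detection rate {rate:.1%}")
        gap = max(rates.values()) - min(rates.values())
        if gap > max_gap:
            raise SystemExit(f"Inclusion check failed: {gap:.1%} gap between groups")
        print("Inclusion check passed")

    # e.g. inclusion_check([("light-skinned", True), ("light-skinned", True),
    #                       ("dark-skinned", True), ("dark-skinned", False)])

Run alongside legal sign-off, a check like this makes disparities visible before launch rather than after.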

These are just some of the techniques tech teams could use to ensure their biases don’t get coded into their products. Whatever you choose to implement, notice that none of these solutions are passive. To stop coding racist tech, we need to take active steps and build them into the way we do our work every day. If we don’t consciously include, we will unconsciously exclude.

About the author

Raafi-Karim Alidina is a consultant at Included.
