Towards building an empowering culture of Artificial Intelligence

As we move into the era of AI democratisation, it is crucial to foster a healthy human-AI coexistence, as Dr. Laura Aymerich-Franch explores

How do we promote an empowering culture of AI within the organisation? This article introduces an empowering view of AI that distinguishes fear from risk awareness and enables us to assess risks and opportunities from a balanced, non-emotionally biased standpoint.

When we have an empowered mindset, we see possibilities

Guiding employees and organisations to effectively move from a powerless to an empowered AI mindset entails not only upskilling and reskilling but also working on underlying psychological aspects that can keep us in a powerless state.

The empowered AI mindset

A powerless mindset is characterised by fear, blockages, lack of accountability, feeling like a victim of circumstances, and an inability to see possibilities. In contrast, when we have an empowered mindset, we see possibilities, feel accountable for our actions and results, and know where we are headed and how to get there.

This distinction also applies when we approach AI. An empowered AI mindset distinguishes fear from risk awareness, encourages accountability and upskilling, and enables us to assess risks and opportunities from a balanced, non-emotionally biased standpoint.

What keeps us in a powerless AI mindset?

Limiting beliefs and biases around AI and robots play a critical role in maintaining a powerless mindset. Beliefs such as “AI and robots will destroy humanity” or “AI is dangerous”, which often come from science fiction, contribute greatly to maintaining this state of fear.

For instance, imagine you read the headline “Robot kills human worker”. What mental image does your mind project? Probably a killer robot similar to the ones you have seen in the movies. These subconscious beliefs can lead to biases, such as confirmation bias: the tendency to interpret information in a way that is consistent with our pre-existing beliefs.

Additionally, we tend to anthropomorphise machines and attribute human characteristics to them. Interestingly, most people can readily answer whether, in their heads, ChatGPT is male or female! Consequently, biases typically found in human-human interaction might extend to human-machine relationships.

Let’s explore some more.

The confidence heuristic describes a bias in which we believe information is accurate simply because the person delivering it shows a high level of confidence. Some conversational AI tools, such as ChatGPT, tend to exhibit an overly confident communicative style. As a result, we might fall into the trap of overestimating their capacity to provide accurate results.

Relatedly, automation bias is the tendency to keep trusting machines even when we are nearly certain they are wrong. A classic example is blindly following the GPS when deep down we know something is off. Automation bias may also lead to oversights, such as failing to double-check the reliability of AI outputs.

When we make decisions regarding AI, we want to do so from a non-emotionally biased standpoint that allows us to assess risk and opportunity in a balanced, objective way.

Removing bias and limiting beliefs from the equation

Becoming aware that we might be holding these biases is a good remedy in itself. Next time you read about AI and robots, start by asking yourself whether you are interpreting the news from a biased or an objective standpoint.

Challenging our limiting beliefs and replacing them with more empowering ideas also helps us move to an empowered state. Some limiting beliefs around AI come in forms such as “AI is only for customer service experts”, “I am too old to learn about AI”, “AI is only for males”, or “AI is dangerous”.

Generally, the way to challenge our limiting beliefs is by asking empowering questions. For example, faced with the limiting belief “AI will destroy humanity”, ask yourself: what evidence do you have, outside science fiction, that AI plans to destroy humanity?

What about someone who is scared of being replaced by an AI?

The most likely scenario is that we will experience a massive reconfiguration of the work landscape in the coming years. Some positions will disappear and others will appear. Zooming out, observing this scenario from a bird’s-eye perspective, and interpreting it as a reconfiguration rather than a wholesale replacement of humans by AI will help decrease fear. An empowering question you could ask yourself in this case is: “What do I need to do to prepare myself to be competent in this new landscape?”

An empowering culture of AI differentiates fear from risk awareness

To foster a healthy and empowering culture of AI, it is critical to keep in mind that being afraid of AI and being aware of its risks are two different things. Being afraid of AI prevents you from seeing its opportunities. Being aware of its risks and limitations helps you choose the right opportunities.

Moving to an empowered AI mindset will ultimately strengthen our capacity to evaluate the opportunities and risks of AI from a more balanced, non-emotionally biased standpoint, and help build competitive advantage.

Dr. Laura Aymerich-Franch is an expert in emerging technologies for behavioural transformation, and founder of Akazest, an evidence-based approach to coaching and behavioural strategy.
