AI at the top: Pressure, paralysis and performative action in the C-suite

AI is a board-level priority, yet research shows many executives lack the skills to lead it safely and effectively. Wendy Lynch explores the widening AI leadership gap, the huge risks of moving too fast or too slowly, and why a new ‘AI translation’ role may just be the missing link.

Artificial intelligence has jumped from an interesting demonstration project to a core pillar of corporate strategy with mind-spinning speed. Three-quarters of corporate leaders expect the technology to transform their industry within three years, and over 70% of CEOs rank AI as a top investment priority for 2026. Everyone agrees it matters. But a growing gap exists between the perceived importance of AI and leaders’ ability to steer it.

A knowledge crisis at the top

A Gartner survey found that only 26% of executives rated their C-suite peers as confident and proficient in AI. Even more striking, only 44% of Chief Information Officers, the executives historically responsible for technology, are considered “AI-savvy” by their own CEOs.

Looking ahead, 82% of CEOs believe strong AI skills will be required for C-suite professionals, yet only 41% are confident in their skills today. This creates a dangerous gap between ambitious AI declarations and daily reality. The problem isn’t lack of interest but lack of skills and support for turning that ambition into practice.

Faster than the speed of business

We can attribute some of the AI leadership gap to the unprecedented rate of evolution in AI. Headlines in 2024 focused on misinformation and hallucination from leading Large Language Models (LLMs), leading many to dismiss them as untrustworthy and unready for prime time. Yet in the following 18 months, AI capabilities advanced at lightning speed. Models have become more accurate and have attained skills that surpass human speed and ability in mathematics, programming, information gathering, problem solving, and many other areas. Beyond generating content and imagery, agentic models can now accomplish tasks autonomously, correctly, and continuously.

From that standpoint, one can empathise with busy C-level leaders who believed AI would be important “eventually,” but not right away. Unfortunately for them, the train of AI whizzed past in 2025.

Risks in both directions

Companies beginning their AI journey now face dual risks: falling behind or rushing into production without necessary guardrails.

The gap between fast adopters and laggards is widening. Moving cautiously can leave companies stuck in “pilot purgatory,” forever testing and rarely deploying. Conversely, leaping ahead risks legal, ethical, and reputational exposure. AI systems in lending, hiring, pricing, or claims can cause real harm if not designed and governed carefully.

The paradox of board pressure

Layered on top is board pressure. Directors push management to “do something with AI” quickly, hearing about it from other boards, media, and investors. Yet these same board members often lack the knowledge to provide meaningful guidance.

A Deloitte Global survey of nearly 700 board members and executives found that 66% of boards still have “limited to no knowledge or experience” with AI. Even more concerning, 31% of organisations don’t have AI on the board agenda, and only 2% of board members are highly knowledgeable about AI.

The paradox is stark: boards without AI expertise are pressuring executives who lack AI confidence to move quickly on technology neither group fully understands. Leaders with limited AI skills are making high-stakes decisions about deployment, investment, and governance. Often, the result is performative AI (initiatives that look good in slides but don’t create lasting value). It’s a dangerous cycle where pressure substitutes for preparation.

This situation is genuinely difficult. Executives juggle macroeconomic uncertainty, supply chain issues, and talent shortages while AI evolves at breakneck pace. Expecting every executive to become deeply technical is unrealistic. But the absence of basic AI literacy creates a dangerous vacuum: leaders who don’t understand AI making high-stakes, future-defining decisions.

One solution: Building translation capacity

Closing the AI leadership gap doesn’t require every executive to develop AI expertise. Instead, they can establish a new bridge role: an “AI translator” who can help leaders turn business problems into AI-ready use cases and translate technical possibilities into clear business choices.

This person should sit close to the C-suite and be comfortable discussing P&L, risk, and strategy, while also speaking the language of AI experts, data scientists, and engineers. They help boards and leaders understand what’s realistic, prioritise where to start, and ensure governance, training, and ethics are embedded from day one. They also recognise AI initiatives for what they really are: a culture shift, not an IT deployment.

Some organisations will hire for this capability. Others may develop someone who already has leadership’s trust, then invest in building their AI literacy. The goal is giving senior leaders a trusted advisor who can translate fast-moving technology into disciplined action.

The AI leadership gap is real, but not permanent. With structured learning and the right translation roles, the C-suite can move from feeling pressured and reactive to feeling informed and in control. The companies that do this well won’t just “do AI.” They’ll quickly develop a team capable of successful deployment.


Wendy Lynch, Ph.D., is Founder of Lynch Consulting