Fahed Bizzari argues most organisations are just drifting into AI use, creating activity without dependable performance. He outlines three common patterns: waiting, rolling tools out and mandating use, all of which fuel shadow AI, uneven quality and rework. He shows how L&D can build role-based capability, normalise checking habits and keep accountability human.
Most organisations are already living with AI at work. People use it to draft, summarise, rewrite and plan. Some outputs are good. Most are just fast. A lot is quietly risky. But most of it stays unspoken. The reason is simple: most organisations are drifting into AI. That drift tends to fall into three patterns that look different but produce the same outcome: activity without dependable performance.
Pattern 1: “We’re waiting”
The first pattern is deferral. Leaders wait for regulation, clearer ROI or a safer tool. Teams do what teams always do under pressure and use what helps. Sometimes it is approved, often it is not. People also stop talking about it because they do not want a debate every time they try to move faster.
That is how shadow AI forms: unapproved tools, workarounds and undisclosed use because the safe route is unclear or slow. Salesforce research surveying more than 14,000 workers found that more than half of generative AI adopters were using unapproved tools at work.
Deferral can feel like safety because it avoids a hard decision. In reality, it often reduces visibility just as usage spreads. If people do not feel able to say, “I used AI here”, leaders lose the chance to guide practice and catch mistakes early.
The goal is not to eliminate AI use. You cannot. The goal is to stop forcing it underground by making the official route workable, clear and predictable.
Pattern 2: “We’ve rolled it out”
The second pattern looks more mature. Licences are bought. Access is set. Guidance is published. Usage is tracked. None of that is wrong. It is just incomplete. Tools spread quickly. Good practice spreads slowly.
If teams do not share standards for what good looks like, results will vary. One team builds confidence. Another gets plausible nonsense and loses trust. Managers end up rewriting work because they cannot rely on what they receive. The organisation argues about the tool when the real issue is the way work is done and checked.
This is where measurement goes wrong. Usage numbers are tidy, but they do not tell you whether quality is improving, whether rework is falling or whether accountability is clear.
You can see the same logic in higher-stakes domains. In the UK, the Financial Reporting Council has published guidance on AI in audit that foregrounds judgement, governance and quality control rather than treating adoption as the finish line. The domain is not the point. The principle is. When outcomes matter, “we deployed the tool” is not a sufficient story.
Pattern 3: “Everyone must use it”
The third pattern is mandate-led adoption. Leaders set targets and ask for proof of use. Mandates create movement fast. They also create performative behaviour fast. When skill is uneven, people optimise for looking compliant. They use AI so they can say they did. But the habits that protect quality do not develop at the same speed.
The cost shows up later as a quiet tax: rewrites, corrections, longer review cycles and small errors that slip through. You hear, “It is quicker to get a draft, but I spend ages fixing it”.
This is often described as workslop: output that looks fluent but is not reliable enough to use without repair. Zapier’s research on AI workslop reports that many employees spend hours each week revising AI outputs, and links weak training to worse outcomes.
What a human-capable approach looks like
If those are the traps, the alternative is simpler than it sounds. A human-capable approach treats AI as a professional capability: something people need to be able to do well in their real roles, on real work, under pressure. That capability is then embedded into how work is run, so it holds on busy days.
In practice, this usually means people learn role-relevant ways to use AI, teams agree what good looks like for common tasks, checking becomes normal where stakes are higher, and accountability stays human.
This aligns with what senior leaders report when asked what blocks progress. IBM’s CEO study has pointed to workforce and culture challenges, alongside governance, as leaders race to scale generative AI. In other words, most organisations do not stall because they failed to buy the right tool. They stall because they have not changed how work is done, checked and reinforced.
Where L&D fits
A fair challenge from L&D is: isn’t this an IT or risk issue? Yes and no. L&D should not own tool selection or sign off on risk. But L&D does own capability. And capability is what determines whether rules hold in real work.
That is where L&D has leverage. Not through generic AI training, but by making good practice executable inside workflows. Practise on real tasks. Show examples of good work. Coach managers to reinforce checking habits. Create feedback loops so good practice spreads and weak practice gets corrected early.
Boston Consulting Group has argued that companies need to go beyond AI adoption to realise full value, with emphasis on workflow redesign and people transformation, including training and change management.
Here is a simple test to keep this honest. If you cannot point to one routine piece of work where AI use is taught, coached, checked and reinforced differently from before, then you have not built AI capability yet, no matter how many tools you rolled out or courses you ran.
Fahed Bizzari is Managing Partner of Bellamy Alden AI Consulting

