Rushing AI into workflows can produce polished ‘workslop’ that masks shallow thinking, wastes time and erodes trust. Jenna Tiffany sets out a human-centred antidote: start with purpose, define boundaries, train people and tools, make human review non-negotiable, and reward outcomes over output so organisations keep judgment, culture and quality intact.
In today’s workplace, many organisations are effectively handing their keys to a stranger by deploying artificial intelligence (AI) tools without a clear strategy. In doing so, they may believe they’re accelerating work, but in fact they’re enabling a new problem: what we might call “workslop”. That’s the polished‑looking output that gives everyone a short-lived feeling of progress, yet doesn’t meaningfully advance the task.
A recent study published by the Harvard Business Review (HBR) in collaboration with the Stanford Social Media Lab and BetterUp Labs shows that more than 40% of full-time US employees reported receiving AI‑generated work that “masquerades as good work but lacks the substance to advance a given task meaningfully.”
This is far more than a productivity issue: it’s a wake‑up call for how we lead, manage and structure work in the age of AI. Let’s explore how this phenomenon arises, what it means for workplaces, and how a human‑centred strategy can help organisations avoid the rise of workslop.
What is “workslop” and why does it matter?
The term workslop describes output created with generative AI tools that appears credible and well‑formatted but is shallow, incomplete, or lacks the context required to deliver real impact. According to the HBR article: “AI‑generated work content that masquerades as good work, but lacks the substance to advance a given task meaningfully.”
In practice, this might look like: a slick PowerPoint deck generated by an AI tool that lacks strategic insight; a one‑page summary that misses key risks; an email that is grammatically correct but tone‑deaf or misaligned with brand values; or even a piece of code that runs but doesn’t address the real business need.
Why does it matter? Because rather than saving time, workslop shifts the burden: it forces downstream colleagues or stakeholders to interpret, correct, or redo the work. In the HBR‑Stanford study, recipients of workslop estimated that they spent, on average, 1 hour 56 minutes correcting each incident.
The hidden cost isn’t just time: it’s frustration, loss of trust, and erosion of work culture. From a people perspective, this is critical: the promise of AI is often quicker, leaner, and more innovative work. But when it slips into being ‘good enough but misaligned’, it undermines quality, places extra burden on people and erodes the value of expertise and skill.
In short, AI doesn’t inherently make you stronger; without strategy, it magnifies your gaps and makes them visible.
What is driving the rise of workslop?
The problem does not lie with AI; the technology is powerful, but the issue lies in how organisations adopt and deploy it. Key drivers of workslop include:
- Hasty adoption without guardrails
Many teams rush to experiment with generative tools (large language models, content generators, automation platforms) without clarity on purpose, quality standards, oversight, or training. Without these, the tools run on “auto‑pilot”, generating output with no alignment to organisational goals.
- Over‑reliance on tools to replace thinking
When generative AI is treated as the decision‑maker rather than the assistant, you risk letting AI steer the work instead of human strategy. In one public example, a major professional services firm produced a government report containing AI‑generated hallucinations and inaccuracies, and ended up refunding part of its fee. The cautionary tale: letting AI run without the necessary oversight is costly.
- Lack of human review and accountability
Shockingly, one estimate suggests that 66% of AI‑generated content is not reviewed before use. When human review is absent, content is polished in appearance but weak in substance.
- Culture rewards speed over substance
Workslop flourishes in environments where output is valued more than outcome. The speed enabled by AI becomes the trap: teams produce more, but the work is shallow.
- Tool misuse across tasks where human judgment matters
Just because something can be automated doesn’t mean it should. Tasks involving empathy, brand‑sensitive messaging, crisis communication, cultural nuance or strategic decisions are poorly suited to being handed over to AI alone.
For organisations that believe the narrative of AI delivering productivity gains instantly, the workslop phenomenon is a sharp reminder: the tools alone don’t deliver results, the way they’re used does.
Why a human‑centred strategy is the antidote
If workslop is the symptom, then the remedy lies in a human‑centred strategy. By this, I mean a strategy that puts people, their judgment, values, context and oversight at the heart of AI deployment, rather than letting AI drive the agenda.
Here are the core elements of a human‑centred strategy:
1. Start with strategy, always
AI is a tool; strategy is the map. Without strategic direction, even the best prompt is just noise. Before deploying AI, organisations need to ask:
- What is the business objective?
- What insight or outcome are we trying to unlock?
- How will the output help us move forward?
If you cannot answer these questions, then handing over work to AI is like giving a stranger the car keys without a destination in mind. The human‑centred approach says: we define the destination, the guardrails, and the alignment, and then use AI to help us get there faster.
2. Define AI’s role and set boundaries
Not everything should or can be automated. Remember that AI is best used where it augments human effort (summarising data, generating ideas, drafting outlines) rather than substituting for critical judgment (final framing, voice, tone, decisions).
Organisations should decide in advance: What’s off‑limits to AI? Examples might include:
- Empathy‑driven communication (e.g., career and HR conversations)
- Crisis management
- Brand‑sensitive messaging
- Cultural nuance or high‑stakes decisions
Training, supervision and limits are essential. Without them, AI’s power becomes a risk.
3. Human review must be non‑negotiable
One of the most dangerous assumptions is that AI output equals finished work. In practice, many organisations find that AI outputs still require human‑in‑the‑loop review.
Human review means:
- Someone owns the output and the business result
- You check whether the output meets the strategic intent, aligns with tone, voice, brand and values
- You audit logic, factual accuracy, biases and deliverable‑fit
- You allow permission and time for review, rather than skipping it in the name of speed
For HR/L&D teams, this means integrating the review process into workflows rather than treating it as optional.
4. Train your people and your AI
Effective AI use is a learned skill. It isn’t plug‑and‑play. Teams need training on prompt design, context‑setting, fact‑checking, and ethical risks. Equally, your AI systems must evolve in the context of your organisation: you refine them, embed your business context, values, and brand voice.
Simply giving everyone access to a generative model and telling them “go” is a recipe for workslop. The human‑centred approach invests in the capability of the people and the tool together.
5. Create a culture that prioritises substance over speed
Actual value comes from clarity, depth and relevance, not just volume of output. Workslop thrives when teams believe quantity equals productivity. To counter this:
- Measure progress by outcomes (what changed?), not just output volume (how many slides?).
- Reward clarity, strategic insight and substance over “look‑like‑we‑did‑something” output.
- Encourage teams to ask: “Is this aligned with what we set out to achieve?” rather than “We got this done quickly.”
For L&D professionals, this might mean shifting KPIs away from how many courses were created to how many participants applied the learning or how many teams improved their decision‑making.
Practical steps for people leaders
Here’s how you might translate these strategic principles into action in your organisation:
- Conduct an audit of current AI use: Where are generative tools being used? For what tasks? What review process exists?
- Map the AI journey: Define which tasks are suitable for AI‑assisted work, and which remain human‑only (or human‑led)
- Develop a clear internal policy for AI use: Set out how AI should be used in content, communications, data analysis, learning design, and HR operations
- Launch training sessions: Cover prompt design, evaluation of AI output, bias awareness, brand alignment, and review workflows
- Embed review checkpoints: For example, any AI‑generated content must pass through a human reviewer who checks alignment, accuracy and impact
- Shift measurement: Introduce metrics such as the % of AI‑generated content that passes review first time, time saved vs time spent correcting, learner/employee satisfaction with output
- Foster reflective culture: Encourage teams to share what worked, what didn’t in AI‑assisted work, to build collective learning and reduce the risk of hidden costs
Why this matters now, and why it becomes a competitive advantage
As AI becomes more ingrained, the risk isn’t the replacement of humans; it’s the dilution of human judgment, insight and strategic thinking. Organisations that treat AI as a magic bullet will likely experience the illusion of progress, not real progress.
By contrast, a human‑centred strategy enables you to protect the integrity of your work, safeguard brand and culture, and ensure teams stay focused on outcomes rather than just output. In other words: strategy, clarity and context remain human‑led; the organisation holds the keys to the car.
In many sectors, the difference between doing AI‑right and AI‑poorly will separate the high‑performers from the rest. Choosing speed over substance is an easy trap; resisting it is the route to sustainable value.
The right question
The question organisations should be asking is not simply “How do we use AI?” but rather: “How do we use AI without sacrificing our values, quality or direction?”
The answer lies in:
- Starting with strategy
- Defining boundaries
- Training teams and tools
- Incorporating human oversight
- Staying focused on outcomes rather than volume
AI will continue to reshape how work is done. But how it does so is up to us. With the right human‑centred strategy, we can ensure AI becomes a capable assistant, not the driver of the vehicle. Let’s not hand our keys to a stranger. Let’s stay firmly in the driver’s seat.
Jenna Tiffany is Founder and Strategy Director of Let’sTalk Strategy

