Apprenticeships were designed to prioritise applied capability, yet assessment frameworks can still exclude individuals who can perform the role itself. And exclusion means fewer people getting what they need. Michelle Carson examines how this misalignment is narrowing workforce pipelines and where L&D leaders have the greatest leverage to change it.
Apprenticeships were designed as an alternative to academic routes, a practical pathway into skilled work for those whose strengths are best demonstrated through doing. Yet in practice, many apprenticeship frameworks are excluding precisely the individuals they should be opening doors for. This exclusion is not incidental. It is a direct consequence of how learning, competence, and readiness are defined, delivered, and assessed.
When the apprenticeship pipeline filters out capable individuals early, the consequences extend well beyond individual outcomes
This matters because apprenticeships are now one of the few structured routes employers have for developing future capability. They are a pipeline into technical roles, into management, and into leadership. When the apprenticeship pipeline filters out capable individuals early, the consequences extend well beyond individual outcomes. They shape workforce resilience, skills shortages, and long-term organisational capacity.
The Apprenticeship Levy was introduced to help close the UK’s skills gaps and widen access to opportunity. It has done some good. But it has also locked in processes that can mistake compliance for capability, and qualification completion for readiness. The result is a system that often presents itself as opportunity, while reproducing exclusion in practice.
When I previously wrote about SEN inequity, I focused on how structural failures in education compound across a lifetime. Those failures are not the fault of educators, but the predictable outcome of rising SEND demand, constrained real-terms funding, growing EHCP volumes without matching specialist provision, and the erosion of early intervention.
Apprenticeships are often positioned as a corrective to those earlier failures. We should be honest about when they are not.
A system that confuses learning with capability
Consider an applicant for an engineering apprenticeship who shows strong practical aptitude and an exceptional ability to diagnose faults in complex systems. In real working environments, these strengths are immediately visible. Problems are identified quickly. Patterns others miss are recognised. Learning happens through immersion and problem-solving.
Yet as part of the apprenticeship, the same individual may be required to complete extended classroom-based modules on theory, delivered in fixed three-hour blocks, alongside written assignments submitted to rigid deadlines. In this setting, assessment drifts away from measuring engineering competence and towards testing endurance within a particular learning format. The issue is not understanding. It is misalignment between what the role requires and what the framework rewards.
In practice, assessment can prevent capable individuals from completing the apprenticeship standard, even when they can perform the role itself.
The double disadvantage
Policy emphasis has increasingly focused on skills-first hiring, and many employers now question whether traditional academic credentials reliably predict performance. Apprenticeships were meant to provide a credible alternative, prioritising applied competence over classroom success. Yet many standards have reintroduced academic-style requirements as stand-ins for capability: time-based learning rules, written assessment as the default, and progression models that assume one narrow learning trajectory.
For individuals who did not thrive in formal education, this creates a circular barrier. Excluded from academic pathways early on, they encounter the same constraints again inside vocational routes that were supposed to work differently.
Neurodivergent candidates are particularly exposed to this dynamic, not because they lack capability, but because their strengths are often misaligned with how learning and assessment are structured. Many bring precisely the capabilities organisations consistently say they need but struggle to find: horizon scanning, complex problem-solving, systems integration, and lateral and strategic pattern recognition across large volumes of information, alongside the ability to operate effectively in real-world, unstructured environments.
Autistic individuals provide a particularly clear illustration of this wider system failure. The Buckland Review highlights that only around three in ten working-age autistic individuals are in paid employment, despite most wanting to work. This is not primarily a question of willingness or motivation. It is a question of placement, progression, and integration. Qualification frameworks play a material role in determining who gets through those gateways and who does not, even within routes explicitly designed to widen access.
The City & Guilds Neurodiversity Index 2025 shows that many employers cite lack of knowledge as a barrier to inclusion. Knowledge gaps can be addressed. Structural barriers embedded in national frameworks require something more fundamental: a rethink of what we are choosing to measure.
Rethinking assessment, not ability
We should be sceptical about whether hours logged or formats completed tell us anything meaningful about competence. Take an apprentice engineer who can rebuild a gearbox to tolerances exceeding industry standards. If success hinges on producing a written explanation of mechanical principles in a prescribed format by a fixed deadline, what is really being assessed?
In many cases, it is not mastery of the role, but the ability to translate tacit knowledge into academic-style output under artificial constraints. We have moved beyond simplistic ideas about learning styles, yet assessment design still assumes there is only one acceptable way to demonstrate understanding.
Outcome-led competency frameworks ask a simpler question: can this individual perform the role to the required standard? Time-based frameworks answer a different one entirely.
Why this is a leadership and business issue
Exclusion is not only a social problem. It is a workforce and succession risk. Apprenticeships are one of the few structured pipelines organisations have for developing future specialists, managers, and leaders. Filtering capable individuals out at qualification stage narrows that pipeline long before leadership potential can be identified or developed. It also compounds skills shortages by removing exactly the individuals who could have grown into hard-to-fill roles.
For employers investing through the Levy, this represents a significant misallocation of resource. Frameworks are not neutral in their effects. They embed assumptions about how individuals process information, manage attention, and demonstrate competence. Each ‘standard’ requirement becomes a potential exclusion point, and therefore a point at which investment fails to translate into capability.
What can L&D do within existing constraints?
L&D teams cannot rewrite national standards, reshape funding models, or resolve the structural failures that appear earlier in the education system. Nor should they be expected to. Much of what drives exclusion in apprenticeships sits upstream, embedded in policy decisions and qualification design choices that sit well beyond the remit of any single organisation.
What L&D leaders do have, however, is influence at a critical translation point: where national frameworks are turned into lived learning experiences, assessment practice, and progression decisions. That makes their role unusually powerful.
Within those constraints, three changes make a material difference:
- Offer more than one way to evidence competence. If the outcome is understanding safety protocols, allow demonstration, verbal explanation, or written submission. The standard is the outcome, not the format
- Separate learning from credentialing. Build practical skill and context first, and introduce formal assessment once competence is established, rather than testing both simultaneously
- Design for cognitive flexibility. Extended deadlines, chunked submissions, and alternative assessment environments reduce false negatives by measuring understanding rather than processing speed
Across schools, colleges, training providers, employers, and policymakers, responsibility for inclusion is distributed, even if accountability is often blurred. L&D leaders sit at the intersection of these systems, with a unique ability to see where design assumptions break down in practice.
Using all of your talent options
The individuals filtered out by current frameworks are not edge cases. They represent a significant proportion of the capability organisations say they cannot find. When assessment design changes, progression changes. When progression changes, pipelines widen. And when pipelines widen, skills shortages ease not through rhetoric, but through better use of the talent already available.
If apprenticeships are genuinely about building skills for the economy, then alignment between assessment and real-world performance is not a technical detail. It is the point at which opportunity either becomes real or quietly disappears.
L&D leaders cannot solve this alone. But they are among the few actors with the proximity, insight, and credibility to change how the system operates where it matters most.
Michelle Carson is Chairwoman and Founder of Holmes Noble