Catherine Dock explores how L&D can demonstrate its value and measure impact in fast-moving, complex technology environments. From stakeholder engagement to meaningful metrics, she outlines practical strategies for aligning L&D with business outcomes—and why communities, clarity, and curiosity are key to making an impact that stakeholders notice and trust.
One of the hardest things in Learning and Development is demonstrating to stakeholders the real impact of learning on organisations and their strategies. We know how powerful showing the return on investment (ROI) can be when arguing the case for learning programmes and interventions that drive better outcomes. It’s also powerful for understanding the benefits of those interventions in real terms, to drive change and continuous improvement.
Whilst Kirkpatrick’s four levels of training evaluation (1959) include behavioural change (level 3) and results for the organisation and the individual (level 4), showing ROI through learning interventions and outcomes can be challenging in the technology space because of the sector’s pace of change and complexity. It’s not always obvious how to link the two effectively.
Why is technology such a challenging area to measure?
Technology presents several challenges for L&D professionals:
- Identifying the best technology stakeholders to work with, especially in larger, matrix-based organisations
- Navigating a diverse set of technical specialisms to measure against
- Picking out the most appropriate and impactful measurements within each of these specialisms to show a tangible correlation between learning interventions and programmes, and improvements in technology outcomes
- Working out how to measure learning impact most effectively
- Harnessing the role of GenAI
- Working in an environment of moving goalposts: the Volatile, Uncertain, Complex and Ambiguous (VUCA) conditions of an ever-changing sector, where innovation and the pace of change can be both a blessing and a curse
I’ll take each of these in turn and look at how L&D professionals can address them.
Identifying technology objectives and stakeholder partners
It’s important at the beginning of the journey to ask, “What are we measuring for?” Partnering with Subject Matter Experts (SMEs) who understand their technical area is critical to answering that question. Most organisations will have a set of technology objectives. These could be quite high level: Objectives and Key Results (OKRs) at the top, cascaded down into more specific Key Performance Indicators (KPIs) for individual specialisms or teams, with metrics by which to measure those KPIs.
Conversations early in a project or programme lifecycle are also useful for understanding which outcomes are important for learners and organisations. These could be short-term goals, such as delivering a specific module, product or platform more effectively, with less downtime and higher customer satisfaction; or longer-term outcomes around future-proofing skills. Ideally, L&D should be involved early in the strategy and planning phases of technology organisations’ programmes of work to get a proper understanding of the expected outcomes to be measured.
How do we find these stakeholders when there may be multiple SMEs with similar titles in larger organisations? Some will be contextual to the specific part of the organisation being measured. In wider-impact programmes, creating academies of learning by topic (eg Cloud) and linking these to communities of practice (CoPs) or Special Interest Groups (SIGs), can act as a honeypot to bring expertise to the table in a semi-structured way.
Providing a conduit for social learning through shared interest has been cited as good practice many times, including the “20” part of 70:20:10 (McCall, Lombardo, Eichinger, 1996) and Social Leadership (Stodd, 2014), both of which drive community and collaboration as a ‘tone from the top’. Technology leaders should already have an indication of who their SMEs are. With support from L&D organisations, these individuals and groups can develop supportive environments for social learning that are greater than the sum of their parts.
Navigating the different technical specialisms
Understanding the types of technology KPIs for different specialisms makes it easier to develop the metrics to measure success.
The following is not an exhaustive list, but it covers some common technology areas and examples of KPIs that may be applied when setting objectives for learning outcomes:
Digital transformation – Measuring the financial impact of digital technology implementations
KPI examples:
- Employee productivity
- Adoption and performance metrics
- Reliability and availability metrics
IT Service Metrics – Measuring efficiency, cost-effectiveness, and overall performance of IT
KPI examples:
- Customer ticket volume
- Ticket resolution time
- Unresolved tickets per employee
- Tickets reopened
Cybersecurity – Monitoring security posture, incident response, and vendor risk management
KPI examples:
- Preparedness level
- Security awareness level
- Phishing attacks repelled
- Known vulnerabilities
Software development – Balancing agility, quality, and customer value within development processes
KPI examples:
- Code quality
- Developer to QA churn rate
- Code cleanliness
- Service uptime
Testing/QA – Verification and validation of code and builds
KPI examples:
- Test coverage
- Defect metrics
- Test efficiency and effectiveness
- Test process metrics, eg test execution, % of automated tests
- Quality Assurance pass rates
DevOps & DORA Metrics – Measuring how well your DevOps initiatives work
KPI examples:
- Defect escape rate
- Flow metrics
- CI runs per day
How to define what measurements work for each specialism
The above measurements are not a “one size fits all” solution; they are typical examples for a selection of technology areas. Learning professionals should be mindful that the KPIs and their metrics need to be fit for purpose for each particular specialism.
By having honest and open conversations with stakeholders, particularly technical stakeholders, as early in the process as possible, learning professionals can start to understand what’s most relevant to their organisation and how to combine more traditional learning metrics with technology-specific examples. Bringing in a community of practice (CoP) or Special Interest Group (SIG) to verify which KPIs are most effective for measuring change in their particular area is a great way to engage those impacted directly by the learning and its associated outcomes.
Using language that technologists recognise also helps to build context and mutual understanding. Whilst learning professionals, especially those working across large functional areas, wouldn’t be expected to know every piece of specialist terminology, a broad understanding of the basics for that area is useful, so it’s worth learning the key concepts and terms.
The use of AI for generating measurable learning outcomes
It would be foolish to ignore the potential role of GenAI in generating suggestions for learning-related KPIs; indeed, this kind of structured output is something it lends itself to well.
An example prompt would be: “Generate some examples of technology KPIs for software testing and then, how those could be applied to Kirkpatrick levels 3 and 4”.
I used that prompt to generate the examples below, then drew on my own background in software testing to validate those outputs as appropriate and accurate. Generated KPI examples can be used as a springboard for discussions with SMEs and other relevant stakeholders about the suitability of each measure.
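As an illustration only, here is a minimal sketch of how that prompt could be sent to a GenAI model programmatically, assuming access to the OpenAI Python SDK; the model name is a placeholder, and a standard chat interface works just as well.

```python
# Minimal sketch: sending the KPI-generation prompt to a GenAI model.
# Assumes the OpenAI Python SDK is installed and an API key is configured;
# the model name is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Generate some examples of technology KPIs for software testing and then, "
    "how those could be applied to Kirkpatrick levels 3 and 4"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model your organisation has approved
    messages=[{"role": "user", "content": prompt}],
)

# The output is a starting point only: validate it with an SME before using it.
print(response.choices[0].message.content)
```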
Measuring technology learning impact
Using examples from the testing world, let’s look at how these KPIs can be applied against Kirkpatrick levels 3 (behaviour) and 4 (results).
Level 3: Behavioural change
1) Continuous learning application
- KPI: Implementation rate of new testing techniques
- Measurement: Number of new testing approaches applied within 30 days of training
- Target: Each team member applies one new technique quarterly
2) Shift-left testing implementation
- KPI: Defect detection phase distribution (requirements, design, coding, testing)
- Measurement: Track when defects are found in the development lifecycle
- Target: 70% of defects identified before formal testing phase
For these, we can see how the targets bring the KPIs to life: the first example verifies that the testers have learned the right techniques to improve outcomes, and the second validates that they are using those techniques correctly. In the testing world, verification (‘did we build the system right?’) and validation (‘did we build the right system?’) are fundamental concepts, so these KPIs are very relatable to testing leaders.
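To show how the shift-left KPI could be tracked in practice, here is a minimal sketch that calculates the defect detection phase distribution from a defect log. The phase names and counts are hypothetical; in reality the data would come from your defect tracking tool.

```python
# Minimal sketch: defect detection phase distribution for the shift-left KPI.
# Phase names and counts are hypothetical, for illustration only.
from collections import Counter

# Phase in which each logged defect was found (hypothetical data)
defect_phases = [
    "requirements", "design", "design", "coding", "coding",
    "coding", "testing", "testing", "coding", "design",
]

counts = Counter(defect_phases)
total = len(defect_phases)

for phase in ("requirements", "design", "coding", "testing"):
    share = counts.get(phase, 0) / total * 100
    print(f"{phase:>12}: {share:4.0f}% of defects")

# Target: 70% of defects identified before the formal testing phase
pre_testing_share = (total - counts["testing"]) / total * 100
print(f"Found before formal testing: {pre_testing_share:.0f}% (target: 70%)")
```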
Level 4: Business results
1) Reduced time-to-market
- KPI: Release cycle duration reduction
- Measurement: Compare release timeframes before/after testing improvements
- Business Impact: Faster revenue generation and market responsiveness
- Calculation: (Previous release time – Current release time) × Value of time saved
2) Cost of quality reduction
- KPI: Cost of fixing defects (early vs. late detection)
- Measurement: Calculate total remediation costs compared to previous periods
- Business Impact: Reduced rework costs, support costs, and technical debt
- Calculation: (Previous remediation costs – Current costs) / Previous costs × 100%
For these level 4 examples, we see how the KPIs are scaled up to product- or programme-level outcomes, for example the impact of the shift-left testing measured at level 3 and what that means on a larger scale. Again, these metrics can be combined with more traditional learning evaluation tools to gauge the human element of the training on those results.
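As a worked illustration of the two calculations, the short sketch below plugs in entirely hypothetical figures; real numbers would come from your own release and defect data.

```python
# Minimal sketch of the two Level 4 calculations, using hypothetical figures.

# 1) Reduced time-to-market:
#    (Previous release time - Current release time) x Value of time saved
previous_release_weeks = 12    # hypothetical
current_release_weeks = 9      # hypothetical
value_per_week_saved = 25_000  # hypothetical value of each week gained

time_to_market_benefit = (
    (previous_release_weeks - current_release_weeks) * value_per_week_saved
)
print(f"Time-to-market benefit: {time_to_market_benefit:,}")  # 75,000

# 2) Cost of quality reduction:
#    (Previous remediation costs - Current costs) / Previous costs x 100%
previous_remediation_costs = 80_000  # hypothetical
current_remediation_costs = 56_000   # hypothetical

cost_of_quality_reduction = (
    (previous_remediation_costs - current_remediation_costs)
    / previous_remediation_costs
    * 100
)
print(f"Cost of quality reduction: {cost_of_quality_reduction:.0f}%")  # 30%
```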
How to deal with the VUCA environment and changing priorities
The current pace of change in testing organisations can make it challenging for learning professionals to keep up. A pragmatic approach is to agree a regular review cycle for the learning KPIs and metrics with key stakeholders, one that factors in potential changes in objectives or in learning requirements for end users. Reviews could be quarterly, bi-annual or annual, but the measures themselves shouldn’t change too often. It’s also important to keep track of what has changed, as this can indicate areas of wider concern.
Applying Continual Service Improvement (CSI) methods, similar to those used in ITIL, to strategic learning design and measurement can also be a good way to match the methodology, and to a great extent the language, of Technology Service Management. This also ensures a continuous feedback loop from learners and regular input from key stakeholders, keeping learning measurement timely and relevant.
Understanding and engaging
The challenges of generating learning impact KPIs and metrics to measure behavioural and business change in technology can be mitigated by understanding what’s important to key stakeholders in their specific area and then applying a learning lens to those priorities.
Engaging SMEs and stakeholders in these conversations is key to ensuring measurements are fit for purpose. Using CoPs and SIGs and developing learning academies linked to these democratises the process across the knowledge base. Using CSI techniques to regularly assess and refine the measurements allows for some flexibility and change, which is needed in a VUCA environment.
Catherine Dock is an independent strategic L&D consultant.