Our approach to impact evaluation
How we build a stronger understanding of how youth employment programmes work, who they work for, how change happens, and the conditions that enable it
We build evidence to understand whether the support provided genuinely makes a difference to young people, why and how it does, and whether it would work again.
We commission impact evaluations of youth employment programmes to understand “what works” for young people, particularly those facing the greatest barriers to good work.
This is why every impact evaluation we commission includes a theory-informed implementation and process evaluation (TIIPE), shaped by our Theory-informed IPE guidance.
Our evaluations influence the decisions of policymakers and funders. They also grow the pipeline of effective practice delivered by employers and support organisations, helping young people furthest from the labour market to find and keep good work.
Understanding programmes and services that support young people towards and into good work.
Understanding how employers can successfully recruit, support and retain young people.
Understanding places and systems, including the local policies and conditions affecting youth employment.
We know that employers and delivery organisations vary in their capacity to engage with robust impact evaluations. To give us the best chance of generating clear, interpretable findings, we identify and help create promising programmes and work towards full-scale evaluations. This means we can use our resources strategically to generate the strongest possible evidence in support of our mission.
We ask a series of exploratory questions about the programme, how it’s believed to work, and how it might be evaluated.
Where appropriate, we commission structured feasibility studies to explore these questions in partnership with delivery organisations and independent evaluators.
If the feasibility stage is promising, we may support a pilot evaluation.
This helps us understand whether the proposed design works in practice and helps us identify and address risks that may undermine a larger trial – such as weak referral flows, inconsistent delivery, or data loss.
A successful pilot gives us the confidence to move forward and invest resources in a full impact evaluation.
When we are confident in both the programme and the evaluation design, we may fund a full-scale impact evaluation to understand effectiveness.
Where feasible and appropriate, we prioritise Randomised Controlled Trials (RCTs) as the strongest method for generating causal evidence.
These involve creating two equivalent groups of young people through random allocation. One group receives the programme or intervention and the other receives business-as-usual support. The outcomes of both groups are then compared.
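The random allocation and outcome comparison described above can be sketched in a few lines. This is a purely illustrative example, not our actual trial procedure; the participant IDs, group sizes, and outcome values are hypothetical.

```python
# Illustrative sketch of an RCT's core mechanics: random allocation into
# two groups, then a simple comparison of average outcomes.
import random

def randomise(participant_ids, seed=42):
    """Randomly split participants into treatment and control groups."""
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]  # (treatment, control)

def estimate_impact(treatment_outcomes, control_outcomes):
    """Estimate impact as the difference in mean outcomes between groups."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_outcomes) - mean(control_outcomes)

treatment, control = randomise(range(100))
```

Because allocation is random, the two groups are equivalent on average, so any difference in outcomes can be attributed to the programme rather than to pre-existing differences between the groups.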
In contexts where randomisation is not viable, we may support Quasi-Experimental Designs (QEDs), in which comparison groups are constructed through other means, such as administrative data, so that we can still draw credible comparisons.
Some of our evaluation work, such as Connected Futures, focuses on understanding and supporting change across whole systems rather than testing a single programme. In these contexts, the evaluation journey from feasibility to randomised trials may not always be appropriate or sufficient. Instead, we build an evidence base about how systems operate, adapt and improve over time using a mix of:
We see these approaches as complementary to our work to produce impact evaluations of specific interventions.