We build evidence to understand whether the support provided to young people genuinely makes a difference, why and how it works, and whether it would work again.

Finding and generating high-quality evidence

As the What Works centre for youth employment, we want to:
  • better understand England’s youth unemployment and inactivity challenge
  • learn what solutions work to address this and how
  • share insights to inform decision making.

We commission impact evaluations of youth employment programmes to understand “what works” for young people, particularly those facing the greatest barriers to good work.

What is an impact evaluation?

An impact evaluation tells us whether a programme, policy or intervention genuinely makes a difference.

It provides information on changes by assessing a programme’s effects on specific outcomes. It can tell us if a programme has a positive or negative effect, and the size of that change.

Impact evaluation also helps identify the causes of these changes.

We also want to know why and how a programme worked, for whom and in what circumstances, and whether it will work again.

This is why every impact evaluation we commission includes a theory-informed implementation and process evaluation (TIIPE), informed by our Theory-informed IPE guidance.

What is theory-informed implementation and process evaluation (TIIPE)?

TIIPE is used to strengthen understanding of how programmes operate and why outcomes are achieved. This includes understanding:
  • what was delivered
  • how change was expected to occur
  • how delivery and context influenced outcomes.

In practice, this means paying close attention to how programmes are implemented, taken up, adapted, and sustained in real settings – not just whether outcomes change on average.

It helps to:

  • interpret impact evaluation findings
  • explain variation across groups and settings
  • identify conditions for success and where programmes need to adapt
  • inform decisions about improvement, further testing or scaling.

We use TIIPE across evaluation stages to support programme design, testing and decision-making.

Before impact evaluation, TIIPE is used to:

  • assess whether a programme is ready for impact evaluation
  • refine programme design and delivery approaches
  • identify what should be tested.

During and after impact evaluation, TIIPE is used to:

  • support interpretation of findings
  • understand how and why outcomes were achieved (or not)
  • inform decisions about sustainability and scaling.

Equity is embedded in the design, delivery and reporting of TIIPE.

Read more about TIIPE in our blog, What it takes to make youth employment programmes work.

Our evaluations influence decisions made by policymakers and funders. They also grow the pipeline of effective practice delivered by employers and support organisations, helping young people furthest from the labour market to find and keep good work.

Our evaluations focus on three key areas of youth employment:

Understanding places and systems – the local policies and conditions affecting youth employment.

Taking a staged approach

We know that employers and delivery organisations vary in their capacity to engage with robust impact evaluations. To give ourselves the best chance of generating clear, interpretable findings, we identify and help develop promising programmes, working towards full-scale evaluations in stages. This means we can use our resources strategically to generate the strongest possible evidence in support of our mission.

Stage one: Feasibility

Can an impact evaluation be done, and would it be meaningful?

We ask a series of exploratory questions about the programme, how it’s believed to work, and how it might be evaluated.

Where appropriate, we commission structured feasibility studies to explore these questions in partnership with delivery organisations and independent evaluators.

Stage two: Piloting

Can we test the evaluation design before scaling it?

If the feasibility stage is promising, we may support a pilot evaluation.

This helps us understand whether the proposed design works in practice and helps us identify and address risks that may undermine a larger trial – such as weak referral flows, inconsistent delivery, or data loss.

A successful pilot gives us the confidence to move forward and invest resources in a full impact evaluation.

Stage three: Full-scale impact evaluation

What difference does the programme make?

When we are confident in both the programme and the evaluation design, we may fund a full-scale impact evaluation to understand effectiveness.

Where feasible and appropriate, we prioritise Randomised Controlled Trials (RCTs) as the strongest method for generating causal evidence.

These involve randomly allocating young people to two equivalent groups. One group receives the programme or intervention; the other receives business-as-usual support. The outcomes of the two groups are then compared.
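The core logic of random allocation and comparison can be sketched in a few lines of code (a minimal illustration only, not our actual evaluation machinery; the participant list, outcome values and function names are hypothetical):

```python
import random
import statistics

def randomise(participants, seed=42):
    """Randomly allocate participants into two equal-sized groups.

    Random allocation means any pre-existing differences between young
    people are spread evenly across both groups, so a later difference
    in outcomes can be attributed to the programme itself.
    """
    rng = random.Random(seed)  # fixed seed here only so the sketch is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (programme group, control group)

def estimate_effect(programme_outcomes, control_outcomes):
    """Estimated impact = difference in mean outcomes between the groups."""
    return statistics.mean(programme_outcomes) - statistics.mean(control_outcomes)

# Hypothetical example: 1 = found good work, 0 = did not.
programme_group, control_group = randomise(list(range(10)))
effect = estimate_effect([1, 1, 0, 1], [0, 1, 0, 0])
```

In a real trial the comparison would use appropriate statistical tests and adjust for attrition, but the principle is exactly this: randomise, deliver, compare.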

In contexts where randomisation is not viable, we may support quasi-experimental designs (QEDs), in which comparison groups are created through other means, such as administrative data, so that we can still generate credible comparisons.

Beyond linear pipelines: Connected Futures and systems change

Some of our evaluation work, such as Connected Futures, focuses on understanding and supporting change across whole systems rather than testing a single programme. In these contexts, the evaluation journey from feasibility to randomised trials may not always be appropriate or sufficient. Instead, we build an evidence base about how systems operate, adapt and improve over time using a mix of:

  • theory‑based evaluation
  • implementation and learning partnerships
  • targeted quantitative analysis.

We see these approaches as complementary to our work to produce impact evaluations of specific interventions.