Unpacking complex programmes in advance of an impact evaluation

Oct 4, 2023

Jane Lewis is Managing Director UK at the Centre for Evidence and Implementation (CEI), a global evidence intermediary. She is a member of Youth Futures Foundation’s Evaluation Expert Advisory Group (EEAG). Below, she explores the importance of understanding a programme’s goals, activities, and delivery context in advance of conducting an impact evaluation. 

 

Unpacking complex programmes 

One of the most important aspects of doing good evaluation is the developmental work and collaboration with practitioners that precedes the evaluation itself. Clarifying the programme’s goals and activities and developing or refining a theory of change is often key here. In evaluations involving several delivery organisations, co-designing a shared practice model is also an important stage. Working together to design an impact evaluation that fits with the programme’s aims and delivery context is of central importance, and this often involves supporting staff who might have had little or no experience of randomised controlled trials (RCTs) or quasi-experimental designs (QEDs). At the Centre for Evidence and Implementation, we’ve been doing this work recently with programmes and organisations in areas including youth work, mentoring and foster care support, as well as in Youth Futures Foundation’s area of youth employment.  

Each time I’m struck again by the importance of this work, and by how enlightening it is. 

The programmes we work with are often at the most complex end of complex. Disadvantage is held in place by entrenched systemic forces, so programmes and services often involve multi-layered approaches. For example, in addition to working with young people, they might involve work with families and peers or work aimed at changing school and community attitudes and behaviours. And they’re usually operating within complex local systems of services and support. All of this makes developmental work particularly important.  

 

What the work involves 

There’s a lot happening when we’re doing this developmental work. We’re getting delivery teams to be specific and explicit about the What, Who, Why and How of the programme, and pinning down the internal and external systems factors that will impact on it. We’re also identifying the ‘mechanisms of change’. I’m not much of a fan of that engineering language for the very human processes these programmes involve, but it refers to the ‘active ingredients’: the theory-based reasons why change happens for individual participants. For example, in a mentoring programme, the underlying theory might be that mentoring relationships reframe young people’s negative beliefs and self-identity and stimulate positive responses to challenges, while in a parenting programme, it might be that adopting positive parenting beliefs and behaviours changes responses to children’s challenging behaviours.

This developmental work requires multiple perspectives and an eclectic approach to evidence. It integrates practitioners’ insights and young people’s experiences; learning from case studies, ethnographic research, and published evidence; and programme and implementation theory. It often reveals differences in assumptions among team members, and it can expose thin or contrary evidence for some strongly held views. This can be unsettling, but it also leads to important insights. We’ve seen it lead to decisions to beef up a programme component that wasn’t previously seen as central, to drop a component or an aim, or to initiate work with a partner now recognised as a crucial part of the wider system. It’s also key for reaching agreement on the changes that an evaluation should focus on, and when they can realistically be measured.

As part of this process, we’re also trying to understand what it takes to implement the programme well. If we know what good implementation looks like, and what it will take to get there, then we can design a more robust, theory-led implementation evaluation that will provide essential learning for interpreting impact findings and knowing how to scale what works. 

 

Designing impact evaluations that won’t disturb or distort 

This developmental work is also critical for working out how to build a formal impact evaluation around the programme – without disturbing or distorting it. We need to be honest about how easily and how often evaluations interfere with delivery. Randomisation, consent processes and data collection can all intrude on the careful relationship-building that is often the first stage of practitioners’ work with young people.

Getting this right is one of the most challenging aspects of evaluation work. It needs technical evaluation skills applied with subtlety and pragmatism. It involves understanding and respecting the programme, and holding young people’s experiences at the centre. It’s only by working together closely at the developmental stage that we can get to this point, and design an impact evaluation that is both feasible and a fair test of the programme.   

 

This work matters – and needs to be funded 

It’s a privilege to work with practitioners and young people, and to see their wisdom, insights, values and skills bring a programme to life. It’s so important that funders recognise the value of this work and invest in it. In my experience, evaluations that start with this work are stronger in terms of their relevance, do-ability and quality. And they offer much better pay-back – for the financial investment funders make, and for the trust and time that practitioners and young people invest.  

You can read more about how Youth Futures Foundation supports this type of developmental work here.  