A group of evaluation practitioners gathered yesterday for the launch of NPC's new report on innovations in measurement (Dan Corry of NPC asked what the collective noun for evaluators would be; I humbly suggest a 'distribution').

The report is worth a look for a range of audiences, but what was interesting for the more specialist crowd who attend such launches was the feeling that many of these innovations are not that new, although perhaps how they are used is evolving.

An excellent panel (including Siobhan Campbell, Anne Kazimirski and Sarah Mistry) discussed the issues raised, and in my view touched on a range of reasons why this innovation, or change, is occurring. Their answers, combined with a few of my own views, suggest to me that there are four drivers behind the wider use of the approaches discussed in the report:

  • Disruptive innovation: the classic form of innovation, in which a new technology is applied by new organisations to old problems, shaking up the existing market. Remote sensing and some of the approaches that sit under the umbrella of big data belong here, and they are definitely outside the traditional skill set of many evaluators.
  • Funder pressure/support: two case studies that were presented (Blackpool Better Start and LIFT) talked about the role of funders in enabling older ideas to be delivered. The use of data linkage, for example, is not novel in evaluation, but the ability of a large programme (Better Start) to bring stakeholders, and their data with shared unique identifiers, to the table is something that currently only a large wedge of lottery cash can achieve (a minimal sketch of what linking on a shared identifier looks like follows this list).
  • Dissatisfaction: in different ways, the panel talked of frustration and dissatisfaction with approaches to evaluation that prioritise a particular method. The dreaded randomised controlled trial got a couple of negative mentions, and my suspicion is that this comes from a growing acceptance that its dominance of much measurement discourse over the last decade has possibly done more harm than good, because it blocked out all else. I also think there is dissatisfaction with standard pre- and post-intervention surveys, as their ability to really understand behaviour is weak. This is encouraging user-centred approaches, such as prioritising feedback, to be more valued.
  • New questions: finally, and perhaps the inverse of dissatisfaction, is the growing interest in learning within organisations as a valid part of evaluation and measurement. Impact management and theory-based evaluations are ways to apply the test-and-learn approaches of things like developmental evaluation, which encourages individual organisations to own and value their learning questions. I think this has been encouraged by a combination of learning-led funders and, within the youth sector, initiatives such as the Centre for Youth Impact.
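
To make the data-linkage point concrete, here is a minimal, hypothetical sketch of what linking records on a shared unique identifier actually looks like. Every dataset, column name and identifier below is invented for illustration; real programmes such as Better Start work with far messier data and under strict information-governance agreements, and the hard part is getting stakeholders to agree on and supply the identifier, not the join itself.

```python
import pandas as pd

# Hypothetical extract from a children's centre service:
# one row per child, keyed on an identifier shared across stakeholders.
centre_visits = pd.DataFrame({
    "child_id": ["C001", "C002", "C003"],
    "visits_last_quarter": [5, 2, 8],
})

# Hypothetical extract from a health partner, keyed on the same identifier.
health_checks = pd.DataFrame({
    "child_id": ["C001", "C002", "C004"],
    "development_check_passed": [True, False, True],
})

# Once everyone supplies the shared identifier, the linkage is a simple join;
# an outer merge keeps children who appear in only one of the datasets.
linked = centre_visits.merge(health_checks, on="child_id", how="outer")
print(linked)
```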

At Renaisi, in partnership with the Centre for Youth Impact, we hosted a conversation last year exploring the challenges in the impact measurement space. I think this report highlights some of the same problems. To start to fix them, my top three hopes are:

  1. Organisations take more risks in trying new approaches to evaluation (and are supported to do so by their funders)
  2. Funders massively prioritise the data challenge, and use their collective weight to reduce the barriers to entry for smaller organisations to benefit from methods such as data linkage
  3. Evaluators collectively call out practice that we know isn't good enough at a time of scarce resources. The sector itself needs to be braver in saying when an approach isn't good enough, timely enough or helping to improve outcomes. If it's not doing those things, then what's the point?