Five messages for the new Australian aid performance framework: a collective view from MERL wonks
June 2, 2020
The key takeaway? Regardless of topic or location (from working sub-nationally in PNG to working with Indigenous communities) or organisation (consultancy, academia or managing contractor), the same message came up over and over again: MERL (monitoring, evaluation, research and learning) is not just there to measure effectiveness but to drive better program and policy making. In other words, using MERL to measure things ex-post only tells the taxpayer what they bought. It says nothing about whether that money was well spent. And, by and large, the overwhelming opinion in the room was that we are doing far too much of the former (ex-post measurement) and not nearly enough of the latter (investing in real-time monitoring, learning, and user-focused analysis and evaluation) in Australian aid.
So what are the implications of this discussion for the performance framework Australia is developing to sit alongside its new aid and development policy?
In our collective opinion, the single biggest message for the new performance framework is this: MERL must be understood as core to (i) managing the risk of program failure and (ii) driving better implementation of Australia’s aid and development program. It is more than an ‘add-on’ used to communicate to domestic constituents what the aid program achieved (or didn’t).
So how could the new framework do this? We think there are at least five key things worth highlighting here (probably more; you tell us where we’ve got it wrong), with a few examples of where we’ve seen it done well:
1. Quarantine 5-10% (ideally 10%) of all ODA budgets for investment in strong program- and portfolio-level MERL systems and frameworks. The quality of DFAT’s results reporting and communication to parliament and the public at the aggregate (framework) level will only be as good as the investments made in MERL at the country and program level. This is not a new suggestion (see here and here). Investing 10% of a program budget in MERL is a far better outcome than seeing 100% of that budget ‘lost’ because the wrong strategy was employed and change did not occur as hoped (or, in the worst-case scenario, things were made worse). There are lots of excellent examples where Australian aid programs are already designing and implementing high-quality MERL frameworks – such as Investing in Women or MAMPU – and these can be exemplars for other parts of the aid program.
2. Ensure that the ‘L’ is a priority within the MERL agenda, to promote adaptive management. In the context of a tightening aid budget, investments in learning and adaptation are often the first to go. They are seen as a ‘nice to have’, with whatever funds remain put towards the bare minimum (results reporting for accountability purposes). Yet global evidence shows that aid programs are more effective (and therefore spend taxpayer money better) if investments are made in applied analysis, learning and rapid-cycle evaluation during implementation. These processes reduce the risk of program failure by ensuring that activities can adjust and choose the path most likely to achieve impact (rather than doggedly sticking with something we only find out three years later was never going to work). Indeed, if we accept, as Owen Barder once wrote, that ‘development is what happens when an economic, social and political system improves its ability to adapt and evolve’, then learning is not just a way of managing risk and managing aid projects. It is central to the development process itself.
3. Continue to invest in applied, practical and user-focused research and analysis – especially given Australia’s increased use of facility modalities. In instances such as facilities, where program designs are less prescriptive than usual, it is critical to invest in analysis and applied research to understand the operating context. When there is more flexibility, funding applied to practical analysis helps Australia guard against ‘change for change’s sake’ by ensuring that implementation is informed by evidence about what is working, what isn’t, and why. This protects aid investments from unwarranted criticism and provides the basis for informed dialogue between governments.
4. Start with the end user and promote self-reliance. Core to the White Paper and the new aid policy is promoting strong and effective states that can foster growth, equity and stability in our region. Yet to do this, partners need to identify and solve their own problems; Australia cannot do it for them. MERL is one way to promote self-reliance: policy makers, program managers and stakeholders who are involved in designing, implementing, reviewing and reflecting on programs are more likely to want to engage with Australia and support its aid program. Practical ways to do this include co-design, partner-led evaluations, review and reflection processes, and investments to strengthen counterpart institutions’ own monitoring systems, such as the shared approach to MERL in the Solomon Islands Justice and Governance programs.
5. Integrate MERL into programming at all levels of Australian aid. This means DFAT committing to support officers overseas, implementing partners and local partners to understand core MERL concepts and processes, and to be able to apply (or oversee, or review) them as part of their day-to-day responsibilities. Examples like M&E House in Timor-Leste or the Learning Facility in Laos provide lessons on how to do this in a systematic way.
Using MERL only for accountability purposes in Australian aid comes with risks. Focusing solely on ex-post results reporting prevents MERL from helping Australia mitigate the risk of program failure and drive better decisions about where to spend precious aid resources and why. Fortunately, the problem is not a lack of MERL methods (there are many) but how they are valued, prioritised and resourced. The new aid policy is an excellent opportunity for Australia to draw on examples where it is already applying MERL in effective and thoughtful ways, and to signal in the new performance framework that it plans to take these lessons to scale (and reinvigorate the MERL agenda) across the aid program.
This blog was co-authored by Lavinia Tyrrel (Abt Global), Lucy Moore (Abt Global), David Green (Clear Horizon), Damien Sweeney (Clear Horizon), Linda Kelly (Praxis Consultants), and Chris Roche (La Trobe Institute for Human Security and Social Change).
This post was first published on the Devpolicy blog.
This blog also appears on the Governance and Development Soapbox.