Monitoring, Evaluation and Learning (MEL) for Complex Programs in Complex Contexts: Three Facility Case Studies (Part One)
The challenges of MEL in Facilities
Most development aid projects seek to tackle complex development problems in complex political and institutional contexts. Their objectives include supporting positive change, and change necessarily means a renegotiation of power and resources. It means understanding the interests, motivations and incentives of those with a stake in keeping, or changing, the status quo. These interests are often hidden from outsiders and hard to predict until people act or exercise power and agency.
While there is a growing body of evidence about how to design and implement programs that respond to this complexity, little has been written on best practice for the monitoring, evaluation and learning (MEL) of such programs. Further, where evidence does exist, it focuses on single-sector projects, not on MEL for aid portfolios, which often use a range of modalities that target a variety of development problems.
This knowledge gap has specific implications for the high-value, multi-sector ‘Facilities’ that Abt Global manages on behalf of the Australian Department of Foreign Affairs and Trade (DFAT): portfolios of amalgamated investments, usually combined for cost effectiveness, delivery efficiency and, sometimes, development contribution. Specifically: how do we judge the overall performance of each Facility? Is it possible to meaningfully aggregate results from such varied portfolios? How much ‘contribution’ to a high-level goal is required to demonstrate the impact of development work (beyond the output level), and how can a convincing argument for it be constructed? Do we have the skills to monitor progress in real time and adapt our programs accordingly? And, most importantly, to what extent does a project-based framework hinder or help MEL of Facility-wide performance?
Given that Abt manages three such Facilities (KOMPAK Indonesia, the Governance Partnership in Papua New Guinea, and the Partnership for Human Development in Timor-Leste), we researched these questions and summarize our findings in this two-part blog. This first part details our approach and our motivation for conducting the research, including the challenges of doing MEL in Facilities and the implications for the aid industry. Part two will discuss the findings.
Our research used a case-study approach, interviewing key MEL staff from each of the three Facilities listed above. Our key take-away? Traditional forms of monitoring and evaluation, in which the primary focus is on accountability, ex-post learning and evaluation, and linear models of change, do not lend themselves well to the Facility model.
This stems from one simple fact: conventional forms of MEL are based on a largely linear project model, one that works in simple change contexts where there is a clear line of sight from inputs and activities to outputs and outcomes. In complex portfolios working in complex political contexts, where institutional or behavioural change is the underlying goal, this model is less effective. In addition, because the justification for the Facility model includes cost effectiveness and delivery efficiency (see, for example, the PNG Investment Design), MEL frameworks must also be able to explain achievements at the portfolio level as well as at the individual project level, showing that the whole is greater than the sum of the parts.
In each case examined, our research found that, because so little has been documented in the international literature on MEL for Facilities in donor practice, teams needed to develop their own approaches to overcome these challenges.
Part two of this blog explores the seven areas where lessons have emerged.