Make It Count: Evaluating Population, Health, and Environment Programming
Evaluation is the lifeblood of any development effort – it’s how implementers know whether they’re making a difference, determine what to do more or less of, and show funders whether a program is cost-effective. But it’s also an inexact science, never more so than when it comes to complex interventions that cut across sectors.
Speaking at the Wilson Center on March 21, John Pielemeier, who has worked on program evaluation for USAID and others for three decades, described the challenge of evaluation for population, health, and environment (PHE) programs.
PHE programs typically combine interventions for family planning and conservation, and sometimes include major components on women’s empowerment, food security, water and sanitation, climate change, and poverty alleviation – depending on the context and community needs. The complexity and relative scarcity of PHE programs make evaluation an important tool for refining the approach and garnering more donor support, he said. “Is there something in this integrated approach that is better than what you would do in a sector specific approach to either family planning or the environment and health?”
Pielemeier was joined by Dr. Vik Mohan, medical director of Blue Ventures, and Roger-Mark De Souza, director of population, environmental security, and resilience at the Wilson Center, to discuss the current challenges and opportunities of evaluation for PHE.
The PHE Methodology
Mohan said Blue Ventures has made several efforts to measure the impact of their PHE program in southwest Madagascar. The villages they work in are remote and highly dependent on local marine resources. Blue Ventures’ interventions include a community aquaculture program, coordinated marine reserves, provision of voluntary family planning services, training for women to become community health workers, and other basic health services.
Integrating interventions from such a range of sectors can create both positive and negative feedback loops, Mohan explained, which sometimes leads to unexpected outcomes. For example, when Blue Ventures first introduced condoms to one village, fishermen started using them as waterproof covers for their flashlights, allowing them to continue fishing at night and further threatening fish stocks.
In this kind of complex, open system, the “conventional scientific study design is not going to give us all of the answers,” Mohan said. These villages are “a system with multiple interacting parts…where you cannot predict the behavior of the whole from a reductionist looking at the individual components.” He said Blue Ventures staff have a range of ideas and untested hypotheses about how the various interventions interact to produce outcomes. “If what goes on in this complex system is really that complex and we can’t predict it, I have to put my hand up and say ‘I don’t know how our program works.’”
Determining How and Why
Due to the relative youth of the PHE field, peer-reviewed studies on the approach are rare, and the best ways to evaluate programs are still being developed. The first PHE programs appeared in the 1990s, said De Souza, building on the ambitious goals and limited success of what were previously called “integrated conservation and development projects.” PHE distinguished itself as a separate method by focusing on the community scale, he said. Under these circumstances, effective evaluation is not only a tool for measuring impact, but is also critical for continued growth of the approach.
At Blue Ventures, Mohan said they are developing a “realist evaluation model.” First, they are working to come to an internal consensus about how their program works. After a theory has been developed, they plan to test it using data from surveys, interviews, and case histories to determine what variables can serve as indicators of the theory in action. Finally, they plan to track those variables while interventions are underway to iterate on their program as they implement it.
In developing this model, Blue Ventures has grappled with a number of questions that any development evaluation effort must face. For example, how do you ensure that someone you interview isn’t just telling you what they think you want to hear? “The honest answer is, you can’t eliminate bias,” said Mohan, “it’s going to happen – an investigator going into that system is changing that system.” But, he explained, Blue Ventures is working to be mindful of that bias by adding observers to some of their interviews to see how the interaction between interviewers and interviewees might influence responses.
Another challenge to effective evaluation is the amount of time it takes for programs to bear fruit. PHE in general requires a long time horizon, said Pielemeier. In fishing communities, such as the ones Blue Ventures works in, “people tell me they see results in a year or two.” With ecosystems like forests, however, “the results are much slower,” with some projects taking 10 to 15 years to show results.
The Donor Effect
The funder relationship is another variable – maybe the most important one – affecting how development programs approach evaluation. Pielemeier noted the value of connecting with decision-makers in NGOs and supporting government agencies who are invested in program evaluation. These advocates are crucial for understanding how the funding organization will use results, and by extension how evaluation should be structured to suit their needs.
He noted a growing trend of interest in integrated programs on the part of health and environmental organizations, which bodes well for PHE. In order to effectively engage these potential partners and donors, he recommended PHE evaluations consider indicators that those constituents value in their own work. For example, he noted his environmental colleagues were most concerned with indicators of ecological change, while those who worked in health focused on rates of unintended pregnancies and child mortality.
Funding organizations are nearly universally interested in what can be quantified, Pielemeier said:
In terms of qualitative versus quantitative, I think in general there is a push for more quantitative evaluative information and more focus on quantitative indicators, which I think is really hard for the development world to deal with…because it takes a long time for things to change, and they don’t change in the same way from village to village, community to community, and country to country.
It is possible to use qualitative measures with donors, said Mohan, but it requires a close relationship with the donor – one where they are engaged in the process.
Politicians, on the other hand, are often more swayed by qualitative indicators, such as case studies, than program managers in NGOs and government, said Pielemeier. Sharing an example of a PHE program he worked on in the Philippines where the support of municipal officials was instrumental, he said the “mayors who were talking to each other weren’t giving numbers to the other mayors, they were talking about how things were changing in their communities from their perspective as a mayor and in language that other mayors would understand.”
Casting Light on the “Invisible Changes”
Regardless of the relationship between donor and implementer, Pielemeier emphasized one of the primary goals of evaluation in PHE needs to be highlighting the unique added value of integration, the “invisible changes.”
Pielemeier recalled the effects of PHE projects he saw in the Philippines: Men and women were taught together about the environmental impacts of big families, giving them the opportunity to learn from each other; men learned about family planning and women learned about sustainable livelihoods in a way that probably would not have occurred in a more narrowly focused intervention. The challenge of PHE evaluation is showing these changes on paper.
The next generation of PHE programs, including Blue Ventures, are building on the tools that already exist and developing new evaluation methods to tackle this challenge, said Mohan. There are still many questions to be answered about methodology and indicators, but he expressed confidence: “Taking the lid off the black box and understanding how PHE works offers a really exciting opportunity to communicate the exciting work that we do.”
Drafted by Benjamin Dills, edited by Schuyler Null