Webcast Recap

"There is a knowledge gap between global targets and locally owned goals," said Sallie Craig Huber, global lead for results management at Management Sciences for Health(MSH). The seventh meeting of the Advancing Policy Dialogue on Maternal Health series–cosponsored by the Global Health Council, MSH, and PATH–comes at a critical time as world leaders meet next week at the UN Summit to review progress toward the Millennium Development Goals (MDGs).

Panelists Marge Koblinsky, senior technical advisor, John Snow Inc.; Ellen Starbird, deputy director, U.S. Agency for International Development (USAID); and Monique Widyono, program officer, PATH, discussed strategies for improving maternal health evaluation methods while balancing the interests of donors and beneficiaries.

Maternal Health Indicators: Contact vs. Context

"Skilled birth attendants [have] become the strategy [for improving maternal mortality rates], but one size does not fit all," said Koblinsky. The proportion of births attended by skilled birth attendants is a key maternal health indicator; however, it is not sufficient and says little about what the attendants actually did during the birth.

Citing research conducted by the World Health Organization and others, Koblinsky demonstrated how other indicators such as near-miss morbidity, rates of cesarean section, and contraceptive prevalence rates (CPR) are better aligned with maternal mortality outcomes. "CPR is much more closely linked with the outcome we desire as [contraception] reduces pregnancies for those at higher risk and reduces unwanted births and unsafe abortions," said Koblinsky.

"Are the present benchmarks enough?" asked Koblinsky. "The answer is no….Indicators based on contact with skilled birth attendants focuses attention on contact, not on the quality of care or event context."

Monitoring and evaluation (M&E) should generate results that show connections at all levels of the health system and drive progress. "Maternal health strategies need to differ based on context, infrastructure, and life-saving interventions. We need indicators of context, systems capacity, referral networks, and transportation," concluded Koblinsky.

Qualitative Data Is Necessary

"When we talk about monitoring and evaluation, transparency and accountability, it's really critical to engage [in a discussion] on how we gauge progress," said Widyono. In the field, "collection of data varies widely and depends on the capacity of those collecting, aggregating, and analyzing the information," said Widyono. Such inconsistencies demand increased investment in local research capacity and qualitative analysis.

"There is a lack of attention paid to developing local, sustainable research capacity," said Widyono. "We have an obligation to build local research capacity and disseminate findings in collaboration with the people who are going to be affected by this data," she said.

Such engagement also provides an opportunity for feedback. This "qualitative data helps to reinforce, illuminate, and deepen the understanding of what this quantitative data is showing on the ground," said Widyono. Moving forward, policymakers, donors, and program managers will need to find a balance between these two sets of data and work together to galvanize action.

Innovation and Research

"We really need to think about monitoring and evaluation and research and innovation as a continuum," said Starbird. "They reinforce each other and play different roles in helping us understand what makes programs work or why they are not working." She noted that the Obama administration's Global Health Initiative will work with local stakeholders to build country ownership of M&E systems and harmonize indicators.

"We have a myriad of indicators that we expect people to monitor, collect data for, and report back to headquarters in a way that has not given countries and programs the freedom to be country-specific," said Starbird. Therefore, "one of the goals is to minimize the reporting burden and better coordinate around indicator definition with other donors," she said.

To strengthen M&E for maternal health, Starbird called for new indicators as well as new ways of thinking about data analysis. "Having a results framework is really important to do good monitoring and evaluation," she said. Evaluating the relationship between inputs, outputs, outcomes, and impacts requires a wide range of data resources so we can "get under the numbers" and determine what needs to be improved.

"It's really important to have realistic goals, otherwise it's difficult to put programs into place and get where we want to go," said Starbird. She said that MDG 5.B, which calls for universal access to reproductive health, "is great, but there's never going to be universal access to reproductive health. If we really want to make progress we need to define something that is achievable and is something we can come together around."

In conclusion, Starbird said it is necessary to provide "countries with the room to do what needs to be done locally, so we can better understand these concepts rather than imposing indicators on everybody."

Drafted by Calyn Ostrowski.


  • Sallie Craig Huber — Global Lead for Results Management, Management Sciences for Health
  • Ellen H. Starbird — Director, USAID Office of Population and Reproductive Health
  • Monique Widyono — Gender Advisor for Gender Based Violence Prevention and Response, Technical Leadership and Research Division, Office of HIV/AIDS, U.S. Agency for International Development