Research and Development is a semi-regular feature on Democracy Speaks that highlights cutting-edge, peer-reviewed research that is particularly relevant for Democracy, Human Rights, and Governance (DRG) practitioners. R&D is curated by the Research Practice at IRI’s Center for Global Impact.
For this installment, we are featuring a piece on developing bottom-up, community-based indicators of important DRG concepts like peace, conflict and security:
Firchow, Pamina, and Roger Mac Ginty. 2017. “Measuring Peace: Comparability, Commensurability, and Complementarity Using Bottom-up Indicators.” International Studies Review 19 (1):6–27. https://doi.org/10.1093/isr/vix001.
What’s the argument?
How do we know what peace looks and feels like to citizens? How would Democracy, Human Rights, and Governance (DRG) development programs change if partners and beneficiaries contributed to the measurement of program outcomes?
How we measure concepts like peace, democracy and governance has important implications for how we design programs and policies to help achieve the outcomes we want. In “Measuring Peace,” Firchow and Mac Ginty showcase the Everyday Peace Indicators (EPI) project, which uses “participatory research methods…to identify indicators that local communities use to assess changes in peace and conflict in their locality.” They argue that “bottom-up” indicators like the EPI should augment traditional “top-down” indicators of peace, conflict and development.
How do they do this?
The authors compare the EPI methodology with four widely used top-down indicators of peace, conflict and development: the Human Development Index (HDI), the Global Peace Index (GPI), the Uppsala Conflict Data Program’s Georeferenced Event Dataset (UCDP GED) and the Armed Conflict Location and Event Data Project (ACLED). The authors note the advantages of each: country-level indexes (HDI and GPI) help measure patterns of peace and conflict across countries and over time, while event data (GED and ACLED) help explore patterns at a sub-national level.
However, the authors argue that these top-down indicators tell us little about the implications of conflict events for particular communities, or how people in those communities perceive peace, conflict and security. The authors argue that “bottom-up” participatory research processes like EPI help communities communicate to researchers, policymakers and implementers what these measures of peace and conflict actually mean on the ground. In short, “This approach is driven by the premise that communities affected by war know best what peace means to them and therefore should be the primary and first source of information on peacebuilding effectiveness.”
The EPI project provides a systematic community perspective on peace and conflict by crowdsourcing indicators: representative focus groups collect long lists of indicators that citizens use every day to measure peacefulness in their own lives. For example, indicators developed in pilot projects in South Africa, Uganda, South Sudan and Zimbabwe included, among many others [all sic]:
- “Being able to go to the shops without fear of being attacked by anybody”
- “The children are in school without disruption by the rebels”
- “Army not intervening in police duties”
- “Less bad people/gangsters in the street”
To refine and rank these initial lists of indicators, community members vote to determine the most representative indicators. Finally, community members are surveyed periodically to determine if community perceptions of peace and security have changed.
How seriously should I take this?
The EPI approach is methodologically rigorous. Community-driven indicators are both internally valid and precise. Because local communities generate these indicators themselves, they accurately reflect perceptions of peace and conflict within those societies. In addition, the EPI project identifies repeated indicators within communities and develops categories of indicators across communities. This ensures that indicators are comparable across different contexts and are not just capturing local idiosyncrasies.
In short, the EPI process can develop precise, meaningful indicators of peace and conflict within communities and facilitate comparison between them. Further, these indicators do an excellent job of filling the gap between measuring simple project outputs and complex societal impacts. DRG implementers should seriously consider “bottom-up” indicators as a supplement to traditional “top-down” indicators.
So, what now? Takeaways for Programming
While traditional “top-down” indicators are likely here to stay, DRG implementers can supplement these systems with community-driven indicators and other methods of inclusive project design and measurement. These methods can be time- and labor-intensive, but there are several options to try:
- Simple: Develop traditional “top-down” indicators in consultation with local stakeholders. They may not be as responsive as those generated using the EPI methodology, but traditional quantitative indicators can be improved if they are validated with the people implementers hope to assist.
- Intermediate 1: Find opportunities to work with beneficiaries early in the project lifecycle, such as baseline assessments or kickoff events, to define what success looks like to them. Commit to measuring these community-driven indicators periodically.
- Intermediate 2: Use participatory monitoring and evaluation methods such as Most Significant Change and Outcome Mapping. While these systems don’t use the EPI methodology specifically, they are both predicated on empowering key stakeholders to define success.
- Advanced: Incorporate beneficiaries in data analysis. Just as local stakeholders often define success differently than donors and implementers, they may also interpret data differently. Including them in the analysis process might yield an entirely different understanding of a project’s results!
Homework: For Further Reading
The push for “evidence-based” policymaking and program design in governance has led to a proliferation of statistical indicators, toolkits, and scorecards, to the detriment of marginalized communities.