IRI’s Jeff Lilley posts on CIPE’s Development Blog

How Do You Measure Democracy?
CIPE Development Blog
By Jeff Lilley, Director, Office of Monitoring and Evaluation (International Republican Institute)

So, how do you measure democracy work? There is no balance sheet for political parties that records how healthy they are. We can't give inoculations against authoritarian ruling schemes. In the absence of such measuring sticks, most of us engaged in democracy and governance (D&G) work around the globe have emphasized anecdotal reporting and numbers: people trained, polling results, election outcomes.

But over the past few years there has been increased attention to the need for a more rigorous assessment of the impact of D&G programs. By impact we mean the behavioral or attitudinal change connected to programming: if IRI is training parties to develop issue-oriented platforms, do more voters vote for parties based on their platforms? That’s impact.

In 2008, a U.S. Agency for International Development (USAID)-funded study entitled Improving Democracy Assistance recommended that the agency dedicate more resources to measuring the impact of D&G programs using randomized studies and case studies. After years of emphasis on numbers, it looks like Congress may also be turning its attention to more substantive evaluations of D&G programs.

IRI is testing new methods in the field as we speak. In Iraq, IRI's team working with political parties and their candidates devised a way to track and measure campaigning. In advance of the recent 2010 parliamentary elections, IRI's political team in Iraq broke its campaign training into 20 segments, which it then weighted by importance. It will then query candidates who attended the training to find out which segments (such as door-to-door canvassing or public speaking) they found most helpful. IRI will learn whether the segments it deemed most important were, in fact, similarly valued by Iraqi candidates, and what factors, if any, affected how candidates applied them on the campaign trail.
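The comparison described above can be sketched as a simple ranking exercise. The segment names, weights, and ratings below are invented for illustration, not IRI's actual data:

```python
# Hypothetical sketch: comparing trainer-assigned segment weights
# with candidates' average helpfulness ratings from post-training surveys.

trainer_weights = {"door-to-door": 0.20, "public speaking": 0.15,
                   "message development": 0.25, "media relations": 0.10}

candidate_ratings = {"door-to-door": 4.6, "public speaking": 3.8,
                     "message development": 4.1, "media relations": 2.9}

# Rank the segments under each scheme, most valued first
trainer_rank = sorted(trainer_weights, key=trainer_weights.get, reverse=True)
candidate_rank = sorted(candidate_ratings, key=candidate_ratings.get, reverse=True)

# Share of rank positions on which the two orderings agree
agreement = sum(a == b for a, b in zip(trainer_rank, candidate_rank)) / len(trainer_rank)
print("Trainer ranking:  ", trainer_rank)
print("Candidate ranking:", candidate_rank)
print(f"Positional agreement: {agreement:.0%}")
```

A real analysis would use a proper rank-correlation statistic and many more respondents, but the idea is the same: test whether the segments the trainers weighted most heavily are the ones candidates actually found useful.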

But even then IRI can’t say for sure that it was IRI training that produced the behavioral change. As they say in monitoring and evaluation speak, there are exogenous factors that might have affected the candidate. For example, what if the candidate attended another international nongovernmental organization’s campaign training? How can you isolate the effect of IRI?

Enter impact evaluation: a form of evaluation that assesses the change that can be attributed to a particular activity or intervention. In other words, it isolates the effect of IRI's work from exogenous factors. Ideally, it does so by comparing randomly selected groups, one that receives the intervention and one that does not. The use of control groups gives impact evaluation its distinguishing characteristic, the counterfactual: "what would the situation have been if the intervention had not taken place?"
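The logic of random assignment can be illustrated with a toy simulation. All numbers here are invented; the point is only to show why the control group stands in for the counterfactual:

```python
import random

random.seed(42)  # reproducible toy example

# 1,000 hypothetical candidates, randomly split into a trained
# (treatment) group and an untrained (control) group.
candidates = list(range(1000))
random.shuffle(candidates)
treatment, control = candidates[:500], candidates[500:]

def outcome(trained):
    """Campaign performance score: exogenous factors plus any training effect."""
    exogenous = random.gauss(50, 10)          # experience, other NGOs' trainings, etc.
    return exogenous + (5 if trained else 0)  # assume the training adds 5 points

treated_mean = sum(outcome(True) for _ in treatment) / len(treatment)
control_mean = sum(outcome(False) for _ in control) / len(control)

# Because assignment was random, exogenous factors average out across the two
# groups, so the difference in means estimates the training effect alone.
print(f"Estimated training effect: {treated_mean - control_mean:.1f}")
```

With enough participants, the estimated effect lands close to the true effect we built into the simulation, even though each individual's score is dominated by factors the trainers don't control.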

If you think that sounds like a medical trial, you are right. The catch is that D&G programming doesn't always lend itself to randomization. Imagine telling a political party in country X that IRI can't train it because it was selected for the control group.

In Colombia, however, IRI identified a good governance program that meets the criteria for an impact evaluation: it has a sample size large enough to detect impact, and it lends itself to random selection of participants. After securing USAID funding for this impact evaluation initiative, IRI ran a competitive bidding process to select an impact evaluator (a key criterion being that the evaluation be conducted independently). IRI staff then worked closely with the evaluator to design the evaluation, which will measure the impact on citizens' perceptions of two new institutions: an Office of Transparency in Cucuta that will facilitate public access to city government documents, and a One-Stop Shop in Cartagena that will provide multiple municipal services to citizens in one place. Baseline surveys, against which final results will be compared, began in the two cities on March 1.

So, what results is IRI looking for? In the case of these two services, it's increased citizen confidence in democracy as a system of government. That will be determined by using surveys to measure what we are calling the "feel-good" effect and the "good-service" effect in each city. The feel-good effect will measure the extent to which citizens' confidence in democracy has increased as a result of the establishment of each office, whether or not they have used the new facilities. Since it may be difficult to detect with confidence the impact of one intervention over one year in a city of hundreds of thousands of residents, the good-service effect will measure the extent to which citizens who actually use the services experience an increase in confidence in democracy.
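The two survey comparisons can be sketched in a few lines. All figures below are hypothetical; the real numbers will come from the baseline and follow-up surveys:

```python
# Hypothetical survey results: share of respondents expressing confidence
# in democracy (0-1 scale). Invented figures, for illustration only.

baseline_all   = 0.48   # citywide baseline survey
followup_all   = 0.51   # citywide follow-up survey, a year later
baseline_users = 0.48   # respondents who later used the new office
followup_users = 0.59   # those same respondents, after using it

# Citywide shift, whether or not respondents used the new facility
feel_good_effect = followup_all - baseline_all

# Shift among actual users of the service, easier to detect in a large city
good_service_effect = followup_users - baseline_users

print(f"Feel-good effect:    {feel_good_effect:+.2f}")
print(f"Good-service effect: {good_service_effect:+.2f}")
```

The contrast motivates measuring both: a citywide effect may be too small to detect against background noise, while the effect among users, if the services work, should be larger and more clearly tied to the intervention.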

Thus far, IRI has learned much from this ground-breaking endeavor to isolate the effect of its democracy assistance programming: you need to identify the right kind of program; the work is expensive and highly technical; and you must plan far in advance. But the real learning is ahead. The Office of Transparency and the One-Stop Shop are just getting underway. They will operate for a year or so before we measure their effects. Stay tuned.

This post is part of a series of guest posts by the International Republican Institute (IRI).
