Thursday March 11th 2010

Why Evaluation Confuses People


I usually begin my logic model workshops with a quick poll of where participants are on what I call the “yuck to yippie” continuum. For most people, the mere mention of the words “logic model” or “outcomes” or even “evaluation” elicits a gut reaction that falls somewhere on that continuum. (For the record, I have found that very few people occupy the “yippie” side of the continuum. Most are somewhere between “yuck” and indifference). Usually, the negative reactions to evaluation stem from a fundamental misunderstanding of what should be expected from small-scale evaluations of single-site programs (which is where most of us operate).

The most common mistake made by nonprofits (and the foundations that support them) is the expectation that a nonprofit should conduct evaluation research that proves that their program or service actually produced the desired tangible outcomes. This sounds reasonable, doesn’t it? After all, if you receive money to offer a program or service it is not unreasonable to expect some evidence that the program is doing what it is supposed to do.

The fundamental flaw in this line of thinking is that it puts the em-PHAS-is on the wrong syl-LAB-le. Let me illustrate with a real example taken from a program that provides bereavement support for young people:

“Research has shown that unresolved grief can play a significant role in poor school performance, truancy, alcohol and substance abuse, depression, anxiety, an increased risk of suicide, and/or the ability to form significant relationships.”

If you were asked to evaluate the success of this program, what would you measure? Given the tendency to assume that evaluation means conducting research to prove that outcomes were attained, many people would want to know whether the youth who had gone through the program actually were less likely to show poor school performance, truancy, etc. But don’t we already know (i.e., hasn’t it already been proven) that resolving grief decreases the likelihood that these negative outcomes will occur? Isn’t that why we have chosen to offer this type of intervention in the first place?

Given this, the key evaluation questions are these: 1) what does resolved grief look like? and 2) how well are we doing it? In other words, the research tells us what matters and what works. Our evaluation emphasis should be on the correct implementation of the intervention — reaching the right people, at the right time, with the right type and level of support.

What this requires is that we abandon the expectation that we become better producers of evaluation research and instead focus on becoming better consumers of the evaluation research that already exists.

It is much less confusing this way.