AEA2014: Evaluation Ideas that are Ready to Retire

By Teresa McCoy, Assistant Director of Evaluation and Assessment, University of Maryland Extension

Did you ever have a pair of old shoes or jeans that you just could not bear to part with—no matter how tattered or worn out? I know that I have and bet you have, too. At the 2014 American Evaluation Association annual conference in Denver, I attended a session given by Michael Quinn Patton titled “Lack-of-vision evaluation ideas that should be retired to realize visionary evaluation for a sustainable, equitable future,” in which he presented his suggestions for evaluation ideas that we should discard (like those old shoes or jeans).

If you have heard Michael speak, you know he generates thought-provoking engagement with his audience, and this session title certainly sparked my interest. I wasn’t disappointed. In the new edition of his book, Qualitative Research & Evaluation Methods: Integrating Theory and Practice (4th edition, due out in November and coming in at over 800 pages), he discusses 10 outdated evaluation ideas and approaches that he thinks should be retired, including such classics as anecdotal evidence, the gold standard, best practices, and site visits. You will have to buy the book to see his full explanations of why these practices are outdated.

After presenting his ideas, Michael asked the audience for further nominations. There were lots of suggestions! I had my own: cost-benefit. I think that is a term borrowed from the business world that does not translate to the not-for-profit or government sectors. For example, how do I calculate the costs of not knowing how to prepare healthy meals for children versus the benefits? Is there a way we can calculate the costs of a bridge, such as the Chesapeake Bay Bridge in Maryland, versus the benefits? We know the costs to build the Bay Bridge and to maintain it each year, but what about the benefits? When I drive over the Bay Bridge, I am amazed at the beauty of the Chesapeake Bay. I benefit by being able to see my Extension colleagues on the Shore within one to two hours. However, what is that benefit actually worth?

With costs and benefits, the question has to be raised: costs and benefits to whom? The State of Maryland benefits from the tolls I pay each time I cross the bridge. Yet the State also incurs costs in highway and bridge maintenance, air pollution, and disruptions to the ecology of the Bay because I drive across to the Eastern Shore.

Another nomination from an evaluation expert in the room was the idea of logic models. This person suggested we move away from the term logic model to that of program map. I agree. The best way to clear a room of Extension people is to say, “I’m here from the Evaluation Department to teach you about logic models.” I often start trainings with that line and always get a laugh. In my practice, I have moved away from teaching logic models to teaching program theory and program maps. I advise people to forget about the logic model form and use whatever tool works best for them to figure out what outcomes their program is designed to accomplish. When I showed a group an image of a logic model drawn as a tree (roots, trunk, limbs, leaves), a woman said, “OK, I understand that now. I wish someone would have told me this earlier.”

I would like to hear from other Extension evaluators what old ideas and approaches you think we should leave behind like those old shoes and jeans. Perhaps this discussion could help us move our practices and our profession ahead in the next few years.

4 Replies to “AEA2014: Evaluation Ideas that are Ready to Retire”

  1. Great question, Teresa! I love logic models but can appreciate that it is a rare logic model that can truly capture the complexity of many Extension programs. I’ve taken to talking to folks about articulating a theory of change for their program. A theory of change can be represented in many different ways (graphically, in writing, etc.), but it requires us to be clear about what sorts of change we are seeking in our program participants (behavior, knowledge, skills) and how we think that change will happen. A logic model is one way to articulate a theory of change, but there are many other ways, such as your tree example. We have made a lot of progress in the last decade helping educators make the connections between program goals/outcomes and program design using logic models, so I’d love to hear what others think about retiring them.

  2. I became acquainted with logic modeling in the late 1980s through evaluability assessment. We worked with a team of program implementers (primarily agents and specialists) over several days. We would start by developing a matrix of educational effects (i.e., knowledge, attitude, skill, and practice change) for various target audiences. We used lots of flip chart paper on the wall. After we completed the matrix, we would typically build out, to the right, the long-term effects and goals. Then we would ask, “What needs to happen for these educational effects to occur?” That would be the major program piece preceding the educational effects. We would repeat that question, working back to the left, until the model was complete. We never mentioned inputs, outputs, or outcomes, but they were there. Repeatedly, the groups I worked with said this was the most time they had ever spent talking about programming. And even though we ended up with a logic model, it became clear that the product was the process (and not the model). In this way, logic modeling was exciting and dynamic. I think the Wisconsin materials on logic modeling are terrific. I also think their template made logic modeling too easy for one person to fill out, missing the real value of the process.

  3. I agree with you, Teresa, that the cost-benefit/ROI evaluation frameworks so often seem forced, narrow, and artificial, especially in the non-profit context. Good riddance!
    As for logic models, like you and Sarah, I’d much prefer to see folks work (and work more flexibly) with program maps/theories of change. I do think, though, that logic models can continue to have a place in program evaluation by providing a simple framework to begin the articulation process, especially within dynamic and complex systems. I think of logic models the same way I think about open coding of qualitative data: starting with the obvious to get to the more complex. Logic models can be a really beneficial first step in getting the evaluation “lay of the land.” The trick is then to convince folks to take the evaluation skeleton a logic model provides and move into more complex territory by asking questions like: What’s missing? What are some qualities/inputs/outcomes/challenges/etc. that aren’t a part of this logic model simply because they don’t fit into this neat package? What would look different if I focused on guiding principles instead of outcomes? I suppose the other bugger is that convincing folks to do the work required to assemble what is essentially a worksheet would be a tough sell. But I do think we can continue to leverage logic models (and the work that goes into them) into much more comprehensive evaluation work.

  4. I don’t think I necessarily share the same passion for logic models that Sarah does, but like Mike I see where they can be a useful tool, particularly in the process one needs to go through to do them well. I was introduced to logic models after serving in the Army and saw the similarities with the reverse-planning process and the five-part operations order.

    My fear is that many people will see the logic model as some sort of mold they need to shape their problem into, and that, once completed, it is not revisited often enough to test our assumptions and adjust to the complexities of the real world. We make some really big assumptions in our models, even if they’re research based, in charting a predetermined course between outcomes at the various levels. “If A, then B, then C” contains implicit assumptions that may not be true: to what degree are our evaluations testing those assumptions versus testing the specific landmarks we presume will lead to something else?

    So, as a product, logic models may give us a false sense of confidence that we can achieve a particular outcome, and they can facilitate stagnant programming that doesn’t respond to or reflect our complex world. As a process, the logic model can be a useful tool to help us think about why we’re choosing to do what we do and what our intentions are, but that process needs to be continuously adjusted to reflect what emerges and what we learn in our work. If we let the logic model be a paper template, like so many performance evaluation forms, we’re lulled into thinking we’ve accomplished something by simply filling it out and connecting a bunch of dots.
