Whose Extension Counts?

Curator’s note: Well, it has been a very long time with no posts here. To remedy that situation, we’re getting things going again by cross-posting a post that appeared in the aea365 blog of the American Evaluation Association, written by yours truly. Stay tuned for more blog posts on the theme of ‘credible evidence in Extension,’ which is also the topic of a forthcoming set of papers being edited by Ben Silliman (North Carolina State Extension) and Scott Cummings (Texas A&M AgriLife Extension Service).

————————————————————————

Whose Extension Counts?

Debates about what counts as credible evidence in program evaluation and applied social science research have been ongoing for at least 20 years. Those debates are well summarized in a helpful book on exactly this question edited by Stewart Donaldson, Tina Christie, and Mel Mark. In particular, the book provides a balance of viewpoints from both proponents and detractors of the position that experimental approaches are the “gold standard,” the best route to credible evidence.

Even long before that (hundreds of years before), questions of how to generate valid knowledge of the world around us—and specifically the role of experimentation in that process—animated the scientific and aristocratic classes alike. In Leviathan and the Air-Pump, Steven Shapin and Simon Schaffer examined the debate between Robert Boyle and Thomas Hobbes over Boyle’s air-pump experiments in the 1660s, exploring what counted as acceptable methods of knowledge production and the societal factors bound up with different knowledge systems.

The point of this post is this: These seemingly esoteric methodological debates about credible evidence are in fact fundamentally important political questions about life. This point is summed up by Bill Trochim and Michael Scriven, who said, respectively:

“The gold standard debate is one of the most important controversies in contemporary evaluation and applied social sciences. It’s at the heart of how we go about trying to understand the world around us. It is integrally related to what we think science is and how it relates to practice. There is a lot at stake.” (W. Trochim, unpublished speech transcript, September 10, 2007)

“This issue is not a mere academic dispute, and should be treated as one involving the welfare of very many people, not just the egos of a few.” (Scriven, 2008, p. 24)

In other words, epistemological politics (the ways in which power and privilege position some ways of knowing as ‘better’ and hierarchically ‘above’ other ways of knowing) are inextricably linked with ontological politics (whose reality counts, and how some reals are made to be more or less real, in practice, through various tacit or explicit power plays).

In the context of Cooperative Extension, and more specifically in the search for credible evidence about Extension, this nexus of epistemological and ontological politics raises the questions:

What is Extension?

For some (according to my research described here), it is a vehicle for the dissemination of scientific information. For others, it is a site for grassroots knowledge sharing and deliberative democracy.

And, given that there appear to be (at least) a plurality of metanarratives about what Extension is, or (perhaps) an actual plurality of Extensions, the question then follows (playing on Robert Chambers’ influential title, Whose Reality Counts?):

Whose Extension counts?

Webinar with Jean King: “Evaluation Capacity Building Through the Years”

FREE WEBINAR on ECB!
The Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group (TIG) of the American Evaluation Association (AEA) is pleased to invite you to the first in a series of ECB webinars.

Jean King is a thought leader in ECB and has also contributed greatly to evaluation in Extension over her long and fruitful career at the University of Minnesota. In the webinar, she will reflect on how ECB has developed through the years. Don’t miss it!

Wednesday, November 9, from 1:00 to 2:00 pm Eastern via WebEx.

Click here to register.

Participatory Data Analysis

By Corey Newhouse (Public Profit) and Kylie Hutchinson (Community Solutions)

Earlier this year we held our first webinar on Participatory Data Analysis for Evaluators. In the field of evaluation, which is growing by leaps and bounds and continually innovating, there’s surprisingly little written about this topic. Also known as data parties, sense-making sessions, results briefings, and data-driven reviews, participatory data analysis plays an important role in promoting evaluation use. In this post we’ll briefly describe what participatory data analysis is and several reasons why you should seriously consider it for your practice.

What is it?

Participatory data analysis can take many forms, but essentially it’s an opportunity for you the evaluator to consult with key stakeholders regarding the preliminary data and analyses. It’s an opportunity to see how stakeholders understand and interpret the data collected by the evaluation, and possibly an opportunity to learn important additional contextual information.

Why is it helpful?

  1. People support what they helped create.

This quote by Richard Beckhard[1] says it all. When stakeholders play an active role in interpreting the findings, we believe they are more likely to develop ownership of the evaluation and to implement its recommendations later on. A 2009 survey[2] by Dreolin Fleischer and Tina Christie of Claremont Graduate University found that 86% of American Evaluation Association members believed that involving stakeholders in the evaluation process was an influential or extremely influential factor in greater utilization. Who can say no to that?

  2. Every evaluator needs a reality check.

Participatory data analysis helps ensure not only that we, as evaluators, arrive at sound conclusions, but also that our recommendations hit the mark. We’re (usually) not program staff, and we lack their in-depth, day-to-day familiarity with the program. We need their input to tell us which findings are the most meaningful and to suggest recommendations we might never have thought of on our own. Key stakeholders can also suggest appropriate wording for these recommendations, and in the process we can ensure there is consensus on the conclusions.

  3. Ensure the evaluation reaches key stakeholders.

Data parties are also a great opportunity to get input on which forms of reporting work best for which stakeholders. Stakeholders can tell us not only who should get the report and by when (to meet key decision-making cycles), but also who actually has the power to act. In this fast-paced, mobile age, evaluators need as much help as they can get figuring out how to reach their target audience.

  4. Look Ma, I’m capacity-building!

A wonderful thing happens during participatory data analysis. Somewhere along the way, we end up building evaluation capacity for our stakeholders in a hands-on, directly relevant way.

  5. Avoid “gotcha” surprises for your client.

Sometimes evaluations surface less-than-great findings for the client. Data parties are a good opportunity to share negative findings early, rather than saving the bad news for the final report – which can feel like a set-up for your client.

We have found data parties to be a great way to engage our clients in the sense-making process, which in turn yields more actionable recommendations and builds clients’ support for the evaluation as a whole. Data parties can be large affairs, with lots of people spending hours and hours poring over results. They can also be briefer, smaller sessions with just a few stakeholders and a few data points. The most important thing is to get the (data) party started!

Not sure where to begin? Check out Public Profit’s free guide, Dabbling in the Data. It has step-by-step instructions for 15 team-based data analysis activities that are just right for your next data party. Or download Kylie’s one-page cheat sheet on Data Parties. Party on!

 

[1] Beckhard, R. (1969). Organization development: Strategies and models. Reading, MA: Addison-Wesley, p. 114.

[2] Fleischer, D. N., & Christie, C. A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation Association members. American Journal of Evaluation, 30(2), 158–175.