Participatory Data Analysis

By Corey Newhouse (Public Profit) and Kylie Hutchinson (Community Solutions)

Earlier this year we held our first webinar on Participatory Data Analysis for Evaluators. For a field that is growing by leaps and bounds and continually innovating, evaluation has surprisingly little written about this topic. Also known as data parties, sense-making sessions, results briefings, and data-driven reviews, participatory data analysis plays an important role in promoting evaluation use. In this post we’ll briefly describe what participatory data analysis is and offer several reasons why you should consider it seriously for your practice.

What is it?

Participatory data analysis can take many forms, but essentially it’s an opportunity for you, the evaluator, to consult with key stakeholders about the preliminary data and analyses. It’s a chance to see how stakeholders understand and interpret the data collected by the evaluation, and possibly to learn important additional contextual information.

Why is it helpful?

  1. People support what they helped create.

This quote by Richard Beckhard[1] says it all. When stakeholders play an active role in interpreting the findings, we believe they are more likely to develop ownership of the evaluation and implement the recommendations later on. A 2009 survey[2] by Dreolin Fleischer and Tina Christie of Claremont Graduate University found that 86% of American Evaluation Association members believed that the involvement of stakeholders in the evaluation process was an influential or extremely influential factor in greater utilization. Who can say no to that?

  2. Every evaluator needs a reality check.

Participatory data analysis helps ensure not only that we, as evaluators, arrive at the correct conclusions, but also that our recommendations hit the mark. We’re (usually) not program staff, and we lack the in-depth, day-to-day familiarity with a program that our clients have. We need their input to indicate which findings are the most meaningful and to suggest recommendations we might never have thought of on our own. Key stakeholders can also suggest appropriate wording for these recommendations, and in the process we can ensure there is consensus on the conclusions.

  3. Ensure the evaluation will reach key stakeholders

Data parties are also a great opportunity to get stakeholder input on which forms of reporting work best for which stakeholders. Stakeholders can tell us not only who should get the report and by when (to meet key decision-making cycles), but also who actually has the power to act. In this fast-paced, mobile age, evaluators need all the help they can get in figuring out how to reach their target audiences.

  4. Look Ma, I’m capacity-building!

A wonderful thing happens during participatory data analysis: somewhere along the way, we build evaluation capacity in a hands-on, directly relevant way for our stakeholders.

  5. Avoid “gotcha” surprises for your client

Sometimes evaluations surface less-than-great findings for the client. Data parties are a good opportunity to share negative findings early, rather than saving the bad news for the final report – which can feel like a set-up for your client.

We have found data parties to be a great way to engage our clients in the sense-making process, which in turn yields more actionable recommendations and builds clients’ support for the evaluation as a whole. Data parties can be large affairs, with lots of people spending hours and hours poring over results. They can also be briefer, smaller sessions with just a few stakeholders and a few data points. The most important thing is to get the (data) party started!

Not sure where to begin? Check out Public Profit’s free guide, Dabbling in the Data. It has step-by-step instructions for 15 team-based data analysis activities that are just right for your next data party. Or download Kylie’s one-page cheat sheet on Data Parties. Party on!


[1] Beckhard, R. (1969). Organization development: Strategies and models. Reading, MA: Addison-Wesley. p. 114.

[2] Fleischer, D.N., & Christie, C.A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation Association members. American Journal of Evaluation, 30(2), 158-175.


Worthy and Effective Public Value Narratives

By Scott Chazdon, University of Minnesota Extension

In 2014, my evaluation colleagues and I began gathering stories about the impact Extension programs have on individuals and communities. Based initially on the Most Significant Change method (Dart & Davies, 2003), the project aimed to promote ongoing dialogue about Extension programming and to help staff and stakeholders explore the changes that occur because of it.

Methodological Underpinnings

The Most Significant Change methodology is a participatory, story-based approach to evaluating Extension’s impact on participants and the public. We piloted the project across programming in the central region of Minnesota—15 counties that include the Twin Cities metropolitan area and surrounding suburban and rural counties.

The project uses a dialogical process: each submitted story is reviewed against a rubric. The project also drew on Brinkerhoff’s Success Case Method (2002) to show evidence of program impact through rich, verifiable descriptions. This method does not replace other evaluation efforts; rather, stories serve as powerful communication tools, especially when combined with other methods.

The Rubric

As a result of the central region project, we strengthened the rubric to ensure that public value is deeply ingrained in impact narratives. The rubric is based on our learning from the project, as well as on my own work on Evaluation for Public Value (Chazdon & Paine, 2014).

The elements of the new rubric are:

  1. Story demonstrates behavior changes that resulted from Extension programming. A strong narrative must incorporate evidence that the program has achieved its intended behavioral outcomes.
  2. Story demonstrates the trust and respect Extension has established with key audiences. Extension has built trust through long-standing relationships with key stakeholders. These aspects of programs are often overlooked and need to be incorporated into impact narratives.
  3. Story demonstrates Extension programs, staff, and volunteers meeting the needs of underrepresented populations. Part of the public value of a program is determining the audiences that most need the programming. These should be audiences that cannot otherwise receive the content through private sources.
  4. Story demonstrates Extension adapting to meet the changing needs of its key audiences. Public value also resides in staying current with traditional Extension audiences (farmers, youth, and conservation professionals) by addressing their changing needs. This can include changing content in response to new economic, environmental, or political contexts.
  5. Story demonstrates ways that Extension leverages organizations or partnerships to expand the delivery of research and education beyond initial program participants. Public value resides in the way Extension leverages its partnerships and collaborations to reach beyond its direct participants.
  6. Story demonstrates ways that Extension programming led to positive social, economic, environmental, cultural, health, or civic effects for public-serving organizations or communities. Public value resides in the “so what?”—the positive things that happen in families, organizations, and communities that can be attributed, at least in part, to Extension education. These types of impacts are challenging to quantify, but systematic qualitative methods, such as Ripple Effects Mapping, can be very useful for documenting them.

These six aspects of public value are easy to teach and provide a useful framework for thinking about the public value of Extension education.

Moving Forward

As evaluators move into public value narratives, we must tread carefully with the communications staff in our organizations. Typically, they write impact stories, and they do it well! But they may not employ evaluative frameworks, such as this rubric, in doing so. To distinguish our work, we describe it as public value “narratives” rather than “stories.”

Moving forward, we continue to develop tools and training resources to support the writing of impact narratives. We have developed the following quick guide to composing a narrative:

  1. What was the presenting issue?
  2. Who was the target audience, and why?
  3. Why Extension? Credible information, research-based, trusted resource?
  4. What changes in behavior or action occurred as a result of the program? Include evaluation evidence.
  5. What were the broader impacts? Evidence of spillover, leveraging, ripples, return on investment, benefit-cost analysis?

We are also working to train new Extension educators in this narrative writing process. We hope to hold a public value narrative contest as part of our annual professional development conference and to use narratives in our reporting to the national land grant impacts database.

I am happy to share more information on our process and can be reached at schazdon@umn.edu.

Key References:

Brinkerhoff, R. O. (2002). The success case method. San Francisco: Berrett-Koehler.

Chazdon, S.A. & Paine, N. (2014). Evaluating for Public Value: Clarifying the Relationship Between Public Value and Program Evaluation. Journal of Human Sciences and Extension, 2(2), 100-119. Retrieved from http://media.wix.com/ugd/c8fe6e_8b2458db408640e580cfbeb5f8c339ca.pdf.

Dart, J. & Davies, R. (2003). A dialogical, story-based evaluation tool: The Most Significant Change Technique. American Journal of Evaluation, 24(2), 137-155.

Franz, N. (2013). Improving Extension programs: Putting public value stories and statements to work. Journal of Extension, 51(3). Retrieved from http://www.joe.org/joe/2013june/tt1.php

Kalambokidis, L. (2011). Spreading the word about Extension’s public value. Journal of Extension, 49(2). Retrieved from http://www.joe.org/joe/2011april/a1.php.

Knowledge Sharing Toolkit. (2014). Most Significant Change. Retrieved from http://www.kstoolkit.org/Most+Significant+Change