Participatory Data Analysis

By Corey Newhouse (Public Profit) and Kylie Hutchinson (Community Solutions)

Earlier this year we held our first webinar on Participatory Data Analysis for Evaluators. For a field that is growing by leaps and bounds and continually innovating, there's surprisingly little written about this topic. Also known as data parties, sense-making sessions, results briefings, and data-driven reviews, participatory data analysis plays an important role in promoting evaluation use. In this post we'll briefly describe what participatory data analysis is and offer several reasons why you should seriously consider it for your practice.

What is it?

Participatory data analysis can take many forms, but essentially it's an opportunity for you, the evaluator, to consult with key stakeholders about the preliminary data and analyses. It's a chance to see how stakeholders understand and interpret the data collected by the evaluation, and possibly to learn important additional contextual information.

Why is it helpful?

  1. People support what they helped create.

This quote from Richard Beckhard[1] says it all. When stakeholders play an active role in interpreting the findings, we believe they are more likely to develop ownership of the evaluation and to implement its recommendations later on. A 2009 survey[2] by Dreolin Fleischer and Tina Christie of Claremont Graduate University found that 86% of American Evaluation Association members believed that involving stakeholders in the evaluation process was an influential or extremely influential factor in greater utilization. Who can say no to that?

  2. Every evaluator needs a reality check.

Participatory data analysis not only ensures that we, as evaluators, arrive at the correct conclusions, but also that our recommendations hit the mark. We're (usually) not program staff, and we lack the in-depth, day-to-day familiarity with a program that our clients have. We need their input to identify which findings are the most meaningful and to suggest recommendations we might never have thought of on our own. Key stakeholders can also suggest appropriate wording for these recommendations, and in the process we can ensure there is consensus on the conclusions.

  3. Ensure the evaluation reaches key stakeholders.

Data parties are also a great opportunity to get stakeholder input on which forms of reporting are best for which stakeholders. They can tell us not only who should get the report and by when (to meet key decision-making cycles), but also who actually has the power to act. In this fast-paced, mobile age, evaluators need all the help they can get in figuring out how to reach their target audience.

  4. Look Ma, I’m capacity-building!

A wonderful thing happens during participatory data analysis: somewhere along the way, we end up building evaluation capacity in a hands-on, directly relevant way for our stakeholders.

  5. Avoid “gotcha” surprises for your client.

Sometimes evaluations surface less-than-great findings for the client. Data parties are a good opportunity to share negative findings early, rather than saving the bad news for the final report, which can feel like a set-up for your client.

We have found data parties to be a great way to engage our clients in the sense-making process, which in turn yields more actionable recommendations and builds clients’ support for the evaluation as a whole. Data parties can be large affairs, with lots of people spending hours poring over results. They can also be briefer, smaller sessions with just a few stakeholders and a few data points. The most important thing is to get the (data) party started!

Not sure where to begin? Check out Public Profit’s free guide, Dabbling in the Data. It has step-by-step instructions for 15 team-based data analysis activities that are just right for your next data party. Or download Kylie’s one-page cheat sheet on Data Parties. Party on!


[1] Beckhard, R. (1969). Organization development: Strategies and models. Reading, MA: Addison-Wesley. p. 114.

[2] Fleischer, D.N., & Christie, C.A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation Association members. American Journal of Evaluation, 30(2): 158-175.