Survey Design: Refining Evaluation Questions

Monica Hargraves
Manager of Evaluation for Extension and Outreach
Cornell Office for Research on Evaluation

The Evaluation Purpose Statement and especially the Evaluation Questions described in the previous post are absolutely essential for guiding an evaluation.  Ironically, they are often somewhat unfamiliar to educators – but skipping them invites exactly the kind of data surprises and disappointments that can be so frustrating.  This week’s post focuses on how to refine the Evaluation Questions in order to really keep your evaluation on target.

An important first step is to get attuned to how much of a difference wording makes.  Using the example from last week, consider the following variation on EQ2, which is about the 2012 Master Forest Owners Workshop:

EQ2: How well did the workshop prepare the volunteers for this role (of extending knowledge gained at the workshop in their communities)?

EQ2a: How well-prepared were the workshop participants for this role?

The two versions sound fairly similar, but they have quite different implications for your evaluation.  The first version is essentially asking, “How effective was the workshop for this group?”  If self-reports by participants are sufficient for your needs, then a survey soliciting their assessments of how well the workshop prepared them would be appropriate.  However, if you needed more objective evidence, perhaps to satisfy funder needs, you would have to compare pre- and post-workshop measures of preparedness, and have some kind of comparison or control group.  Favorable results would allow you to attribute their preparedness to the workshop.

Version EQ2a simply asks whether those who completed the training are well-prepared, regardless of their incoming knowledge and skills or whatever other training they might be doing.  This would require only post-tests of some kind.  Favorable results would allow you to claim that graduates of the 2012 workshop were well-prepared.

Both versions of EQ2 are legitimate evaluation questions.  The point is that it is important to word the EQ in a way that matches the purpose of your evaluation.  As the example shows, phrasing makes a big difference for what kind of evaluation you will end up doing and what kind of evaluation instrument you will need.

Once you’ve refined your purpose and the overall phrasing of the EQ, the next task is to get really precise about the “constructs” in your evaluation questions.  This is a fancy way of saying that you need to be very clear about what you are interested in.  This seems obvious, of course, but fuzziness here will make for fuzzier evaluations and less useful data.

In any evaluation question there is at least one “construct” — an idea that you put into words, that you are going to try to measure. Making that idea precise is important, and has huge implications for your evaluation instrument.  Let’s look at EQ2a again:

EQ2a: How well-prepared were the workshop participants for this role (of extending knowledge gained at the workshop in their communities)?

What does “well-prepared” mean, exactly? What would it look like if you saw it (or noticed its absence)? Does it refer to whether the volunteers come prepared with appropriate handouts, pens, and pencils when doing community outreach? Does it refer to their basic teaching skills and ability to work in non-formal community settings? Does it refer to a level of forestry knowledge that allows them to answer people’s questions? … The list of possible interpretations could be quite long.  What is it that YOU (and your stakeholders) have in mind?

Here’s another construct to be clarified: What is “extending knowledge gained at the workshop in their communities”? If a volunteer wrote an article on forest management for a local community paper, would that count? (If so, then your evaluation instrument would have to assess the accuracy or comprehensiveness of what was written.) Maybe you want to focus on direct in-person outreach efforts.  Even that would need to be refined: would a forestry conversation with a neighbor over a backyard fence count? Or would you restrict attention (and your measurement) to group workshops that a newly trained volunteer holds?

Once you’ve decided what exactly you are interested in, and you have a sense of what it might look like in practice, you are ready to design your evaluation instrument.  We will turn to that step in next week’s post.

Resource: For more on evaluation purpose and questions, see sections 3.02 and 3.03 of the Guide to the Systems Evaluation Protocol (a free PDF version is available from the Cornell Office for Research on Evaluation at Guide.)  Appendices XX – XXIII contain worksheets that can help with EQ development.

This article (Survey Design: Refining Evaluation Questions) was originally published Friday, October 12, 2012 on the Evaluation Community of Practice blog, a part of eXtension.   This work is licensed under a Creative Commons Attribution 3.0 Unported License.