Survey Design: Refining Evaluation Questions

Monica Hargraves
Manager of Evaluation for Extension and Outreach
Cornell Office for Research on Evaluation
mjh51@cornell.edu

The Evaluation Purpose Statement and especially the Evaluation Questions described in the previous post are essential for guiding an evaluation.  Ironically, they are often somewhat unfamiliar to educators – but skipping them invites exactly the kind of data surprises and disappointments that can be so frustrating.  This week's post focuses on how to refine the Evaluation Questions in order to keep your evaluation on target.

An important first step is to get attuned to how much of a difference wording makes.  Using the example from last week, consider the following variation on EQ2, which is about the 2012 Master Forest Owners Workshop:

EQ2: How well did the workshop prepare the volunteers for this role (of extending knowledge gained at the workshop in their communities)?

EQ2a: How well-prepared were the workshop participants for this role?

The two versions sound fairly similar, but they have quite different implications for your evaluation.  The first version is essentially asking, "How effective was the workshop for this group?"  If self-reports by participants are sufficient for your needs, then a survey soliciting their assessments of how well the workshop prepared them would be appropriate.  However, if you needed more objective evidence, perhaps to satisfy funder requirements, you would have to compare pre- and post-workshop measures of preparedness and have some kind of comparison or control group.  Favorable results would allow you to attribute participants' preparedness to the workshop.

Version 2a simply asks whether those who completed the training are well-prepared, regardless of their incoming knowledge and skills or whatever other trainings they might be doing.  This would require only post-tests of some kind. Favorable results would allow you to claim that graduates of the 2012 workshop were well-prepared.

Both versions of EQ2 are legitimate evaluation questions.  The point is that it is important to word the EQ in a way that matches the purpose of your evaluation. As you can see above, phrasing makes a big difference for what kind of evaluation you will end up doing and what kind of evaluation instrument you will need.

Once you’ve refined your purpose and the overall phrasing of the EQ, the next task is to get really precise about the “constructs” in your evaluation questions.  This is a fancy way of saying that you need to be very clear about what you are interested in. This seems obvious, of course, but fuzziness here will make for a fuzzier evaluation and less useful data.

In any evaluation question there is at least one “construct” — an idea that you put into words, that you are going to try to measure. Making that idea precise is important, and has huge implications for your evaluation instrument.  Let’s look at EQ2a again:

EQ2a: How well-prepared were the workshop participants for this role (of extending knowledge gained at the workshop in their communities)?

What does “well-prepared” mean, exactly? What would it look like if you saw it (or noticed its absence)? Does it refer to whether the volunteers come prepared with appropriate handouts, pens and pencils when doing community outreach? Does it refer to their basic teaching skills and ability to work with people in non-formal community settings? Does it refer to a level of forestry knowledge that allows them to answer people’s questions? … The list of possible interpretations could be quite long.  What is it that YOU (and your stakeholders) have in mind?

Here’s another construct to be clarified: What counts as “extending knowledge gained at the workshop in their communities”? If a volunteer wrote an article on forest management for a local community paper, would that count? (If so, then your evaluation instrument would have to assess the accuracy or comprehensiveness of what was written.) Maybe you want to focus on direct in-person outreach efforts.  Even that would need to be refined: would a forestry conversation with a neighbor over a backyard fence count? Or would you restrict attention (and your measurement) to group workshops that a newly trained volunteer holds?

Once you’ve decided what exactly you are interested in, and you have a sense of what it might look like in practice, you are ready to design your evaluation instrument.  We will turn to that step in next week’s post.

Resource: For more on evaluation purpose and questions, see sections 3.02 and 3.03 of the Guide to the Systems Evaluation Protocol (a free PDF version is available from the Cornell Office for Research on Evaluation at Guide).  Appendices XX – XXIII contain worksheets that can help with EQ development.

This article (Survey Design: Refining Evaluation Questions) was originally published Friday, October 12, 2012 on the Evaluation Community of Practice blog, a part of eXtension.   This work is licensed under a Creative Commons Attribution 3.0 Unported License.

Tending to the Forest and the Trees in Survey Design

Michael W. Duttweiler
Assistant Director for Program Development and Accountability
Cornell Cooperative Extension
mwd1@cornell.edu

Monica Hargraves
Manager of Evaluation for Extension and Outreach
Cornell Office for Research on Evaluation
mjh51@cornell.edu

With so many helpful resources available for designing survey instruments, one might think that instrument design would be among the easiest parts of evaluation planning.  Yet even seasoned educators and evaluators are routinely surprised or disappointed by the data their carefully considered instruments yield.  There must be more to it.  What we present in this month’s set of four blog posts is a combination of steps we have found effective in sharpening data gathering.  There is nothing particularly novel in our approach; each of the steps will likely be familiar to you.  Rather, it is the disciplined application of the steps that we have found to improve the value of the information gathered.  These posts are meant to share our experience and invite your observations and additional suggestions.

Each post will describe one of four phases in the design and revision process.  In general terms, they are:

  1. Developing a Precise Evaluation Purpose Statement and Evaluation Questions
  2. Identifying and Refining Survey Questions
  3. Applying Golden Rules for Instrument Design
  4. Testing, Monitoring and Revising


With that introduction, we first address the critical roles and value of developing an evaluation purpose statement and associated evaluation questions. The essence of phase 1 is that you must know precisely what questions you are trying to answer before developing the questions you need to ask.

In our parlance, an Evaluation Purpose Statement is a one-paragraph description of your evaluation effort.  It describes what is and is not being evaluated and the goal or purposes of the evaluation.  It sets boundaries by including a description of the program elements and time frame being considered, which audiences are being addressed, and which goals or objectives are of most interest. Equally important, it identifies major elements of the program which are not being assessed.

Example: The purpose of this evaluation is to assess the effectiveness of the 2012 Master Forest Owners Workshop in supporting and prompting volunteers to extend their knowledge to other forest owners in their local communities.  A secondary purpose is to provide documentation and assessment information for use by persons considering replicating the model with other forest owner groups.  Considerations include program structure and processes, curricular choices, and short-term impact assessment.  Other means of supporting forest management volunteers such as our newsletter and quarterly conference calls will not be assessed.  

In a collaborative evaluation setting (aren’t most?), it is essential that all key partners agree to the purpose statement.  It is also important to ensure that “nice to know” purposes don’t creep in – the statement should convey your essential intent in conducting the evaluation. Having the agreed-upon Purpose Statement written down is very helpful later on for keeping the evaluation on target.

With the purpose statement in hand, you can move on to identifying evaluation questions.  The core Evaluation Questions (EQs) are the small number of essential questions that must be answered in order to meet your evaluation purposes. Continuing the Example above:

EQ1) To what extent did the forest management volunteers attending the 2012 Master Forest Owners Workshop extend knowledge gained at the workshop in their communities?

EQ2) How well did the workshop prepare the volunteers for this role? 

EQ3) What critical information will be needed by those looking to replicate the workshop?

The purpose statement and core evaluation questions are the springboards for the evaluation planning process, allowing you to consider who has the required information, how it might be collected, and what methods and instruments will be employed.  Both should be revisited and refined throughout the design process.

Our next post explores the importance of refining your initial evaluation questions.


This article (Tending to the Forest and the Trees in Survey Design) was originally published Friday, October 5, 2012 on the Evaluation Community of Practice blog, a part of eXtension.   This work is licensed under a Creative Commons Attribution 3.0 Unported License.