Flexible Systematic Approaches Build Evaluation Capacity for Program Staff

By Celeste Carmichael
Program Development and Accountability Specialist, Cornell Cooperative Extension Administration

“Systematic approaches with flexibility built in to meet local needs”—that is how I would describe ideal program development resources for Extension programs. Most of our Extension Educators are busy with field responsibilities. To assist with implementation of best practices, resources need to be applicable to broad goals and easy to find, use, and adapt.

For Cornell Cooperative Extension (CCE), Qualtrics has proven to be a systematic yet flexible resource for supporting needs assessments and program evaluations. There are other options for survey development, but Qualtrics is supported at Cornell for faculty, staff, students, and CCE educators. We have also found Qualtrics to be a good match for any job, from very simple to highly complex surveys, and it provides substantive data protection for respondents through secure servers. Another feature that makes Qualtrics very attractive is the ability to create a library of sample Cooperative Extension evaluation forms and questions to help Extension Educators get started with survey development.

Staff have reported that, because of time limitations, evaluation measures are sometimes developed in haste just prior to a face-to-face event. When created in a hurry, questions might not reflect the intended program outcomes, and the resulting responses may not be as useful as they could have been. Staff also report that survey development can stall over simple details that feel overwhelming when a survey has to be built on short notice. Challenges noted include:

  • Getting the right survey look and feel
  • Developing questions and question order
  • Pilot testing questions
  • Understanding the overall evaluation questions for the program

To give the more common programs a leg up on building evaluation forms, draft surveys with questions tied to how programs reach statewide outcomes are being developed and shared in the Qualtrics Cornell Cooperative Extension survey library. The draft surveys have a Cooperative Extension header and footer, appropriate question logic for typical programs, questions and blocks of questions that have been piloted, and questions related to behavioral aspirations and outcomes. Surveys from the library can be saved into a user’s personal library and adapted as needed. Additionally, individual survey questions can be found in the question bank library.

On using the libraries:

CCE Qualtrics

Qualtrics users will note that “Library” is a tab in the Qualtrics menu where surveys can be saved into a user’s personal account and adapted. The data collected belong to a user’s personal account, not to the library. A further benefit of Qualtrics is its online documentation on using features such as libraries.

Similar options for a systematic approach exist beyond Qualtrics, of course. The idea is simple: provide a starting point that gives all staff a baseline set of questions for collecting data about their programs. When the starting point is adaptable, it builds the program practitioner’s capacity to grow into the evaluator role, adapting the questions to meet local needs. Where Qualtrics or another survey tool is not available, a virtual folder of adaptable documents can help local educators who are doing similar types of programs build around common program outcomes and indicators.

Survey Design: Testing, Monitoring, and Revising

Michael W. Duttweiler
Assistant Director for Program Development and Accountability
Cornell Cooperative Extension
mwd1@cornell.edu

Monica Hargraves
Manager of Evaluation for Extension and Outreach
Cornell Office for Research on Evaluation
mjh51@cornell.edu

Thus far in our four-phase process we have been looking ahead – anticipating specific information needs, teasing out the specific types of inquiry that would address those needs, and applying design principles to craft and present specific queries according to best survey practices. Before you step back to gaze upon your amazing creation, remember to heed the prescribed but often shortchanged step of pretesting the survey.

Pretesting

Many authors suggest using two types of pretest: one in which the participants know they are pretesting an instrument and one in which they do not. The former is an interactive process in which participants can share interpretations and suggestions with the researcher. In addition to clarifying questions, insights are gained on ease of completion, interest level, sequence, etc. In the second type of pretest, participants complete the survey as it will be implemented in your actual evaluation. In this case, the sample should resemble your actual sample as closely as possible. Careful review of the information generated will help you know whether you are on track to have the information needed to address your evaluation questions. Narinus (1999) provides practical hints for pretesting. DeMaio et al. (1998) provide a more formal introduction to pretesting.

Monitoring

Especially for a large-scale survey, a surprising amount of information may be available during survey implementation. In web surveys, for example, respondents often will zing an e-mail to survey contacts expressing frustrations or satisfactions with the survey instrument and/or offering additional information. The latter, in particular, can reveal additional questions that might have been useful or indicate that existing questions miss the mark. It may also be appropriate to include a “debriefing” question in the instrument itself, such as “Do you have any comments about your experience with this survey?” It can be challenging to balance respondent observations with what you know to be appropriate instrument designs that generate the information you need. Narinus (1999) said it well:

Remember that your participants are the experts when it comes to understanding your questions. But, you are the ultimate authority. There are times when suggestions made by participants are either impractical or run contrary to the rules of sound methodology. Keep the balance in mind.

Review and Modify

The real proof of your design comes with assessing how useful the resulting information is in addressing your evaluation questions. Response patterns to individual questions, such as poor response to open-ended questions, frequent “don’t know” responses, multiple write-in comments on scaled questions, and incomplete forms, can suggest needed improvements. Of course, the bottom-line assessment will be whether or not the data generated allow you to address your evaluation questions with confidence.
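Some of this review can be automated. The short sketch below, written in Python with pandas, assumes the responses have been exported to a file named survey_responses.csv with one row per respondent and one column per question; that file name, the “Don’t know” label, and the 20% threshold are illustrative assumptions rather than features of any particular survey tool.

```python
import pandas as pd

# Hypothetical export: one row per respondent, one column per question.
# The file name, column layout, and "Don't know" label are assumptions.
responses = pd.read_csv("survey_responses.csv")

DONT_KNOW = "Don't know"
THRESHOLD = 0.20  # arbitrary review threshold: 20% of respondents

for question in responses.columns:
    answers = responses[question]
    missing_rate = answers.isna().mean()            # skipped items / incomplete forms
    dont_know_rate = (answers == DONT_KNOW).mean()  # frequent "don't know" responses
    if missing_rate > THRESHOLD or dont_know_rate > THRESHOLD:
        print(f"Review '{question}': {missing_rate:.0%} blank, "
              f"{dont_know_rate:.0%} 'don't know'")
```

Questions flagged this way become candidates for rewording, added response options, or removal in the next revision.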

Summary

These four posts were aimed at promoting a comprehensive view of survey instrument design, grounded in establishing clear evaluation purposes and information needs, applying established survey design principles, pretesting, monitoring, and revising. Perhaps one essential ingredient went unstated throughout: humility. It is unlikely that anyone who has done extensive survey work has avoided the experience of a carefully crafted instrument occasionally missing the mark. The approach we outlined here helps the evaluator anticipate needs and likely responses and establishes a pattern of continual improvement. What approaches have worked best for you?

Sources:

Narinus, P. 1999. Get Better Info from All Your Surveys: 13 Important Tips for Pretesting. SPSS, Inc.  http://www.uoguelph.ca/htm/MJResearch/ResearchProcess/PretestingTips.htm

DeMaio, Theresa J., Jennifer Rothgeb, and Jennifer Hess. 1998. U.S. Bureau of the Census, Washington, DC. Accessed September 19, 2012. http://www.census.gov/srd/papers/pdf/sm98-03.pdf

Survey Design: Golden Rules of Survey Development

Monica Hargraves
Manager of Evaluation for Extension and Outreach
Cornell Office for Research on Evaluation
mjh51@cornell.edu

Ok, we are FINALLY going to talk about designing surveys. Just to be clear: the principles discussed here also apply to other types of measurement, such as focus group protocols and interview questions. They are relevant whether you are designing an instrument from scratch or adapting an existing one.

There are many good resources for instrument development.  For good overviews of surveys and NON-SURVEY options, see Unit 5 of University of Wisconsin Extension’s “Building Capacity in Evaluating Outcomes”.

Inspiration for the “Golden Rules” presented here comes from various professional sources, but also from personal frustration with the mixed quality and sheer number of surveys we encounter these days. Car dealerships, grocery stores, hotels – everyone asks for feedback. Survey fatigue is real, and it requires us to be even more mindful in our work.

With lots of technical guidance available, it can be useful to have a short and easier-to-remember list to start from. Here are my boiled-down Golden Rules, with elaboration below:

Respect your respondent

Mind your “EQs” (evaluation questions)

Look ahead (to data management and analysis)

Pilot Test!

Respect your respondent

  • Use clear, well-worded questions without jargon
  • Avoid double-barreled questions
  • Indicate what type of response you are looking for (if you need answers in years, say so)
  • Make sure response options cover all possibilities (and anticipate diversity in participants’ potential responses!)
  • Be sensitive to whether the information you’re asking for is readily at hand, or will take time to look up
  • DON’T ask anything you don’t need to
  • Ask first, thank in advance, thank at the end
  • Explain how you will handle and use their input
  • Give them someone to contact
  • Be culturally thoughtful, and sensitive about what could be sensitive
  • Go through the IRB (Human Subjects Review)!

These pointers are not just matters of courtesy – falling short will affect the completeness and quality of your data.

Mind your “EQs” (evaluation questions)

  • Match each survey question to one or more EQs. If some don’t match, revise or delete
  • Assemble all the survey questions associated with each EQ and make sure you will be getting all the information you need to answer the EQ
  • Make sure survey items are phrased in a way that will work for your EQ. (Beware of Y/N questions!)

These pointers help ensure you’ll get the data you need. Yes/No survey questions can be valuable, but they might not work well if you are trying to assess something that may have changed incrementally. Consider using “To what extent did you …” instead of “Did you …”: the former can capture small changes that your program did achieve, which would be lost if respondents could only say yes or no.
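To make that concrete, here is a small, made-up illustration (the numbers are invented for this example, not drawn from any program): the same modest improvement is visible on a five-point “to what extent” scale but nearly disappears once the answers are collapsed to yes/no.

```python
# Invented responses on a 1-5 "To what extent..." scale
# (1 = not at all, 5 = a great deal); not real program data.
before = [1, 2, 2, 1, 2, 3, 2, 1]
after  = [2, 3, 3, 2, 3, 4, 3, 2]

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
print(f"Mean extent: {mean_before:.2f} -> {mean_after:.2f}")  # 1.75 -> 2.75

# Collapsing the same answers to yes/no (counting "yes" only for 4 or 5)
# hides the shift almost entirely.
yes_before = sum(score >= 4 for score in before)
yes_after = sum(score >= 4 for score in after)
print(f"'Yes' responses: {yes_before} of 8 -> {yes_after} of 8")  # 0 -> 1
```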

Look ahead (to data management and analysis)

  • What kind of data will you have?
  • What form will the answers be in, and will you be able to add/average/group/test them as needed?
  • Do you want an odd or even number of categories in a scaled response question?
  • Are you putting open-ended and closed-ended questions to their best use?
  • Do the response categories match the question?
  • Do multiple choice options cover the information you will need?
  • Will you be able to defend your results against claims of “bias” or “leading questions”?

It really pays to “think forward” when you’ve drafted your survey, to make sure that you’ll be able to use the data.
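As one concrete illustration of “thinking forward,” the sketch below shows why it helps to decide on a coding scheme before the survey goes out: text answers can only be added, averaged, or grouped once they map onto numbers. The column names, response labels, and three-point scale are invented for this example.

```python
import pandas as pd

# Invented responses; the column names and labels are assumptions.
data = pd.DataFrame({
    "county": ["Tompkins", "Tompkins", "Erie", "Erie", "Erie"],
    "confidence": ["Somewhat confident", "Very confident", "Not at all confident",
                   "Somewhat confident", "Very confident"],
})

# As plain text, the answers cannot be averaged or compared across groups.
# A coding scheme planned in advance makes the analysis straightforward.
scale = {"Not at all confident": 1, "Somewhat confident": 2, "Very confident": 3}
data["confidence_score"] = data["confidence"].map(scale)

# Now responses can be grouped and averaged as anticipated.
print(data.groupby("county")["confidence_score"].mean())
```

The same forward look also settles details such as whether a scaled question needs an odd or even number of categories and whether the response options will still make sense at reporting time.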

The final Golden Rule, “Pilot Test!”, is the subject of next week’s blog.

Here are two versions of a “Checklist for Newly Developed Surveys” that may be helpful for refining a newly-developed survey. (If prompted for login for either file, just click cancel and the file should appear.)

Microsoft Word Version with Protected Fields for Data Entry (DOCX)

Adobe Acrobat Version (PDF)