Visual Methods in Evaluation: Engaging Participants in Data Collection

Melissa Cater, PhD, Louisiana State University AgCenter

How often do we say “A picture is worth a thousand words”? It’s the touching photograph, the heartfelt video, or the poignant drawing that best communicates the impact of our program. While we often incorporate visuals into our impact reports, these artifacts have another evaluation use. Photographs, videos, and art may be employed as a bridge to assist program participants in verbalizing their thoughts, as a symbol of the meanings participants make of the content, or as data illustrating change over time.

Visual methods are extremely useful in participatory evaluations because clientele, or even stakeholders, become as immersed in the evaluation process as the program staff or evaluator. Here are a few ideas for engaging participants in the process:

  • Guide group members in pinpointing the reasons for the evaluation. These reasons could include showcasing program successes, identifying needed program improvements, or assessing needs for future iterations of the program.
  • Assist the group in creating an overall evaluation question that can be answered through pictures taken by the group, artistic media created, or videos produced.
  • Ask program participants to select the best method for answering the evaluation question(s): photography, videography, or art.
  • Support peer-to-peer training in the use of equipment.

The following table lists some ways that visual methods may be used in evaluation. While the topics are very broad, the evaluation ideas isolate specific potential program outcomes, improvements, or needs.

Ideally, programs build visual methods into the evaluation plan from the outset. However, developing and mature programs still have many opportunities to use these methods, both to look retrospectively at program outcomes and improvements and prospectively at participant needs.


What ideas do you have for using participatory visual methods in evaluation? Please share your thoughts in the comment box.


This article, Visual Methods in Evaluation: An Introduction, was originally published Friday, February 15, 2013, on the Evaluation Community of Practice blog, a part of eXtension.   This work is licensed under a Creative Commons Attribution 3.0 Unported License.

Survey Design: Testing, Monitoring, and Revising

Michael W. Duttweiler
Assistant Director for Program Development and Accountability
Cornell Cooperative Extension

Monica Hargraves
Manager of Evaluation for Extension and Outreach
Cornell Office for Research on Evaluation

Thus far in our four-phase process we have been looking ahead – anticipating specific information needs, teasing out the specific types of inquiry that would address those needs, and applying design principles to craft and present specific queries according to best survey practices. Before you step back to gaze upon your amazing creation, remember to heed the prescribed but often shortchanged step of pretesting the survey.

Pretesting Many authors suggest using two types of pretest: one in which the participants know they are pretesting an instrument and one in which they do not.  The former is an interactive process in which participants can share interpretations and suggestions with the researcher.  In addition to clarifying questions, insights are gained on ease of completion, interest level, sequence, etc.  In the second type of pretest, participants complete the survey as it will be implemented in your actual evaluation.  In this case, the sample should resemble your actual sample as closely as possible. Careful review of the information generated will help you know if you are on track to have the information needed to address your evaluation questions.  Narins (1999) provides practical hints for pretesting.  DeMaio et al. (1998) provide a more formal introduction to pretesting.

Monitoring Especially for a large-scale survey, a surprising amount of information may be available during survey implementation.  In web surveys, for example, respondents often will zing an e-mail to survey contacts expressing frustrations or satisfactions with the survey instrument and/or offering additional information. The latter, in particular, can divulge additional questions that might have been useful or indicate that existing questions miss the mark. It may also be appropriate to include a “debriefing” question in the instrument itself, such as “Do you have any comments about your experience with this survey?”  It can be challenging to balance respondent observations with what you know to be appropriate instrument designs that generate information you need.  Narins (1999) said it well:

Remember that your participants are the experts when it comes to understanding your questions. But, you are the ultimate authority. There are times when suggestions made by participants are either impractical or run contrary to the rules of sound methodology. Keep the balance in mind.

Review and Modify The real proof of your design comes with assessment of the utility of the information provided in addressing your evaluation questions.  Response patterns to individual questions such as poor response to open-ended questions or frequent “don’t know” responses, multiple write-in comments on scaled questions, and incomplete forms can suggest needed improvements. Of course, the bottom line assessment will be whether or not the data generated allow you to address your evaluation questions with confidence.
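When responses are tabulated electronically, a quick pass over the data can surface the response patterns described above. The sketch below is purely illustrative – the question names, response wording, and the 20% threshold are hypothetical placeholders, not part of any particular survey tool:

```python
# Flag survey items whose "don't know" or blank-response rates are high
# enough to suggest the question may need rewording or removal.
# Question names, answer wording, and the 20% threshold are hypothetical.
from collections import Counter

responses = [
    {"q1": "agree", "q2": "don't know", "q3": ""},
    {"q1": "agree", "q2": "don't know", "q3": "disagree"},
    {"q1": "agree", "q2": "agree", "q3": ""},
    {"q1": "agree", "q2": "don't know", "q3": "agree"},
]

def flag_items(responses, threshold=0.20):
    """Return question names whose rate of blank or "don't know"
    answers exceeds the given threshold."""
    counts = Counter()
    for record in responses:
        for question, answer in record.items():
            if answer.strip().lower() in ("", "don't know"):
                counts[question] += 1
    n = len(responses)
    return sorted(q for q, c in counts.items() if c / n > threshold)

print(flag_items(responses))  # questions worth a second look
```

A flagged item is not automatically a bad one – it is simply a prompt to revisit the wording, the response options, or whether respondents could reasonably be expected to know the answer.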

Summary These four posts aimed to promote a comprehensive view of survey instrument design based on establishing clear evaluation purposes and information needs, applying established survey design principles, pretesting, monitoring, and revising.  Perhaps one essential ingredient went unstated throughout: humility.  It’s unlikely that anyone who has done extensive survey work has avoided the experience of a carefully crafted instrument occasionally missing the mark.  The approach we outlined here helps the evaluator anticipate needs and likely responses and establishes a pattern of continual improvement.  What approaches have worked best for you?


Narins, P. 1999. Get Better Info from All Your Surveys: 13 Important Tips for Pretesting. SPSS, Inc.

DeMaio, Theresa J., Jennifer Rothgeb, and Jennifer Hess. 1998. U.S. Bureau of the Census, Washington, DC. Accessed September 19, 2012.

Survey Design: Golden Rules of Survey Development

Monica Hargraves
Manager of Evaluation for Extension and Outreach
Cornell Office for Research on Evaluation

OK, we are FINALLY going to talk about designing surveys.  Just to be clear: the principles discussed here also apply to other types of measurement, such as focus group protocols and interview questions. They are relevant whether you are designing an instrument from scratch or adapting an existing instrument.

There are many good resources for instrument development.  For good overviews of survey and non-survey options, see Unit 5 of University of Wisconsin Extension’s “Building Capacity in Evaluating Outcomes”.

Inspiration for the “Golden Rules” presented here comes from various professional sources, but also from personal frustration with the mixed quality and sheer number of surveys we encounter these days. Car dealerships, grocery stores, hotels – everyone asks for feedback. Survey fatigue is real, and it requires us to be even more mindful in our work.

With lots of technical guidance available, it can be useful to have a short and easier-to-remember list to start from. Here are my boiled-down Golden Rules, with elaboration below:

  • Respect your respondent
  • Mind your “EQs” (evaluation questions)
  • Look ahead (to data management and analysis)
  • Pilot Test!

Respect your respondent

  • Use clear, well-worded questions without jargon
  • Avoid double-barreled questions
  • Indicate what type of response you are looking for (if you need answers in years, say so)
  • Make sure response options cover all possibilities (and anticipate diversity in participants’ potential responses!)
  • Be sensitive to whether the information you’re asking for is readily at hand, or will take time to look up
  • DON’T ask anything you don’t need to
  • Ask first, thank in advance, thank at the end
  • Explain how you will handle and use their input
  • Give them someone to contact
  • Be culturally thoughtful, and sensitive about what could be sensitive
  • Go through the IRB (Human Subjects Review)!

These pointers are not just matters of courtesy – falling short will affect the completeness and quality of your data.

Mind your “EQs” (evaluation questions)

  • Match each survey question to one or more EQs. If some don’t match, revise or delete
  • Assemble all the survey questions associated with each EQ and make sure you will be getting all the info you need to answer the EQ
  • Make sure survey items are phrased in a way that will work for your EQ. (Beware of Y/N questions!)

These pointers help ensure you’ll get the data you need. Yes/No survey questions can be valuable, but they might not work well if you are trying to assess something that changed incrementally.  Consider using “To what extent did you …” instead of “Did you …”: the former can capture small changes that your program did achieve, which would be lost if respondents could only say yes or no.

Look ahead (to data management and analysis)

  • What kind of data will you have?
  • What form will the answers be in, and will you be able to add/average/group/test them as needed?
  • Do you want an odd or even number of categories in a scaled response question?
  • Are you putting open-ended and closed-ended questions to their best use?
  • Do the response categories match the question?
  • Do multiple choice options cover the information you will need?
  • Will you be able to defend your results against claims of “bias” or “leading questions”?

It really pays to “think forward” when you’ve drafted your survey, to make sure that you’ll be able to use the data.

The final Golden Rule, “Pilot Test!”, is the subject of next week’s blog.

Here are two versions of a “Checklist for Newly Developed Surveys” that may be helpful for refining a newly-developed survey. (If prompted for login for either file, just click cancel and the file should appear.)

Microsoft Word Version with Protected Fields for Data Entry (DOCX)

Adobe Acrobat Version (PDF)