AEA 2014: “Right-Sized” Evaluation

Ben Silliman, Extension Specialist and Professor of Youth, Family, and Community Sciences at North Carolina State University

The thought that recurred for me throughout AEA 2014 in Denver was the importance of “right-sizing” evaluation. Not everybody needs to be an expert, and not every program requires publishable evidence. This theme was apparent from the first morning, when Melissa Cater and I hosted a roundtable on evaluating youth program quality. Leaders of many different youth organizations shared stories about how quality is defined, implemented, measured, and valued in a variety of contexts.

Two prominent themes were staff training and stakeholder support. Front-line staff who understand and practice developmentally appropriate attitudes and skills at the point of service promote a climate for positive youth development. Evaluation that empowers staff to understand and succeed with youth energizes and informs their work. Mastering a checklist or survey process without grasping its connection to people and programs is just “going through the motions.”

Stakeholders, especially funders, must understand that long-term investments in quality provide the best prospects for reaching performance benchmarks such as school success. Thus the first “right-sizing” is not about evaluation expertise or generating outcome data, but about rightly understanding and connecting to participants’ needs. NASCAR team owners, who spend millions on high-performance drivers and equipment, understand that a race cannot be won without meticulous attention to “little things,” from the driver’s water bottle to the vehicle’s tire wear.

No matter the program, staff, or stakeholders, “right-sizing” evaluation is about thinking and communicating. Many of this year’s presentations underlined the importance of evaluative thinking, including the disciplines of researching best practice, modeling paths toward outcomes, and reflecting on teachable moments with diverse stakeholders. Equally important are regular communication among program partners, interpretation of contexts, practices, and findings for those stakeholders, and growth through communities of practice with peers. To support Youth Program Quality evaluation, I am launching a resource website here. The site also includes research and tools on Growth and Development and on Evaluation Capacity Building, including links to E-Basics Online Evaluation Training and discussion forums on Evaluation and Youth Program Quality.

Conferences such as AEA are great for encouragement and insight, but once a year is “too low a dosage” to promote personal and professional growth. On my return flight I read Atul Gawande’s “Better” (2007, Picador), a popular collection of stories on how evaluative thinking is improving health and medical care. From the first chapter he underlines the importance of diligence in attending to small actions and thinking about large systems. The closing chapter describes how under-resourced teams in Indian medical clinics finished their 12-plus-hour days by debriefing “lessons learned,” building resilience in themselves and their patients. He notes how well-resourced Western hospital staff often feel they have no time to reflect and learn together the way those village teams did.

As important as evaluation may be for accountability or funding, without an understanding of people’s needs and program practices, checklists and reports quickly become “the tail that wags the dog,” rather than the best way to tell that the dog is healthy, happy, and not ready to bite.