AEA2014: “Right-sized” Evaluation

Ben Silliman, Extension Specialist and Professor of Youth, Family, and Community Sciences at North Carolina State University

The thought that recurred for me throughout AEA14 in Denver was the importance of “right-sizing” evaluation. Not everybody needs to be an expert and not every program requires publishable evidence. This theme was apparent from the first morning when Melissa Cater and I hosted a roundtable on evaluating youth program quality. Leaders of many different youth organizations shared stories on how quality is defined, implemented, measured, and valued in a variety of contexts.

Two prominent themes were staff training and stakeholder support. Front-line staff who understand and practice developmentally-appropriate attitudes and skills at point-of-service promote a climate for positive youth development. Evaluation that empowers staff to understand and succeed with youth energizes and informs their work. Mastering a checklist or survey process without grasping its connection to people and programs is just “going through the motions.”

Stakeholders, especially funders, must understand that long-term investments in quality provide the best prospects for reaching performance benchmarks such as school success. Thus the first “right-sizing” is not related to evaluation expertise or generating data for outcomes, but rightly understanding and connecting to participants’ needs. NASCAR owners, who spend millions on high-performance drivers and equipment, understand that a race cannot be won without meticulous attention to “little things” from the driver’s water bottle to the vehicle’s tire wear.

No matter what the program, staff, or stakeholders, “right-sizing” evaluation is about thinking and communicating. Many of this year’s presentations underlined the importance of evaluative thinking, including the disciplines of researching best practice, modeling paths toward outcomes, and reflecting on teachable moments with diverse stakeholders. Equally important is regular communication among program partners, interpretation of contexts, practices, and findings to diverse stakeholders, and growing through communities of practice with peers. To support Youth Program Quality evaluation, I am launching a resource web site here. The site also includes research and tools on Growth and Development and on Evaluation Capacity Building, including links to E-Basics Online Evaluation Training and Discussion forums on Evaluation and Youth Program Quality.

Conferences such as AEA are great for encouragement and insight, but once a year is "too low a dosage" to promote personal and professional growth. On my return flight I read Atul Gawande's "Better" (2007, Picador), a popular book of stories on how evaluative thinking is improving health and medical care. From the first chapter he underlines the importance of diligence in attending to small actions and thinking about large systems. The closing chapter describes how under-resourced teams in Indian medical clinics finished their 12-plus-hour days by debriefing "lessons learned," building resilience in themselves and their patients. He noted how well-resourced Western hospital staff often feel they have no time to reflect and learn together like those village teams.

As important as evaluation may be for accountability or funding, without an understanding of people's needs and program practices, checklists and reports quickly become "the tail that wags the dog" rather than the best way to tell that the dog is healthy, happy, and not ready to bite.


AEA2014: To Join or Not to Join AEA

By Pennie Crinion, Director of Program Planning and Evaluation, University of Illinois Extension

Ever find yourself wondering if you should renew an Extension professional association membership or join another one? As an administrator for most of my career, I've seen the addition of three new national professional associations for Extension, bringing the total to seven, and have felt pressure in deciding how many to join.

Then, when I assumed my current position, which included leadership for evaluating programs, my predecessor impressed upon me the need to join the American Evaluation Association (AEA). So I registered for the Summer Evaluation Institute held in Atlanta but didn't formally commit to AEA membership and national conference participation until 2013.

This year I found the conference theme, "Visionary Evaluation for a Sustainable, Equitable Future," particularly interesting in light of concerns about protecting the environment, an issue with global impact. Bob Willard, a leading expert on quantifying and promoting the business value of corporate sustainability strategies and a core faculty member of the International Society of Sustainability Professionals, provided the opening presentation.

His efforts to engage the business community in proactively avoiding risks and capturing opportunities through smart environmental, social, and governance strategies were insightful and reassured me that corporations are increasingly recognizing their place at the intersection of global economic, environmental, and equity issues. As Bob shared the business context for corporate social and environmental responsibility, he stressed the importance of standards and benchmarks in linking the corporate world to the field of evaluation. He highlighted standards that encourage organizations to create positive environmental, social, and economic value so that we have the possibility of sustaining a global economy, society, and ecosystem. I left convinced that the world-renowned evaluation experts in attendance, and the members of AEA, would rise to the opportunity he described.

As always, I also appreciated conference opportunities to view the poster session and the hundreds of choices offered in 15 concurrent session segments supported by 53 evaluation topical interest groups, including the Extension Education Evaluation group. Camaraderie with Extension colleagues, reasonable registration fees, and opportunities to visit with others in the evaluation field are other great features.

So you may be asking: what other benefits would I reap by joining AEA? Here's a list for your consideration.

  • Coffee Break demonstrations: 20-minute webinars designed to introduce audience members to new tools, techniques, and strategies in the field of evaluation.
  • Professional Development eStudy: 3- or 6-hour webinars offered by evaluation's top presenters in multiple 90-minute sessions, allowing for in-depth exploration of hot topics and questions from the audience.
  • AEA's active listserv, EVALTALK, with more than 2,000 subscribers from around the world who welcome questions and discussion on any topic related to evaluation.
  • AEA365, a blog dedicated to highlighting Hot Tips, Cool Tricks, Rad Resources, and Lessons Learned, with the goal of a daily post from and for evaluators around the globe.
  • Printed or online copies of the American Journal of Evaluation and New Directions for Evaluation.
  • Job posting opportunities.

So visit www.eval.org and explore AEA membership.

AEA2014: Internal vs. External Evaluator Perspectives

By Brigitte Scott, Evaluation and Research Specialist for the Military Families Learning Network

Greetings! I’d like to use this blog post to introduce myself as the new Southern Region rep for the AEA Extension Education Evaluation Topical Interest Group (TIG) Board, and to share some thoughts from #eval14 as well.

I have been working with the Military Families Learning Network (MFLN) as the evaluation and research lead for almost a year now. Outside of my “intro” evaluation work during my post-doc, this is my first professional evaluation position. My graduate degrees are in curriculum and instruction, and I have worked extensively with qualitative research methodologies. As an educator and a researcher, I am hard-wired for an ethic of care in my evaluation work, and I have a very high tolerance for complexity and “messiness.” All of these things greatly inform my work with MFLN.

I am an internal evaluator, and as such, I’m a part of the MFLN leadership team, have some supervisory responsibilities, and have a heavy hand in processes across the network. Evaluation permeates all that I do within the MFLN. And while I have often been referred to as the “one-(wo)man evaluation pop-up-shop” for the MFLN, my goal is to support evaluation and evaluative thinking as nested activities within the daily processes of all the individuals who are a part of our learning network. I know I speak for my MFLN colleagues when I say we are each committed to our work, to the communities we serve, and to the MFLN mission. We want to do the best we can do as professionals because we believe in our work and we believe in the value of Cooperative Extension.

With all this in mind, I would say one notable impression #eval14 made on me relates to the different perspectives internal and external evaluators bring to our work, and what this means for evaluation in the Cooperative Extension context. Although I'm aware that perhaps the majority of evaluators work from an external perspective, this was the first time I had a firm sense of what the external evaluation perspective looks and sounds like "on the ground." I heard a lot of terms like "treatment," "randomized control," and "intervention." Of course, these aren't new terms for me in the context of education research or evaluation, and certainly these terms and their methodologies and paradigms have their place in evaluation work. But the thought of using these terms (and their accompanying methodologies) in my work felt . . . jarring. I know that not all Extension evaluators are internal to the programs they evaluate. But I'm thinking that as Extension professionals, we all have a certain level of "buy-in" and belief in the Cooperative Extension System, what it stands for, and what we do collectively for and within our communities. I would venture to say that as we work within our communities, we have a certain involvement that transcends, even complicates, our evaluation work. Though this external "rhetoric" prompted a certain type of reaction in me, it also reminded me of how susceptible I can be, as an internal evaluator, to (positive) bias toward work that I believe in and am invested in intellectually and programmatically. It was a strange new entrée for me into reflexive thinking about my own evaluation practice. But there it was.

And I wonder, from all of you: What do you think? How does your commitment to the mission and values of Extension "flavor" your evaluation work? How do your disciplinary background and even your own professional "positioning" impact your methodologies? Your findings? Your evaluation reports?

It was great meeting some of you at the Extension Education Evaluation (EEE) TIG meeting, and as I understand, it won’t be long before I’ll be helping to review EEE submissions for #eval15. I look forward to getting to know you all and the work that you do not only through our AEA TIG, but also in our collective work as evaluators within Cooperative Extension. Feel free to connect at any time, and if you are attending NAEPSDP (the National Association of Extension Program and Staff Development Professionals) in December in San Antonio, please look for me! We’ll grab a coffee and wax reflexive…!

 

Until then,

Brigitte Scott

brigit2@vt.edu

@4ed_eval