
A sentence in Prentice Zinn’s December 2017 AEA365 blog post really piqued my interest.
Zinn discusses common areas of tension, such as lack of funding for evaluation and outcomes anxiety. I appreciate this perspective and believe some of that tension may stem from the feeling that evaluation is a mandate rather than something grantees want to do. The difference between ‘must’ and ‘want’ is the difference between ‘tension’ and ‘willingness’.
How does the shift happen? Build a culture of evaluation.
Funders who embrace program evaluation as a learning opportunity have the chance to partner with grantees to explore continuous program improvement together.
When outcomes are created collaboratively, data collection tools are agreed upon, and grantees are compensated for the extra time required to engage in evaluation activities, tension can dissipate.
For this to happen, though, all parties must believe that the purpose of program evaluation is to learn how best to meet the needs of the people being served. There must be a mutual willingness to shift away from thinking ‘we have to do evaluation because it’s required’ to a new mindset of ‘we want to do evaluation to learn’.
For more reading about how to begin to build a culture of evaluation right now, check out our free white paper Building a Culture of Evaluation.

How I write program evaluation reports has evolved over the years. I was taught to write text and tables, with lots and lots of detail, resulting in lots and lots of pages.
I still write the traditional technical report, with a little more visualization than in the past. I also write and design an impact report collaboratively with the client. As part of communicating the findings publicly, we pull out the most important data, discuss how the client will use those data, and integrate visualizations throughout. (Shout out to @Evergreendata for teaching me the visual ways.)
I am not a graphic designer. I create a design concept, placing the content in tables with lots of design notes, such as ‘put a bubble graph here’ and ‘place a photo of a manufactured home park here’. I also share the client’s logo and brand colors to ensure the stakeholder report reflects the organization.
We are currently working with The Meyer Memorial Trust (MMT) to conduct a cross-site evaluation of MMT’s Affordable Housing Initiative (AHI) Manufactured Home Repair Program (MHRP). The year one impact report summarizes first-year findings, including participation, outcomes, overall impact, and successes and challenges, in eight visual pages.
Join me on Tuesday, February 5, from 8:30 to 10:00 a.m. for an interactive workshop on logic models, hosted by WVDO.
Here are some of my favorite resources when it comes to report writing and data viz.

Outcome statements are change statements. They are critical to ensuring you are collecting data that will inform program improvement efforts. They address the key question: What change is expected as a result of the program?
The statements provide the foundation from which all data collection questions will stem. Too often, organizations jump into creating surveys without outcome statements. The result can be questions that have nothing to do with understanding program impact or measuring progress. It’s like throwing darts at a board and hoping something sticks.
The process of creating measurable outcomes requires time, planning, and collaboration. The statements provide direction, so when you’re ready to collect data, what you ask is aligned with the outcome statements. You’ll throw darts and hit a bullseye.
| Program Activity (What Does the Program Do?) | Outcome (What Change Is Expected as a Result?) | Outcome (What Change Is Expected as a Result?) |
|---|---|---|
| Provide math and science classroom activities for at-risk students | Students will improve their attitude toward math and science | Students will increase their interest in math and science |
At a bare minimum, development and program staff come together to create these statements. Ideally, others are at the table, depending on the size of your organization. Development staff can use outcome statements in their grant proposals and other fundraising activities as applicable. Program staff are typically the ones who collect the data. The result: development staff have the data they need to report to funders, and program staff have the data they need to understand program successes and challenges. Bullseye!
This mini guide series goes into more detail on several evaluation subjects. The first one highlights how to create logic models and measurable outcomes.

Collaboration is a recurring theme in the advice I give my clients. It’s just as important for me as a practitioner to keep collaborating with fellow evaluators. I recently came across the Evaluation Wrecking Crew in an AEA365 blog post tip of the day. I’ve worked with several Science, Technology, Engineering and Math (STEM) organizations and was intrigued by the following excerpt:
Thanks to the American Evaluation Association (AEA) for creating so many opportunities for evaluators to connect and collaborate. It’s true. On my own, I’m a drop in a bucket. Working alongside other evaluators toward a common goal, we may create an ocean.
Chari accurately captured the fundamental goals and mission of our organization and transformed our input into a clear evaluation process that helps us assess the impact of our programs on the lives of the families that we serve. Now we have an amazing way to measure the physical, emotional, and mental effects of our programs and to guide change, ensuring that we are delivering services in the most effective way possible.
Brandi Tuck, Executive Director, Path Home