W08: Evaluation as a Tool for Strengthening Programs: A Primer for the Non-Evaluator

DePass explained some of the reasons for evaluating programs. Evaluation plays a critical role in determining the extent to which program objectives have been accomplished, and it helps determine whether specific activities or approaches are contributing to a program’s success. As STEM educators move towards more integrative and active modes of teaching and learning, evaluation can help them ensure that the experiences they are designing are leading to the intended student learning outcomes.

Evaluations should be considered from the beginning of a project, DePass explained. The first step is to identify appropriate and essential measures, and the measurements should be designed so that they do not themselves impact the program. “Simply because you can measure something doesn’t necessarily mean it’s important,” DePass said.

As an example involving recruitment, DePass suggested that a program might go beyond counting the number of students recruited to measuring the success of different forms of recruitment, such as advertisements, mailings, or word of mouth. Knowing which avenue was most successful, he said, will help improve recruitment in the next cycle, either by abandoning weaker methods or by finding ways to make those methods more effective.
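This kind of channel comparison is straightforward to prototype. The Python sketch below ranks recruitment channels by how well each converts contacts into enrolled students; the channel names and counts are hypothetical illustrations, not data from the workshop.

```python
# Compare hypothetical recruitment channels by conversion rate.
channels = {
    # channel: (students reached, students who enrolled)
    "advertisements": (500, 10),
    "mailings": (1200, 18),
    "word_of_mouth": (150, 12),
}

def conversion_rate(reached: int, enrolled: int) -> float:
    """Fraction of contacted students who actually enrolled."""
    return enrolled / reached if reached else 0.0

# Rank channels so weaker methods can be reconsidered or dropped
# in the next recruitment cycle.
ranked = sorted(channels.items(),
                key=lambda item: conversion_rate(*item[1]),
                reverse=True)

for name, (reached, enrolled) in ranked:
    print(f"{name}: {enrolled}/{reached} enrolled "
          f"({conversion_rate(reached, enrolled):.1%})")
```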

Evaluations should be part of every grant proposal, he added, because they can help reveal whether a project is well organized and realistic in what it aims to achieve.

Elisabeth Russell McKenzie, a program administrator at Temple University, discussed the use of logic models to build evaluations. A logic model begins by identifying the situation: the need a program is addressing, symptoms and problems, and which groups of stakeholders are most important. It then looks at the mission, vision, and goals of the program.

A model also needs to incorporate resources: funding, laboratories, programmatic support, materials, equipment, and people. The final piece is the intended outcomes of the project. In this last area, Russell McKenzie pointed out, time becomes a factor. Short-term outcomes happen quickly, medium-term outcomes are achievable within two or three years, and long-term outcomes look at the big picture and are aligned more with the vision for the project or program.
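These components lend themselves to a structured representation. The following sketch shows one way a logic model’s pieces might be captured in Python; the example program and all of its entries are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    situation: str                   # the need the program addresses
    stakeholders: list[str]          # groups most affected
    goals: list[str]                 # mission- and vision-aligned goals
    resources: list[str]             # funding, labs, people, equipment
    short_term_outcomes: list[str]   # happen quickly
    medium_term_outcomes: list[str]  # achievable within two or three years
    long_term_outcomes: list[str]    # big picture, aligned with the vision

# Hypothetical example of a filled-in model.
model = LogicModel(
    situation="Low persistence of first-year students in STEM majors",
    stakeholders=["students", "faculty mentors", "funders"],
    goals=["Raise second-year STEM retention"],
    resources=["grant funding", "teaching labs", "peer mentors"],
    short_term_outcomes=["Students complete a mentored research rotation"],
    medium_term_outcomes=["Retention improves within two to three years"],
    long_term_outcomes=["More graduates enter STEM careers"],
)
```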

A logic model makes evaluation and planning easier by laying out a framework and timeline for a program. For example, an implementation evaluation, which should happen shortly after a program begins, can take stock of whether all the expected resources are in place. A process evaluation can look at whether the program is working as predicted. An outcomes evaluation can determine whether the goals have been met.
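As a rough illustration, these three evaluation types can be keyed to the program timeline. The timing labels and guiding questions below paraphrase the descriptions above.

```python
# Each evaluation type, when it runs, and the question it answers.
evaluation_plan = [
    ("implementation", "shortly after launch",
     "Are all the expected resources in place?"),
    ("process", "while the program runs",
     "Is the program working as predicted?"),
    ("outcomes", "at the end of each outcome horizon",
     "Have the intended goals been met?"),
]

for stage, timing, question in evaluation_plan:
    print(f"{stage.title()} evaluation ({timing}): {question}")
```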

“At the end of the day it’s all about alignment,” DePass said. Priorities, goals, and resources all need to work together to give a program the best chance of success.

Both qualitative and quantitative measures have value, DePass explained, but whereas quantitative measures can be fairly straightforward, qualitative evaluation often seems more complex. He suggested looking for interview questions, survey questions, and other instruments that have already been validated. However, the groups used to validate a tool need to be similar to the group being evaluated.
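When piloting a validated instrument with a new group, one common quantitative check, offered here as an illustration rather than a method the speakers prescribed, is internal consistency, often estimated with Cronbach’s alpha. The pilot data below are invented.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency estimate for a respondents-by-items matrix."""
    k = scores.shape[1]                         # number of survey items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: five respondents answering four Likert items.
pilot = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")  # ~0.93 here
```

Values near or above 0.7 are conventionally read as acceptable reliability, though such statistical checks are no substitute for validating an instrument with a group similar to the one being evaluated.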

The people who run a program are usually not the best ones to conduct its qualitative evaluation, he added. In addition, giving survey respondents some level of anonymity can be valuable in eliciting truthful responses.

DePass encouraged workshop participants to make sure that they disseminate the information learned through evaluations. Dissemination gives programs credit for adding to the body of literature and leads to a more disciplined approach to evaluation.
