Evaluating a program’s design can help improve its effectiveness

Although evaluation is not without its challenges, the information obtained can help to rationalize and target program resources more effectively.

Program evaluation is a valuable tool for program managers seeking to strengthen the quality of their programs and improve results. As program administrators, we design and implement programs to help graduate and postdoctoral students navigate the complexities of academia. Evaluating those programs is integral to continuous improvement and to accountability to management, program participants and funders. It requires critically examining each program by systematically collecting data, analyzing the information, and making informed decisions and recommendations.

In his book Program Evaluation in Practice: Core Concepts and Examples for Discussion and Analysis, Dean Spaulding defines a program as a set of specific activities designed for a specific purpose, with quantifiable goals and objectives. Each activity of a program calls for a suitably adapted methodology and evaluation framework. However, many of us fail to develop robust evaluation frameworks for lack of time and resources. Many front-line administrators also feel that program evaluation is complex, requires specialized expertise, and could divert limited resources from program delivery. This sentiment is not unfounded: the process can indeed be complex and resource-intensive. But failing to evaluate programming means missed opportunities to improve the quality of design, execution and results. It may even be counterproductive, since an assessment may reveal cost and resource savings that can, in turn, support program growth and development in other areas.

But accountability is the most compelling reason for practitioners to make evaluation a regular habit. We work with limited resources and funds, and we are responsible not only for managing those resources but also for training a new generation of highly qualified professionals. Program evaluation gives us the tools to remain accountable to our institutions and to the graduate and postdoctoral students we serve.

According to the National Association of Student Personnel Administrators, program administrators can ask the following questions to determine whether and when a program should be evaluated:

  • Can evaluation results influence program decisions?
  • Can the assessment be done in time to be useful?
  • Is the program important enough to merit an evaluation?
  • Is program performance considered problematic?
  • Where is the program in its development?

The answers to these questions can set the stage for choosing which programs to evaluate and selecting the appropriate methodology.

Directors must also choose the direction of their evaluation: inward or outward. When you use evaluation to improve program design and implementation, looking inward at how the program is working, it is useful to periodically assess and adapt your activities so that they run as efficiently as possible. Identifying areas for improvement will ultimately help you achieve your goals more effectively. When you use evaluation to demonstrate program impact, looking outward at what the program accomplishes, the information you collect allows you to better communicate that impact to others, boost staff morale, and attract and retain the support of current and potential funders.

Assessments fall into two broad categories: formative and summative. Formative evaluations should be conducted during initial program development and implementation and are useful if you want guidance on how to achieve your goals or improve your program. Summative evaluations should be done once the program is well established and will tell you how well the program is achieving its goals. Logic models and evaluation matrices are standard evaluation design methods. A logic model identifies a program's resources, activities, outputs, and short-, medium- and long-term outcomes. For a grant-writing workshop, for example, the resources might be staff time and funding, the activities the workshop sessions, the outputs the number of participants trained, and a long-term outcome an improved grant success rate. An evaluation matrix is used to present the program evaluation design and is completed after the program components have been examined using a logic model.

So what makes a good evaluation?

A carefully executed evaluation will benefit all stakeholders more than a hasty, retrospective one. A good evaluation is tailored to the program and builds on existing knowledge and resources. Below are some indicators of a good evaluation.

Inclusiveness

An inclusive evaluation takes into account a diversity of viewpoints and ensures that the results are as comprehensive and unbiased as possible. Input should come from everyone involved in and affected by the evaluation, whether students, administrators, academics or members of the community.

Objectivity and honesty

Evaluation data can reveal both the strengths and the limitations of a program. An evaluation should therefore not be a mere declaration of the program's success or failure, but a demonstration of how limited resources can be allocated to make things better.

Replicability and rigor

A good assessment is likely to be reproducible. Sound design and methodology help you draw more accurate conclusions and give others confidence in your findings.

Program evaluation is an iterative process, and things may change as it unfolds: the purpose and objectives identified at the start of an assessment may shift during the design and data collection phases. Being open to such adjustments is extremely helpful, especially for new program evaluators. Although evaluation is not without its challenges, the information gained can help to streamline and target program resources more effectively, devoting time and money to the services that benefit program participants. Data on program results can also help secure future funding.

Abdul J. Gaspar