Summative evaluation informs judgments about whether the program worked. Outcome evaluation focuses on the observable conditions of a specific population, organizational attribute, or social condition that a program is expected to have changed.
Whereas outcome evaluation tends to focus on the conditions or behaviors that the program was expected to affect most directly and immediately, process evaluation examines how a program is implemented, and impact evaluation addresses its longer-range results. For example, assessing the strategies used to implement a smoking cessation program and determining the degree to which it reached the target population are process evaluations.
Reduction in morbidity and mortality associated with cardiovascular disease may represent an impact goal for a smoking cessation program (Rossi et al.).
Several institutions have identified guidelines for an effective evaluation. For example, in 1999, CDC published a framework to guide public health professionals in developing and implementing a program evaluation (CDC, 1999). Although the components are interdependent and might be implemented in a nonlinear order, the earlier domains provide a foundation for subsequent areas. They include: engaging stakeholders; describing the program; focusing the evaluation design; gathering credible evidence; justifying conclusions; and ensuring use and sharing lessons learned. Five years before CDC issued its framework, the Joint Committee on Standards for Educational Evaluation created an important and practical resource for improving program evaluation.
The Joint Committee, a nonprofit coalition of major professional organizations concerned with the quality of program evaluations, identified four major categories of standards — propriety, utility, feasibility, and accuracy — to consider when conducting a program evaluation.
Propriety standards focus on ensuring that an evaluation will be conducted legally, ethically, and with regard for promoting the welfare of those involved in or affected by the program evaluation.
In addition to the rights of human subjects that are the concern of institutional review boards, propriety standards promote a service orientation. Utility standards are intended to ensure that the evaluation will meet the information needs of intended users. Involving stakeholders, using credible evaluation methods, asking pertinent questions, including stakeholder perspectives, and providing clear and timely evaluation reports represent attention to utility standards.
Accuracy standards ask whether the evaluation will produce findings that are valid and reliable, given the needs of those who will use the results. Sometimes the standards broaden your exploration of choices. Often, they help reduce the options at each step to a manageable number. Applied to engaging stakeholders, for example, the standards prompt questions such as:

Feasibility: How much time and effort can be devoted to stakeholder engagement?
Propriety: To be ethical, which stakeholders need to be consulted: those served by the program, or the community in which it operates?
Accuracy: How broadly do you need to engage stakeholders to paint an accurate picture of this program?

Similarly, there are unlimited ways to gather credible evidence (Step 4). Asking these same kinds of questions as you approach evidence gathering will help identify the approaches that will be most useful, feasible, proper, and accurate for this evaluation at this time.
Thus, the CDC Framework approach supports the fundamental insight that there is no such thing as the right program evaluation. Rather, over the life of a program, any number of evaluations may be appropriate, depending on the situation. Good evaluation requires a combination of skills that are rarely found in one person. The preferred approach is to choose an evaluation team that includes internal program staff, external stakeholders, and possibly consultants or contractors with evaluation expertise.
An initial step in the formation of a team is to decide who will be responsible for planning and implementing evaluation activities. One program staff person should be selected as the lead evaluator to coordinate program efforts. This person should be responsible for evaluation activities, including planning and budgeting for evaluation, developing program objectives, addressing data collection needs, reporting findings, and working with consultants.
The lead evaluator is ultimately responsible for engaging stakeholders, consultants, and other collaborators who bring the skills and interests needed to plan and conduct the evaluation. Although this staff person should have the skills necessary to competently coordinate evaluation activities, he or she can choose to look elsewhere for technical expertise to design and implement specific tasks.
However, developing in-house evaluation expertise and capacity is a beneficial goal for most public health organizations. The lead evaluator should be willing and able to draw out and reconcile differences in values and standards among stakeholders and to work with knowledgeable stakeholder representatives in designing and conducting the evaluation.
Seek additional evaluation expertise in other programs within the health department or through external partners. You can also use outside consultants as volunteers, advisory panel members, or contractors. External consultants can provide high levels of evaluation expertise from an objective point of view.
Important factors to consider when selecting consultants are their level of professional training, experience, and ability to meet your needs. Be sure to check all references carefully before you enter into a contract with any consultant. To generate discussion around evaluation planning and implementation, several states have formed evaluation advisory panels. Advisory panels typically generate input from local, regional, or national experts who would otherwise be difficult to access.
Such an advisory panel will lend credibility to your efforts and prove useful in cultivating widespread support for evaluation activities. Evaluation team members should clearly define their respective roles. For some teams, informal consensus may be enough; others prefer a written agreement that describes who will conduct the evaluation and assigns specific roles and responsibilities to individual team members.
Either way, the team must clarify and reach consensus on these roles and responsibilities before the evaluation begins. This manual is organized by the six steps of the CDC Framework. Each chapter will introduce the key questions to be answered in that step, approaches to answering those questions, and how the four evaluation standards might influence your approach.
The main points are illustrated with one or more public health examples that are composites inspired by actual work being done by CDC, states, and localities. One such example is an affordable home ownership program in which volunteers work alongside a participating family. Together, they build a house over a multi-week period. At the end of the construction period, the home is sold to the family using a no-interest loan.

Another example concerns childhood lead poisoning, the most widespread environmental hazard facing young children, especially in older inner-city areas. Even at low levels, elevated blood lead levels (EBLLs) have been associated with reduced intelligence, medical problems, and developmental problems.
The main sources of lead poisoning in children are paint and dust in older homes with lead-based paint. Public health programs address the problem through a combination of primary and secondary prevention efforts.
A typical secondary prevention program at the local level conducts outreach to and screening of high-risk children, identifies those with EBLLs, assesses their environments for sources of lead, and case manages both their medical treatment and environmental corrections. However, these programs must rely on others to accomplish the actual medical treatment and the reduction of lead in the home environment.
A common initiative of state immunization programs is comprehensive provider education programs to train and motivate private providers to provide more immunizations. A typical program includes a newsletter distributed three times per year to update private providers on new developments and changes in policy, and provide a brief education on various immunization topics; immunization trainings held around the state conducted by teams of state program staff and physician educators on general immunization topics and the immunization registry; a Provider Tool Kit on how to increase immunization rates in their practice; training of nursing staff in local health departments who then conduct immunization presentations in individual private provider clinics; and presentations on immunization topics by physician peer educators at physician grand rounds and state conferences.
Engaging in evidence-based research to support the viability of any program is acknowledged by funders to be vitally important to address such issues as accountability, credibility, and, of course, sustainability.
If program evaluation is, theoretically, seen as important, why do so few organizations engage in it? The difficulty might be associated with the perceived barriers to conducting such research—barriers that might include time, lack of willing personnel, or lack of knowledge of how to proceed. The purpose of this article is to provide an example of a program evaluation and, subsequently, explain clearly and concisely how program evaluation can be done in-house by existing personnel.
Specific procedures will be addressed that can be followed and replicated. The results of the program evaluation can be used to enhance, refine, publicize, or support the request for grants and awards.
However, the evaluator should have a knowledge of basic descriptive statistics. Many undergraduate and graduate programs incorporate at least one research course that usually includes a module on statistics, or they require the completion of a statistics course as a prerequisite or co-requisite to enrolling in the research course.
This exposure to statistical and research methodology should provide a foundation for the evaluator to begin. Gibbs, too, stated that data collection need not be elaborate and time consuming. Unrau, Gabor, and Grinnell maintain that outcome evaluation is a practical activity. The process begins with determining the research questions, then reviewing the literature to support or refute the research question (evidence-based research) and to investigate relevant techniques that have proven reliability and validity.
Research-based literature can be found through many search engines. The agency can edit an in-house measure to conform to a Likert scale (for example: very frequently, somewhat frequently, occasionally, not at all) for ease of measurement.
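As a rough illustration of that coding step, the short Python sketch below shows how Likert-style categories might be converted into numeric item scores that can then be summed or averaged. The response labels follow the example above, but the 0-3 coding and the sample answers are assumptions for illustration; the article does not prescribe numeric values.

```python
# A minimal sketch (not the article's actual instrument) showing how an agency
# might convert Likert-style responses into numeric scores for analysis.
# The 0-3 coding below is an assumption; any consistent ordered coding works.

LIKERT_CODES = {
    "not at all": 0,
    "occasionally": 1,
    "somewhat frequently": 2,
    "very frequently": 3,
}

def score_responses(responses):
    """Convert one participant's list of Likert answers into numeric item scores."""
    return [LIKERT_CODES[answer.strip().lower()] for answer in responses]

def total_score(responses):
    """Sum the item scores to get a single scale score for the participant."""
    return sum(score_responses(responses))

# Example: one participant's answers on a hypothetical four-item measure.
answers = ["very frequently", "occasionally", "somewhat frequently", "not at all"]
print(score_responses(answers))  # [3, 1, 2, 0]
print(total_score(answers))      # 6
```

Scoring each participant the same way before and after the program is what makes the pre-test/post-test comparison described next possible.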
Results can then be evaluated through the use of descriptive measures that report on the pre-test and post-test means. If the agency wishes to examine anecdotal information regarding the program, open-ended questions can be designed to analyze specific content and themes important to the agency.
This qualitative research can both reinforce what aspects of the program are successful and indicate what may need to be modified for future participants.

We conducted a program evaluation of an eight-week educational peer support program in Pennsylvania.
Quantitatively, a pre-test/post-test design was used. In an effort to maximize the validity of the responses, the Parent Version of the Children's Depression Inventory (CDI; Kovacs) was also distributed by the program director and completed by the caregivers prior to the onset of the program. A parent or guardian completed an informed consent form to participate in this study.
The post-test was administered one month after the completion of the bereavement group. The caregivers were asked to complete it and return it in a provided, agency-addressed, postage-paid envelope. To conduct the qualitative component of the assessment, we met with nine of 14 randomly chosen families during a two-month period.
These scheduled interviews took place at the family residences in various towns and villages in central Pennsylvania. Questions that we developed were asked of nine caregivers and 13 children.
Answers, both written and audio-recorded, were collected to gather anecdotal information regarding specific aspects of the bereavement program. We analyzed these responses for content and themes. Both the grief reaction scale and the depression inventory produced positive results when comparing the pre- and post-test responses.
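The article reports only descriptive comparisons of this kind. As a sketch of what that computation could look like, the Python snippet below computes pre- and post-test group means and the average change per participant; the scores are invented for illustration and are not the study's data.

```python
# A minimal sketch, with invented scores, of the descriptive pre/post comparison
# described above: compute group means before and after the program and the
# average change per participant. Not the study's actual data or instrument.

def mean(values):
    return sum(values) / len(values)

# Hypothetical paired scores (same participants, same order) on a symptom scale
# where lower scores indicate improvement.
pre_scores  = [18, 22, 15, 27, 20, 16, 24]
post_scores = [14, 19, 15, 21, 17, 12, 20]

pre_mean = mean(pre_scores)
post_mean = mean(post_scores)
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]

print(f"Pre-test mean:  {pre_mean:.2f}")
print(f"Post-test mean: {post_mean:.2f}")
print(f"Mean change:    {mean(changes):+.2f}")  # negative = average improvement
```

An agency replicating this in-house would simply substitute its own scored responses for the two lists and report the resulting means alongside the qualitative themes.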