
Decision Analysis for Healthcare Managers
Farrokh Alemi, PhD
David H. Gustafson, PhD

Chapter 10: Program Evaluation

Companion Items

Learning Tools
Download slides on evaluating programs with decision analysis
Listen to a narrated presentation on evaluating program outcomes
Listen to a narrated presentation on how satisfaction surveys can measure program outcomes
Listen to a narrated presentation on the use of time to dissatisfied customers as a method of program evaluation
Listen to a narrated presentation on evaluating programs with decision analysis
Websites of Interest
List of articles on program evaluations using decision analysis
List of studies on the controversial utility method of measuring quality-adjusted life years
Discussion of methods of program evaluation by Carter McNamara, Ph.D.
Tools and case studies for conducting program evaluations
Evaluation cookbook from the Learning Technology Dissemination Initiative, covering checklists, concept maps, confidence logs, cost-effectiveness, design experiments, ethnography, focus groups, interviews, nominal group technique, pre- and post-testing, questionnaires, split-screen video, supplemental observation, system log data, and trials
Details on evaluating collaboratives by the University of Wisconsin
A manager’s guide to evaluation by the Administration on Children, Youth, and Families
An evaluation resource library from the National Science Foundation
A handbook on evaluation
“Taking Stock: A Practical Guide to Evaluating Your Own Programs” by Horizon Research, Inc.
List of articles on Medicare and cost-effective analysis
Framework for program evaluation from the Centers for Disease Control and Prevention (CDC) includes six steps in program evaluation: (1) engage stakeholders, (2) describe the program, (3) focus the evaluation design, (4) gather credible evidence, (5) justify conclusions, and (6) ensure use and share lessons learned.
Self-study guides for program evaluation from the CDC
Workbook on program evaluation from the CDC
Additional Readings
Alemi, F., M. R. Haack, and S. Nemes. 2001. “Continuous Improvement Evaluation: A Framework for Multisite Evaluation Studies.” Journal of Healthcare Quality 23 (3): 26–33. Continuous quality improvement is another way of conducting program evaluation. It differs from decision-analytic program evaluations in several ways, including whether the program itself must remain constant during the evaluation. With continuous quality improvement, programs are improved as data are collected. In contrast, in most program evaluations data are, at least in theory, collected and analyzed before any program changes are attempted.
Carpinello, S. E., D. L. Newman, and L. L. Jatulis. 1992. “Health Decision Makers’ Perceptions of Program Evaluation: Relationship to Purpose and Information Needs.” Evaluation and the Health Professions 15 (4): 405–19. What do decision makers say they need in program evaluations? Many distinguish three types of evaluations: formative evaluation, which helps improve the program; summative evaluation, which helps decide whether a program should be continued; and pseudo-evaluation, in which programs are evaluated to satisfy some external demand for documentation of the program’s effects.
Neumann, P. J., A. B. Rosen, and M. C. Weinstein. 2005. “Medicare and Cost-Effectiveness Analysis.” New England Journal of Medicine 353 (14): 1516–22. The Medicare program does not explicitly conduct program evaluations to see which interventions it should reimburse. These authors discuss the advantages of explicit evaluation of treatment protocols covered in Medicare.
Weinberger, M., E. Z. Oddone, W. G. Henderson, D. M. Smith, J. Huey, A. Giobbie-Hurder, and J. R. Feussner. 2001. “Multisite Randomized Controlled Trials in Health Services Research: Scientific Challenges and Operational Issues.”