ICRE recently connected with Elaine Van Melle, PhD, Education Researcher at Queen’s University and CanMEDS Education Scientist at the Royal College of Physicians and Surgeons of Canada, to discuss how program directors and medical educators can better evaluate educational innovations in their training programs.
What are some of the common challenges program directors and medical educators encounter when trying to evaluate educational innovations?
The first challenge is to build the evaluation in up front. Typically, an evaluation is left until the end, well after the innovation has been created and implemented. But a solid evaluation relies upon a thorough needs assessment, a clear description of how the innovation works, and well-defined short-, medium- and long-term goals. And so as soon as a medical educator thinks “we need to try something new,” they should start thinking about preparing for the evaluation. It’s never too early to start!
A second challenge lies in creating a good question. “Does IT work?” is the question medical educators often start with. But what exactly is “IT”? As stated by my favourite evaluation expert, Michael Quinn Patton (2012): “Answering the IT question is deceptively challenging. Getting an answer requires first defining what it means for the program to work: that is, defining and measuring what desired outcomes and goals were achieved. The second part of unpacking the question is the IT itself: that is, what was the program or intervention? . . . The final and most difficult aspect of answering did IT work is connecting the implementation with outcomes.” (p. 179).
Patton goes on to present three fundamental evaluation questions:
- The implementation question: What happened in the program?
- The outcomes question: What resulted?
- The attribution question: Can what resulted be attributed to what was implemented?
These questions provide a good starting point for deciding on a meaningful and useful focus for an evaluation.
Reference: Patton MQ. (2012). Essentials of Utilization-Focused Evaluation. Thousand Oaks, CA: Sage Publications.

For those not familiar, what is the Kirkpatrick Model?
The Kirkpatrick Model was created by Dr. Don Kirkpatrick in the 1950s as part of his PhD dissertation on evaluating the effectiveness of training for supervisors. The Model originated in business training programs and is becoming a common conceptual framework for conducting evaluations in medical education.
The Kirkpatrick Model is based on the following four levels of outcome evaluation, with each level leading to the next:
- Level 1 (Reaction): how participants respond to the training
- Level 2 (Learning): the knowledge, skills or attitudes participants acquire
- Level 3 (Behaviour): how participants apply what they learned in practice
- Level 4 (Results): the impact of the training on the organization, patients or the health care system
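To make the hierarchy concrete, here is a minimal sketch pairing each level with an example measure for a hypothetical resident training innovation; the level names are standard, but the measures are illustrative assumptions rather than anything drawn from the interview.

```python
# A minimal sketch of the Kirkpatrick hierarchy. The level names are
# standard; the example measures are illustrative assumptions for a
# hypothetical resident training innovation.

kirkpatrick_levels = [
    ("Level 1 (Reaction)",  "post-session satisfaction survey"),
    ("Level 2 (Learning)",  "pre/post knowledge test scores"),
    ("Level 3 (Behaviour)", "workplace-based assessment of clinical practice"),
    ("Level 4 (Results)",   "patient-safety indicators on the unit"),
]

# Each level builds on the one before it: a change at Level 4 is only
# interpretable if the earlier levels have been demonstrated.
for level, example_measure in kirkpatrick_levels:
    print(f"{level} -> e.g., {example_measure}")
```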
Reference: Kirkpatrick DL, Kirkpatrick JD. (2006). Evaluating Training Programs (3rd ed.). San Francisco, CA: Berrett-Koehler Publishers.
What are some of the limitations of outcome-focused evaluations?
“Outcome-focused evaluations” is a deceptive term, since educational innovations can have many different types of outcomes. Generally, these outcomes have an associated time frame and can be categorized as short-, medium- or long-term.
Short-term outcomes usually relate to an increase in knowledge or a gain in skill or ability; medium-term outcomes involve changes in behaviour, such as improvements to clinical practice; and long-term outcomes relate to impacts on patient care or health care systems as a result of the innovation. There are two major criticisms of outcome-focused evaluations.
First, if the program is not yet stabilized and still in the developmental phase, then focusing on outcomes may not be timely. It might be better to first ask the question “Was the innovation implemented as intended?” Creating a solid understanding of how the program works paves the way for a much more meaningful evaluation of outcomes down the road.
Second, outcomes evaluation is usually equated with long-term outcomes, e.g., examining the impact of the innovation on patient care or health care systems. By the time these outcomes can be observed, however, the clinician will have had many different experiences, and so it is difficult to make a clear link between the innovation and long-term outcomes.
Reference: Cook DA, West CP. (2013). Reconsidering the focus on “outcomes research” in medical education: A cautionary note. Academic Medicine 88(2): 1-6.

How can educators move beyond this?
A logic model is a simple but powerful tool to help medical educators design, implement and evaluate an educational innovation. By providing a framework for organizing the components of an innovation (its inputs, activities, outputs and outcomes), a logic model helps to determine the focus of an evaluation.
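Since the original illustration is not reproduced here, the following minimal sketch shows how a logic model might organize the components of a hypothetical educational innovation. The component categories follow common logic-model conventions, but every entry, including the simulation-based handover example and the evaluation_focus helper, is an illustrative assumption rather than Van Melle’s own model.

```python
# A minimal sketch of a logic model for a hypothetical educational
# innovation (a simulation-based handover curriculum). The component
# categories are standard logic-model conventions; every entry is an
# illustrative assumption, not part of the interview.

logic_model = {
    "inputs": ["faculty time", "simulation lab", "curriculum funding"],
    "activities": ["design handover scenarios", "run monthly simulation sessions"],
    "outputs": ["residents trained per block", "sessions delivered"],
    "short_term_outcomes": ["improved handover knowledge and skill"],
    "medium_term_outcomes": ["changed handover behaviour on the wards"],
    "long_term_outcomes": ["fewer handover-related adverse events"],
}

def evaluation_focus(model: dict, program_is_stable: bool) -> list[str]:
    """Pick which components an evaluation should examine first.

    Echoes the point above: a program still in its developmental phase
    is better served by the implementation question ("Was the innovation
    implemented as intended?") than by an outcomes question.
    """
    if not program_is_stable:
        return model["activities"] + model["outputs"]
    return model["short_term_outcomes"] + model["medium_term_outcomes"]

# A new program: focus the evaluation on implementation first.
print(evaluation_focus(logic_model, program_is_stable=False))
```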
This topic will be examined in depth at an ICRE pre-conference workshop on September 26, 2013. Register online today!
Very interesting! We normally say this, but in the end we leave the evaluation until last and stay focused on the innovation without evaluating it.