ICRE recently connected with our plenary lecturer David Cook, Professor of Medicine and Medical Education in the Mayo Clinic College of Medicine and Director of the Office of Education Research, to gain more insight into outcomes research and residency education.
You recently co-published a paper titled Reconsidering the Focus on “Outcomes Research” in Medical Education: A Cautionary Note. It identifies some of the potential limitations of outcomes research in medical education. Could you expand?
Yes, we identify five potential limitations in the article:
- Dilution: The impact of educational interventions becomes progressively attenuated as it filters through other health care providers and systems.
- Inadequate sample size: Many studies of educational interventions do not enroll enough students or trainees. Consider how many physicians (let alone medical students) investigators would need to enroll in order to study the effect on patient outcomes of teaching physicians about the benefits of metoprolol succinate in heart failure, ramipril in intermediate-risk patients, or clopidogrel in stroke prevention—benefits demonstrated in very large clinical trials. Because the effect of the educational intervention, even if successful, would be diluted, and the measurements would be imperfect, such a study would need either a very large sample or an intervention with a huge impact (a large effect size). Anything less would likely yield statistically non-significant findings (a rough power calculation illustrating this point follows this list).
- Failure to establish a causal link: Educators might expect a focus on patient outcomes to improve study rigor. Yet studies using patient outcomes often suffer from threats to both internal study validity (the absence of bias in the study findings) and external study validity (the meaningfulness of the findings to others). Such validity threats limit the conclusions we draw.
- Potentially biased outcome selection: Many studies thus far have selected the topic of study based on a readily accessible (easily measured) outcome. This may be fine for initial studies to show “proof of concept”, but isn’t a good long-term strategy. As the field moves beyond proof of concept, the continued selection of easy-to-measure outcomes (topics) to the exclusion of more difficult but equally important outcomes (topics) risks unjustifiable bias. Those engaged in patient outcomes research must ensure that the topics they study adequately reflect the entire curriculum.
- Teaching to the test: Too much attention to patient outcomes could lead curriculum designers to teach only those processes that unambiguously enhance patient care. Although seemingly sensible, this approach suffers from at least two shortcomings. First, clear evidence informs only a fraction of clinicians’ diagnostic and therapeutic decisions. Focusing primarily on practices with defined standards will mean that we don’t teach other potentially important topics. Second, focusing on evidence-based algorithmic approaches to management could backfire if learners fail to learn the principles that underlie such actions.
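For a concrete (and purely illustrative) sense of how dilution drives up the required sample size, here is a minimal power-calculation sketch. It is not from the article: the effect sizes are hypothetical, and it assumes Python with the statsmodels library installed.

```python
# Illustrative only: a rough power calculation showing how dilution of an
# educational effect inflates the required sample size.
# The effect sizes below are hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical standardized effect (Cohen's d) on trainee knowledge or skills.
direct_effect = 0.5
# Hypothetical diluted effect once the intervention filters through
# other providers and systems to reach patient outcomes.
diluted_effect = 0.1

for label, d in [("direct (knowledge/skill)", direct_effect),
                 ("diluted (patient outcome)", diluted_effect)]:
    # Solve for the number of participants per group at 80% power, alpha = 0.05.
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"{label}: ~{n_per_group:.0f} participants per group")

# With these made-up numbers, the diluted effect requires roughly 25 times as
# many participants per group (required n scales roughly with 1/d**2), which is
# why small educational studies of patient outcomes often come up non-significant.
```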
How can educators facilitate a proper balance between learner-centered and patient-centered assessments?
An excessive focus on patient-oriented outcomes will at best distract the education community from important research using other outcomes, and at worst it could adversely affect some aspects of health professions education. Research focused on patient-level outcomes is and will remain essential to evaluate medical education activities, but it should not be pursued at the expense of research using other outcomes. In our article we offer six pearls to guide the selection and analysis of outcomes and instruments in medical education research.
- First, rather than starting a research project by identifying a measure or tool (e.g., “hemoglobin A1c” or “the patient record”) and then designing the investigation around it, researchers should first clarify the study objective and conceptual framework, and then work forward from there to select the most relevant outcome and measurement method.
- Second, I’ve noticed that some investigators use the same words to mean different things. It would help to promote mutual understanding if educators remember the distinction between skills (provider actions in an artificial test setting), behaviors (provider actions with real patients, such as ordering tests, prescribing, procedural time, or procedural technique), and patient effects (Kirkpatrick’s level 4 “results”: the actual impact on patients, such as patient satisfaction, patient compliance, symptom control, complications, or test results).
- Third, researchers need to focus on establishing links between patient outcomes and other more accessible outcomes (such as skills in a test setting).
- Fourth, investigators should consider proceeding in a deliberately stepwise fashion as they test educational interventions: first assessing knowledge and skills, then behaviors, and finally patient outcomes.
- Fifth, investigators might consider selecting patient outcomes that result from the engagement of patients and the whole health care team (as proposed by Adina Kalet and colleagues).
- Finally, I’ve found that many studies make important statistical errors when analyzing patient outcomes. Whenever there is more than one patient outcome per trainee, the statistical analyses must account for this “clustering” of patients.
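To illustrate that last point (this sketch is not from the article), the example below simulates patients clustered within trainees and fits a random-intercept mixed model using Python's statsmodels library; the data, effect size, and variable names are all hypothetical.

```python
# Illustrative only: accounting for the clustering of patients within trainees
# using a random-intercept mixed model. Data and variable names are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_trainees, patients_per_trainee = 40, 20

# Simulate patient outcomes: each trainee contributes several patients, and
# outcomes from the same trainee are correlated (the source of clustering).
trainee = np.repeat(np.arange(n_trainees), patients_per_trainee)
intervention = (trainee < n_trainees // 2).astype(int)        # half of trainees trained
trainee_effect = rng.normal(0, 1.0, n_trainees)[trainee]      # shared per-trainee effect
outcome = 0.3 * intervention + trainee_effect + rng.normal(0, 1.0, trainee.size)

df = pd.DataFrame({"outcome": outcome,
                   "intervention": intervention,
                   "trainee": trainee})

# A mixed model with a random intercept per trainee accounts for the clustering,
# rather than treating every patient as an independent observation.
model = smf.mixedlm("outcome ~ intervention", data=df, groups=df["trainee"])
print(model.fit().summary())
```

Ignoring the clustering (for example, running an ordinary t-test across all patients) would understate the uncertainty in the estimated intervention effect, because patients cared for by the same trainee are not independent observations.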
Don’t miss ICRE 2013! Online registration is now open on the ICRE website, so book your attendance today for early bird rates.