From the 1/29/2021 newsletter
Director’s Corner
Educationally Sensitive Patient Outcomes (ESPOs): The Holy Grail for Transformation of Medical Education Research
Adina Kalet, MD MPH
In this Director’s Corner, Dr. Kalet introduces our community to ESPOs and reports on the Kern Institute’s first Invitational International Conference on Medical Education and Patient Outcomes …
Medical Education Research is a young science. When I started out, this work was mostly done by front-line (and very busy) medical educators working with a few learning-scientist colleagues. We studied how the structure and process of learning – and individual characteristics of learners and instructional designs – affected individual student learning outcomes. Most studies were small, single-institution, and predominantly cross-sectional projects that had no comparisons or controls; not surprisingly, our literature was roundly criticized for a lack of rigor and for focusing on questions that had “marginal significance for actual practice” (Ouch!). In retrospect, these criticisms were correct and, by the way, who wants to do “marginally significant” work? We knew we could do better.
Many experts in the field vehemently pushed back, though, insisting that doing better was not possible, that there wasn’t enough funding, that studying a physician’s impact on health was too complex, and that rigorous studies would take the “art” out of the practice. I disagreed with these experts and, as you know, I love a good challenge.
About ten years ago, my colleagues and I wrote a paper calling on medical education researchers to step up our game. We argued that it was possible to identify learnable, teachable, and measurable “intermediate outcomes” of medical education that were directly linked to good outcomes for patients and populations. We called these Educationally Sensitive Patient Outcomes (ESPOs) and proposed three measurable parameters: Patient Activation, Microsystem Activation, and Health Literacy (Kalet, 2010; Yin, 2015). We then set to work to systematically test the theory. It was slow going.
When you publish a paper, you are joining a conversation. You take turns, building on each other’s work, critiquing and debating. If you are lucky, this leads to “cold calls,” a cup of coffee at a professional meeting and, before you know it, you have friends all over the world who share your worldview!
After a decade of ESPO conversations, the Kern Institute sponsored the first of what we hope will be many Invitational Medical Education and Patient Outcomes Conferences this past Monday. Zoom enabled fifteen of us, representing three distinct research teams, three countries (US, Canada, and the Netherlands), and nine institutions including the Medical College of Wisconsin, to meet for three hours to share our work, cross-fertilize ideas, and seek opportunities to build a “collaboratory.”
Here are the stories these folks shared.
Resident Sensitive Quality Measures (RSQMs)
Can we measure how well resident physicians provide care to children in the pediatric emergency room using data from the electronic health record (EHR)? Daniel Schumacher, a Pediatric Emergency Medicine physician at Cincinnati Children’s Hospital, and his team thought it was worth trying. After working with residents and supervising attendings to identify which data in the EHR actually reflect the resident’s contribution to patient care in three specific clinical situations (asthma, bronchiolitis, and closed head injury), he conducted a series of elegant studies to define these RSQMs. MCW’s own Pediatric Emergency Medicine physician, Abigail Schuh, and I are on the team funded by the National Board of Medical Examiners to “validate” his RSQM model using actual clinical data from MCW and NYU (Smirnova, 2019). The team’s leading data analyst, Saad Chahine, PhD, Associate Professor of Measurement and Assessment at the Faculty of Education, Queen’s University, Ontario, presented early findings. The RSQM model is looking promising. It is likely to generate compelling and motivating real patient outcome data that can be fed back to our residents and program directors, helping ensure these pediatricians will be ready for independent practice.
Measuring Resident Operative Performance
Not surprisingly, some near-graduate surgery residents are not yet ready for unsupervised practice. Surgeon Brian C. George, MD, at the University of Michigan and his team, including data scientist Andrew Krumm, PhD, Assistant Professor of Learning Health Sciences, have been studying the value of just-in-time operative performance assessments collected frictionlessly using a smartphone-based software application (SIMPL). Brian runs the Center for Surgical Training and Research and serves as the Executive Director of the Society for Improving Medical Professional Learning (SIMPL), an international collaborative of 129 surgery training programs (Williams, 2017). Through this network, he has collected huge numbers of directly observed measures of resident performance during procedures that surgeons agree are important – appendectomy, inguinal hernia repair, cholecystectomy, and colectomy, among others – and can link these measures to Medicare insurance claims data to assess complications and outcomes. With the team’s evolving data analytic sophistication, predictions can be made that help us better educate proceduralists to master their craft.
Database for Research in Education in Academic Medicine (DREAM)
In 1948, researchers enrolled 5,209 adults from Framingham, MA, in a heart disease risk study and have been following them and their offspring ever since. Almost everything we know about preventing heart attacks and strokes has emerged from this Framingham Heart Study. As internists, Dr. Sondra Zabar, the 2020 AAMC Abraham Flexner award winner (I am boasting for my friend), and I were inspired by the study team’s “stick-to-it-iveness”! In 2004, the Program for Medical Education Innovations and Research (PrMEIR) at NYU School of Medicine started our own “Framingham-like study” of medical education. Every year, our team seeks medical students’ permission to collect all their admissions, assessment, and survey data from entry to medical school through residency training and into practice. Over the years, most agree (~85%), and almost 3,000 students and residents have been enrolled. So far, this Database for Research in Education in Academic Medicine (DREAM) has enabled over seventy-five studies (Gillespie, 2016). Colleen Gillespie, PhD, Associate Professor and Director of Education Quality for the Institute for Innovation in Medical Education at NYU, talked about how these longitudinal data can be used to build individual and aggregate “learning curves,” showing how, for instance, clinical communication skills develop over the course of medical school and residency, are predicted by admissions data, influenced by curriculum, and related to outcomes among the patients these physicians care for early in practice. I’m not sure we will be doing this for the next fifty-six years ourselves but, with some luck, others will.
Why this is important
These types of studies carry implications far beyond the walls of academic medicine. As we work to transform medical education and help nurture the development of character-driven practicing physicians, we must study the impact of educational innovations and make certain we are studying outcomes that matter.
This work carries societal implications. Currently, US taxpayers invest well over $15 billion each year in resident education via Direct Graduate Medical Education (DGME) and Indirect Medical Education (IME) support, primarily through Medicare and the Department of Veterans Affairs. We need to understand how best to maximize the country’s return on this investment.
This work carries personal implications, as well. Students take on enormous personal debt and invest the prime years of their lives to pursue their careers. Are our educational interventions effective in helping them take control of their education and making certain they are ready, healthy, and able to safely enter practice?
Next Steps
As you can imagine, the conversation was lively and could have gone on for a long time if not for Zoom fatigue. The data scientists (including Kern’s own Tavinder Ark, PhD) converged on the potential to aggregate various approaches to data analysis and on how recursive systems could feed data back to learners and educational leaders, enabling real, virtuous cycles of learning. The medical educators talked about the value of knowing what really matters to patients, both in terms of their health outcomes and their experiences of care. And, of course, we considered the unintended downsides of using clinical and insurance claims data to measure the “quality” of our novice physicians. While the risks are real, the benefits to patients and learners far outweigh them if this work is done with a growth mindset, with careful deliberation, and within ethical guardrails.
We resolved to meet as a group at least one more time and then consider conducting a symposium open to the larger community. The Kern Institute will continue to convene small and large groups of medical educators and scholars to “lift” the work and identify promising avenues for transformation. Stay tuned!
For further reading:
Kalet, AL, Gillespie, CC, Schwartz, MD, Holmboe, ES, Ark, TK, Jay, M, ... & Gourevitch, MN (2010). New measures to establish the evidence base for medical education: Identifying educationally sensitive patient outcomes. Academic Medicine, 85(5), 844-851.
Yin, HS, Jay, M, Maness, L, Zabar, S, & Kalet, A (2015). Health literacy: An educationally sensitive patient outcome. Journal of General Internal Medicine, 30(9), 1363-1368.
Smirnova, A, Sebok-Syer, SS, Chahine, S, Kalet, AL, Tamblyn, R, Lombarts, KM, ... & Schumacher, DJ (2019). Defining and adopting clinical performance measures in graduate medical education: Where are we now and where are we going? Academic Medicine, 94(5), 671-677.
Williams, RG, George, BC, Meyerson, SL, Bohnen, JD, Dunnington, GL, Schuller, MC, ... & Collaborative, S (2017). What factors influence attending surgeon decisions about resident autonomy in the operating room? Surgery, 162(6), 1314-1319.
Gillespie, C, Zabar, S, Altshuler, L, Fox, J, Pusic, M, Xu, J, & Kalet, A (2016). The Research on Medical Education Outcomes (ROMEO) Registry: Addressing ethical and practical challenges of using “bigger,” longitudinal educational data. Academic Medicine, 91(5), 690-695.
Adina Kalet, MD MPH is the Director of the Robert D. and Patricia E. Kern Institute for the Transformation of Medical Education and holder of the Stephen and Shelagh Roell Endowed Chair at the Medical College of Wisconsin.