Improving Clinical Decision-Making with Big Data

An Interview with Peter Szolovits, MIT CSAIL


Doctors, nurses and other healthcare professionals have always had to “read” and respond quickly to often-imperfect data under stressful circumstances. What has changed over time is the volume and types of data that can be digitized and put to work in clinical decision-making. This creates both new opportunities to improve and save patients’ lives and new challenges for technologists.

At the ISTC, we’ve been privileged to work with Professor Peter Szolovits, head of the Clinical Decision Making Group at MIT CSAIL. Professor Szolovits is a pioneer and expert in the application of AI methods to problems of medical decision-making, natural language processing (NLP) to extract meaningful data from clinical narratives, and the design of information systems for health care institutions and patients.

We recently interviewed Professor Szolovits about how Big Data can improve clinical decision-making.

What’s been the focus of your ISTC-backed research?  

First, improving predictive modeling of outcomes from data based on past patient cases. For example: We want to help clinicians estimate the likelihood of survival over various durations of time, the need for certain kinds of interventions, and the opportunity to become less aggressive in care, which usually carries risks as well as benefits. We have also been paying a lot of attention to learning the most appropriate abstractions of the raw data to improve these kinds of predictions.
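As a rough illustration of this kind of outcome prediction, here is a minimal sketch using synthetic data and scikit-learn’s logistic regression; the features, thresholds, and synthetic outcome are illustrative assumptions, not the group’s actual models.

```python
# Minimal sketch: predicting an adverse ICU outcome from simple tabular features.
# Synthetic data; the features and coefficients are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Illustrative features: age, heart rate, systolic BP, lactate
X = np.column_stack([
    rng.normal(65, 15, n),    # age (years)
    rng.normal(90, 20, n),    # heart rate (bpm)
    rng.normal(115, 25, n),   # systolic blood pressure (mmHg)
    rng.normal(2.0, 1.0, n),  # lactate (mmol/L)
])

# Synthetic outcome: risk rises with age and lactate, falls with blood pressure
logit = 0.03 * (X[:, 0] - 65) + 0.8 * (X[:, 3] - 2.0) - 0.02 * (X[:, 2] - 115) - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probability of the adverse outcome for held-out patients
risk = model.predict_proba(X_test)[:, 1]
print("AUROC:", round(roc_auc_score(y_test, risk), 3))
```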

Second, natural language processing to “understand” the content of doctors’ and nurses’ notes. Much of the detailed medical content of medical records is in such notes, and that information tends not to be duplicated in existing tabular records. NLP is challenging, however, and medical notes are rife with ambiguities, abbreviations, telegraphic contraction of content, and so on.
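As a toy illustration of why such notes are hard for NLP, the sketch below expands a few abbreviations with a hand-built dictionary; the abbreviations and their expansions are illustrative assumptions, and real notes need far more context-sensitive handling (for instance, “PT” can mean physical therapy, prothrombin time, or patient).

```python
# Toy sketch: expanding clinical abbreviations in a note with a small,
# hand-built dictionary. Real notes require context-sensitive disambiguation.
import re

ABBREVIATIONS = {          # illustrative examples only
    "sob": "shortness of breath",
    "htn": "hypertension",
    "bp": "blood pressure",
    "hx": "history",
    "c/o": "complains of",
}

def expand_abbreviations(note: str) -> str:
    def replace(match: re.Match) -> str:
        token = match.group(0)
        return ABBREVIATIONS.get(token.lower(), token)
    # Match whole alphabetic tokens, including slashed ones like "c/o"
    return re.sub(r"[A-Za-z]+(?:/[A-Za-z]+)?", replace, note)

note = "Pt c/o SOB, hx of HTN, BP 150/90."
print(expand_abbreviations(note))
# -> Pt complains of shortness of breath, history of hypertension, blood pressure 150/90.
```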

Lastly, scaling up these kinds of analyses to very large data sets.

Medical data has the Big Data trifecta of volume, variety, and velocity, and its application to clinical decision-making is a race against time. You, your colleagues, and your doctoral students have made a lot of progress in closing this gap with your research. Where are the big remaining challenges?

We have been good at creating techniques that at least partially address the analytical needs of clinical medicine. We are much less effective in figuring out how to inject these methods into the workflow of medical care so that we can actually see whether our decision-support applications make a real difference in how patients are treated and in improving their outcomes.

Since you joined the MIT faculty as a computer science professor in 1974, and since your fateful meeting with a group of doctors at Tufts/New England Medical Center, your interest has focused on building computer-aided advisory programs that “could help all doctors work as well as the best.” What’s the state of the art today? What’s next?

The focus today is much less on trying to get all doctors to “work as well as the best” and much more on reducing variance in their behavior. It appears that making the least-skilled doctors average is much more important than making the average doctor the best. However, as I mentioned before, most sophisticated decision support still has little impact on actual practice. The things that have been deployed are usually simple rule-based aids that make sure one is warned when ordering two drugs that have a bad interaction, or when some lab values are trending toward potential disaster. These are useful, but they integrate little of the wealth of data actually available to doctors and nurses in the course of patient care.

“The focus today is much less on trying to get all doctors to ‘work as well as the best’ and much more on reducing variance in their behavior. It appears that making the least-skilled doctors average is much more important than making the average doctor the best.”
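The simple rule-based aids mentioned in the answer above are conceptually as modest as the sketch below, which checks a new order against a tiny table of known bad drug pairs; the pairs and the alert text are illustrative assumptions, not any vendor’s actual rules or clinical guidance.

```python
# Minimal sketch of a rule-based aid: warn when a newly ordered drug has a
# known bad interaction with a drug already on the patient's medication list.
# The interaction table is illustrative, not clinical guidance.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"sildenafil", "nitroglycerin"}): "severe hypotension",
}

def check_new_order(current_meds: list[str], new_drug: str) -> list[str]:
    warnings = []
    for med in current_meds:
        pair = frozenset({med.lower(), new_drug.lower()})
        if pair in KNOWN_INTERACTIONS:
            warnings.append(f"{new_drug} + {med}: {KNOWN_INTERACTIONS[pair]}")
    return warnings

print(check_new_order(["Warfarin", "metoprolol"], "Aspirin"))
# -> ['Aspirin + Warfarin: increased bleeding risk']
```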

Machine learning, or ML, is a very hot topic today across all industries. How will ML help improve clinical decision-making and health care in the future?

We’re getting better at exploiting regularities in the world to be able to make more and more accurate predictions. The residual inaccuracies come less and less from weaknesses in our methods and more from the fact that in domains as complex as medicine, patients’ responses to care are in fact not nearly 100% predictable. To improve on that, biology and medicine will need to make further advances that uncover the mechanisms underlying why people get sick, how their diseases progress, how they respond to treatments, and so on. Unfortunately, most of our predictive models operate at the level of surface phenomena, so we get relatively little insight into these mechanisms from looking at the experience of large groups of patients.

What other evolving technologies—for example, the Internet of Things—will promote better biomedical informatics and clinical decision-making?

Better sensors and continuous monitoring should help, and we joke about the “walking ICU,” where a patient leading his or her daily life will be as well instrumented by non-invasive tools as current ICU patients. However, the jury is still out on just how useful that information will be, though there is some promising anecdotal evidence that it can help in individual cases. Unfortunately, IoT today is full of security holes, so until we do a much better job of assuring safety and confidentiality of the information being gathered—and especially any interventions being delivered—by this technology, it won’t be ready for prime time.

The MIMIC-II open dataset has played a prominent role in your group’s research over the years, as well as that of other researchers, including ISTC researchers. MIMIC-III is now available. What new opportunities does MIMIC-III open up for researchers?

MIMIC-III has about 50% more patient care episodes than its predecessor, and Roger Mark’s group has done a good job of designing its structures to remain usable as they incorporate further data, not only from BIDMC but also from other institutions. The more data we have, the more it is possible to identify patients in the historical records who are very similar to a patient under current consideration. That in turn makes it possible to predict possible outcomes for interventions more accurately.

“The more data we have, the more it is possible to identify patients in the historical records who are very similar to a patient under current consideration. That in turn makes it possible to predict possible outcomes for interventions more accurately.”
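As a rough illustration of that patient-similarity idea, here is a minimal sketch using scikit-learn’s nearest-neighbor search over synthetic, standardized feature vectors; the features, the synthetic outcome, and the choice of 50 neighbors are illustrative assumptions, not how similarity search over MIMIC-III is actually done.

```python
# Minimal sketch: find historical patients most similar to a new patient by
# nearest-neighbor search over standardized feature vectors, then form a
# crude outcome estimate from those neighbors. Synthetic data throughout.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
historical = np.column_stack([
    rng.normal(65, 15, 10_000),    # age (years)
    rng.normal(90, 20, 10_000),    # heart rate (bpm)
    rng.normal(1.2, 0.6, 10_000),  # creatinine (mg/dL)
])
outcomes = rng.random(10_000) < 0.2   # synthetic binary outcome

scaler = StandardScaler().fit(historical)
index = NearestNeighbors(n_neighbors=50).fit(scaler.transform(historical))

new_patient = np.array([[72, 110, 2.4]])
_, neighbor_ids = index.kneighbors(scaler.transform(new_patient))

# Fraction of the most similar historical patients who had the outcome
print("estimated risk:", outcomes[neighbor_ids[0]].mean())
```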

You were an early proponent of electronic health records, and your early research pioneered approaches such as web-based access to EHRs, aggregation of data from multiple institutions to provide a longitudinal view, and the idea of lifelong personal health care, including patients’ control of their own data. EHRs have had a spotty adoption curve, for a number of reasons. How can technology overcome the remaining obstacles?

Technology by itself is clearly not going to be able to overcome these obstacles, because I think most of them are institutional and policy-based. There is technical progress toward standards for interoperability such as FHIR, which is being adopted by the biggest EHR vendors. My colleagues at Children’s Hospital have done some wonderful work on app-store-like applications that can make data from such platforms available to clinicians and patients in meaningful ways, and organizations such as the Health Record Banking Alliance have been pushing individual control over health records.
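As a small illustration of the FHIR interoperability mentioned here, the sketch below retrieves a Patient resource over FHIR’s standard REST interface with Python’s requests library; the server base URL and patient ID are hypothetical placeholders, and a real deployment would also need authentication.

```python
# Minimal sketch: fetch a FHIR Patient resource over the standard REST API.
# The base URL and patient ID are hypothetical placeholders; real systems
# also require authentication (e.g., SMART on FHIR OAuth2 scopes).
import requests

FHIR_BASE = "https://fhir.example.org/r4"   # placeholder server
PATIENT_ID = "12345"                        # placeholder ID

response = requests.get(
    f"{FHIR_BASE}/Patient/{PATIENT_ID}",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
response.raise_for_status()

patient = response.json()
# A Patient resource carries demographics in standard fields
print(patient["resourceType"], patient.get("birthDate"), patient.get("gender"))
```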

Unfortunately, recent efforts such as Google Health and Microsoft HealthVault, which adopted some of the goals I have been arguing for, were essentially failures, probably because they were unable to integrate well into the clinical workflows through which clinicians actually interact with patient data. However, my hope springs eternal!

*****

You can learn more about the work of Professor Szolovits, his colleagues, and his students in these research papers:

“Predicting Intervention Onset in the ICU with Switching State Space Models.” Marzyeh Ghassemi, Mike Wu, Michael C. Hughes, Peter Szolovits, Finale Doshi-Velez. To appear in Proceedings of the AMIA Summit on Clinical Research Informatics (CRI), 2017.

“Uncovering Voice Misuse Using Symbolic Mismatch.” Marzyeh Ghassemi, Zeeshan Syed, Daryush D. Mehta, Jarrad H. Van Stan, Robert E. Hillman, John V. Guttag. Proceedings of the 1st Machine Learning for Healthcare Conference (MLHC 2016), Los Angeles, CA, pp. 239–252. JMLR Workshop and Conference Track, Volume 56, 2016.

“MIMIC-III, A Freely Accessible Critical Care Database.” Johnson AEW, Pollard TJ, Shen L, Lehman L, Feng M, Ghassemi M, Moody B, Szolovits P, Celi LA, and Mark RG. Scientific Data, May 2016. DOI: 10.1038/sdata.2016.35.

“A Multivariate Timeseries Modeling Approach to Severity of Illness Assessment and Forecasting in ICU with Sparse, Heterogeneous Clinical Data.” Marzyeh Ghassemi, Marco A.F. Pimentel, Tristan Naumann, Thomas Brennan, David A. Clifton, Peter Szolovits, Mengling Feng. AAAI 2015, Austin, TX, January 2015

“Scaling the PhysioNet WFDB Toolbox for MATLAB and Octave.” Naumann, T., and Silva, I. Computing in Cardiology Conference (CinC 2014). IEEE.

“Unfolding Physiological State: Mortality Modeling in Intensive Care Units.” Marzyeh Ghassemi, Tristan Naumann, Finale Doshi-Velez, Nicole Brimmer, Rohit Joshi, Anna Rumshisky, Peter Szolovits. KDD 2014, August 2014.

 
