ICU units are always busy, but during the COVID-19 pandemic demand for ICU services has been exceptionally high. Data-driven decision-making is more important than ever: it can help guide therapy and inform staffing and triage decisions, supporting the best possible care for patients in need.
There has been promising progress with machine learning (ML) in predicting clinical outcomes. However, ML models are often based on single-task learning, predicting only one specific adverse event such as organ dysfunction or the need for a life-support intervention. It would be far more beneficial to train multi-task models that account for multiple competing risks and the interdependencies between organ systems when predicting outcomes in realistic settings.
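The contrast between single-task and multi-task prediction can be illustrated with a minimal sketch: a shared encoder feeds one small prediction head per task, so every task's risk score is computed from the same learned representation. All dimensions, task names, and weights below are illustrative placeholders, not from the paper, and the network is untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not from the paper).
n_features, n_hidden = 16, 8
tasks = ["aki", "mortality", "ventilation"]

# One shared encoder plus a small prediction head per task.
W_shared = rng.normal(size=(n_features, n_hidden))
heads = {t: rng.normal(size=(n_hidden, 1)) for t in tasks}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_all(x):
    """A single shared representation feeds every task head."""
    h = np.tanh(x @ W_shared)
    return {t: float(sigmoid(h @ w)) for t, w in heads.items()}

x = rng.normal(size=n_features)   # one patient-hour feature vector
risks = predict_all(x)
print(risks)                      # one risk score per task
```

A single-task model would train one such network per outcome in isolation; the multi-task version shares the encoder, which is where cross-task learning (and, potentially, negative transfer) comes from.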
Google AI proposes a multi-task learning (MTL) architecture called SeqSNR that better captures the complexity of realistic clinical settings. Sequential Sub-Network Routing (SeqSNR) is designed around flexible parameter sharing and routing, which encourages cross-learning between related tasks. Google researchers applied this framework to continuous adverse-event prediction in an ICU setting with success: it outperforms single-task and naïve multi-task baselines, especially in low-training-data scenarios.
In this study, the team used MIMIC-III, a freely available EHR database covering 36,498 adults treated at Beth Israel Deaconess Medical Center between 2001 and 2012. They mapped the records to the FHIR (Fast Healthcare Interoperability Resources) standard and gathered vital signs and other measurements to learn about patients admitted for critical care treatment.
The model was tasked with predicting the onset of adverse events within 24–48 hours, for every hour after a patient's admission to the ICU. The prediction targets included acute kidney injury (AKI), continuous renal replacement therapy (CRRT) dialysis, administration of vasopressors and inotropes, mechanical ventilation (MV), mortality, and remaining length of stay (LoS).
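This kind of sliding-window labeling can be sketched as follows: for each hourly prediction time, the label is 1 if the event begins within the 24–48 hour window after that time. The exact window boundaries and inclusivity here are assumptions for illustration, as are the timestamps; the paper's precise labeling rules may differ.

```python
from datetime import datetime, timedelta

def onset_labels(pred_times, event_time, lo_h=24, hi_h=48):
    """For each hourly prediction time, label 1 if the adverse event
    begins within the [lo_h, hi_h) hour window after that time."""
    labels = []
    for t in pred_times:
        if event_time is None:        # event never occurred for this stay
            labels.append(0)
            continue
        delta = (event_time - t).total_seconds() / 3600.0
        labels.append(1 if lo_h <= delta < hi_h else 0)
    return labels

admit = datetime(2012, 1, 1, 0, 0)
hours = [admit + timedelta(hours=h) for h in range(10)]
aki_onset = admit + timedelta(hours=30)   # hypothetical AKI onset time

print(onset_labels(hours, aki_onset))  # → [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
```

Running one such labeling pass per target (AKI, CRRT, vasopressors, MV, mortality) produces the multi-task training signal, one binary label per task per patient-hour.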
Multi-task learning provides an effective way to capture the interdependencies between organ systems and balance competing risks. In practice, however, jointly trained tasks can impair one another through negative transfer, an effect mitigated by SeqSNR's modular subnetworks, which automatically optimize how information is shared across tasks.
SeqSNR is an architecture that combines a recurrent neural network (RNN) with deep embedding layers. Its modular design minimizes negative transfer by ensuring that information irrelevant to a given task is filtered out before reaching that task's layers.
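The routing idea can be sketched in miniature: a pool of small subnetworks is shared by all tasks, and each task holds its own routing weights that decide how much of each subnetwork's output it consumes. Everything below is a simplified, untrained stand-in for SeqSNR (no recurrence, random weights, made-up sizes), intended only to show how per-task routing lets tasks share modules in different proportions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: a pool of small subnetworks shared across tasks,
# with per-task routing weights (random here, learned in practice).
n_in, n_hid, n_subnets = 12, 6, 4
tasks = ["aki", "mortality"]

subnets = [rng.normal(size=(n_in, n_hid)) for _ in range(n_subnets)]
routing_logits = {t: rng.normal(size=n_subnets) for t in tasks}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def route(x, task):
    """Mix subnetwork outputs using the task's routing distribution,
    so each task draws on the shared modules in its own proportions."""
    outs = np.stack([np.tanh(x @ W) for W in subnets])  # (n_subnets, n_hid)
    weights = softmax(routing_logits[task])             # sums to 1
    return weights @ outs                               # (n_hid,)

x = rng.normal(size=n_in)
h_aki = route(x, "aki")
h_mort = route(x, "mortality")
print(h_aki.shape)  # same hidden size for every task, different mixture
```

Because a task's routing weights can down-weight a subnetwork almost to zero, irrelevant information is effectively filtered out of that task's pathway, which is the mechanism behind the reduced negative transfer described above.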
SeqSNR showed consistent performance improvements over single-task and naïve multi-task baselines across all scenarios. The largest gains appeared when training labels were scarce, the setting in which sharing information across tasks matters most and SeqSNR therefore had the most room for improvement.
This work explores the use of deep learning on EHRs to predict a set of canonical adverse-event tasks. The code is open source and publicly available for download here, in the hope that it will stimulate further research and development in this field inspired by clinical reasoning.