Spotlight on Health at NeurIPS 2021 

December 1, 2021

By Ian Gormely

Using AI to work towards better whole-life health is one of the four pillars of Vector’s Three Year Strategic Plan.

One notable paper from Vector Faculty Members Quaid Morris and Marzyeh Ghassemi showcases a method for creating predictive checklists, which are common decision aids in clinical settings, from data rather than from domain expertise alone. The method proposed in “Learning Optimal Predictive Checklists,” co-authored with Haoran Zhang and Berk Ustun, can create checklists in hours instead of months and offers a concrete metric through which the checklists can be evaluated.

Below are abstracts and simplified summaries for many of the accepted papers and workshops from Vector Faculty Members.

Read more about the work of Vector researchers at this year’s NeurIPS Conference here.

Health-related Conference Papers by Vector Faculty Members & Faculty Affiliates:

Characterizing Generalization under Out-of-Distribution Shifts in Deep Metric Learning

Timo Milbich, Karsten Roth, Samarth Sinha, Ludwig Schmidt, Marzyeh Ghassemi, Björn Ommer

Deep Metric Learning (DML) aims to learn representation spaces in which a predefined metric (such as Euclidean distance) relates to the semantic similarity of the input data in a way that allows samples from unseen classes to be clustered based on inherent similarity, even under semantic out-of-distribution shifts. However, standard benchmarks used to evaluate the generalization capabilities of different DML methods use fixed train and test splits and thus fixed train-to-test shifts. In practice, the shift at test time is not known a priori, so the default evaluation setting is insufficient to assess the practical usability of different DML methods. To address this, we propose a novel protocol that generates sequences of progressively harder semantic shifts for given train-test splits, allowing the generalization performance of DML methods to be evaluated under more realistic scenarios with different train-to-test shifts. Following that, we provide a thorough evaluation of conceptual approaches to DML and their benefits or shortcomings across train-to-test shifts of varying hardness, investigate links to structural metrics as potential indicators of downstream generalization performance, and introduce few-shot DML as a cheap remedy for consistently improved generalization under more severe OOD shifts.
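The abstract does not spell out the split-generation procedure, but the following is a minimal, hypothetical sketch of how a sequence of progressively shifted train/test class splits could be constructed, using distances between class prototypes as an illustrative hardness proxy. The prototype heuristic, the split_hardness measure, and the split construction are our assumptions, not the authors’ protocol.

```python
import numpy as np

def split_hardness(train_protos, test_protos):
    """Hardness proxy: mean distance from each test-class prototype
    to its nearest train-class prototype (larger = harder shift)."""
    d = np.linalg.norm(test_protos[:, None, :] - train_protos[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def progressive_splits(class_prototypes, n_test_classes, n_steps):
    """Build a sequence of train/test class splits by sliding the test set
    toward classes that lie farther from the data centroid, a simple proxy
    for increasingly severe semantic shift."""
    center = class_prototypes.mean(axis=0)
    order = np.argsort(np.linalg.norm(class_prototypes - center, axis=1))
    splits = []
    for step in range(n_steps):
        offset = int(step * (len(order) - n_test_classes) / max(n_steps - 1, 1))
        test_ids = order[offset:offset + n_test_classes]
        train_ids = np.setdiff1d(order, test_ids)
        h = split_hardness(class_prototypes[train_ids], class_prototypes[test_ids])
        splits.append({"train": train_ids, "test": test_ids, "hardness": h})
    return splits

# Toy usage: random prototypes for 20 classes in a 64-d embedding space.
protos = np.random.randn(20, 64)
for s in progressive_splits(protos, n_test_classes=5, n_steps=4):
    print(len(s["train"]), len(s["test"]), round(s["hardness"], 3))
```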

Continuous Latent Process Flows

Ruizhi Deng, Marcus A. Brubaker, Greg Mori, Andreas M. Lehrmann

Partial observations of continuous time-series dynamics at arbitrary time stamps exist in many disciplines. Fitting this type of data using statistical models with continuous dynamics is not only promising at an intuitive level but also has practical benefits, including the ability to generate continuous trajectories and to perform inference on previously unseen time stamps. Despite exciting progress in this area, the existing models still face challenges in terms of their representational power and the quality of their variational approximations. We tackle these challenges with continuous latent process flows (CLPF), a principled architecture decoding continuous latent processes into continuous observable processes using a time-dependent normalizing flow driven by a stochastic differential equation. To optimize our model using maximum likelihood, we propose a novel piecewise construction of a variational posterior process and derive the corresponding variational lower bound using trajectory re-weighting. Our ablation studies demonstrate the effectiveness of our contributions in various inference tasks on irregular time grids. Comparisons to state-of-the-art baselines show our model’s favourable performance on both synthetic and real-world time-series data.
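Schematically, and in our own notation, the model structure described in the abstract (a latent process driven by an SDE, decoded into observations by a time-dependent normalizing flow) can be summarized as follows. This is a hedged reading of the abstract, not the paper’s exact formulation.

```latex
% Latent process Z_t driven by an SDE; observations X_t produced by an
% invertible, time-dependent flow F conditioned on Z_t (O_t is a simple base process).
\[
  dZ_t = \mu_\theta(Z_t, t)\,dt + \sigma_\theta(Z_t, t)\,dW_t,
  \qquad
  X_t = F_\theta(O_t;\, Z_t, t),
\]
% The observation density then follows from the change-of-variables formula:
\[
  \log p(x_t \mid Z_t)
  = \log p_{O}\!\big(F_\theta^{-1}(x_t;\, Z_t, t)\big)
  + \log\left|\det \frac{\partial F_\theta^{-1}(x_t;\, Z_t, t)}{\partial x_t}\right|.
\]
```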

Grad2Task: Improved few-shot text classification using gradients for task representation

Jixuan Wang, Kuan-Chieh Wang, Frank Rudzicz, Michael Brudno

Pretraining Transformer-based language models on unlabeled text and then fine-tuning them on target tasks has achieved tremendous success on various NLP tasks. However, the fine-tuning stage still requires a large amount of labeled data to achieve good performance. In this work, we propose a meta-learning approach for few-shot text classification, where only a handful of examples are given for each class. During training, our model learns useful prior knowledge from a set of diverse but related tasks. During testing, our model uses the learned knowledge to better solve various downstream tasks in different domains. We use gradients as features to represent the task. Compared with fine-tuning and other meta-learning approaches, we demonstrate better performance on a diverse set of text classification tasks. Our work is an inaugural exploration of using gradient-based task representations for meta-learning.
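As a rough illustration of the “gradients as task features” idea, the hypothetical sketch below computes the gradient of a support-set loss with respect to a small classification head and flattens it into a fixed-length task vector. Which parameters to differentiate and how the resulting vector conditions adaptation are our assumptions, not the paper’s exact recipe.

```python
import torch
import torch.nn as nn

def gradient_task_embedding(encoder, head, support_x, support_y):
    """Illustrative task representation: the gradient of the support-set loss
    with respect to a lightweight head, flattened into one vector."""
    logits = head(encoder(support_x))
    loss = nn.functional.cross_entropy(logits, support_y)
    params = list(head.parameters())  # differentiate only the small head here
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.flatten() for g in grads])

# Toy usage with a stand-in encoder (a real setup would use a pretrained Transformer).
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
head = nn.Linear(64, 5)
x, y = torch.randn(8, 32), torch.randint(0, 5, (8,))
task_vec = gradient_task_embedding(encoder, head, x, y)
print(task_vec.shape)  # one fixed-length vector describing this task
```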

Learning Optimal Predictive Checklists

Haoran Zhang, Quaid Morris, Berk Ustun, Marzyeh Ghassemi

Checklists are commonly used decision aids in the clinical setting. One reason checklists are so effective is their simple form: they can be filled out in a couple of minutes, they do not require any specialized hardware to deploy (only a printed sheet), and, unlike black-box machine learning models, they are easily verifiable. However, the vast majority of current checklists are created by panels of experts using domain expertise. In this work, we propose a method to create predictive checklists from data. Creating checklists from data gives us a measurable evaluation criterion (i.e., a concrete metric we can use to evaluate checklists). It also allows for rapid model development: we can make checklists in a matter of hours, instead of waiting months for a panel of experts. Our method formulates checklist creation as an integer program that directly minimizes the error rate of the checklist. Crucially, our method also allows for the inclusion of customizable constraints (e.g., on checklist form, performance, or fairness), and yields insight into when a checklist is not an appropriate model for a particular task. We find that our method outperforms existing baselines, and we present two case studies to demonstrate its practical utility: 1) we train a checklist to predict mortality in ICU patients under group fairness constraints, and 2) we learn a short-form version of the PTSD Checklist for DSM-5 that is faster to complete while maintaining accuracy.
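For intuition, here is a minimal, hypothetical sketch of what such an integer program might look like, using the open-source pulp solver: binary variables select which items enter the checklist, an integer threshold M sets how many checked items trigger a positive prediction, and big-M constraints link item selections to per-sample error indicators. The variable names, the linking constraints, and the solver choice are our illustrative assumptions, not the authors’ exact formulation.

```python
import numpy as np
import pulp  # pip install pulp

def learn_checklist(X, y, max_items=5):
    """Hypothetical sketch: choose checklist items and a threshold M so that
    'at least M checked items' predicts positive, minimizing training errors."""
    n, d = X.shape
    prob = pulp.LpProblem("checklist", pulp.LpMinimize)
    use = [pulp.LpVariable(f"use_{j}", cat=pulp.LpBinary) for j in range(d)]
    err = [pulp.LpVariable(f"err_{i}", cat=pulp.LpBinary) for i in range(n)]
    M = pulp.LpVariable("threshold", lowBound=1, upBound=max_items, cat=pulp.LpInteger)
    B = d + 1  # big-M constant
    prob += pulp.lpSum(err)               # objective: number of misclassified samples
    prob += pulp.lpSum(use) <= max_items  # cap checklist length
    for i in range(n):
        score = pulp.lpSum(int(X[i, j]) * use[j] for j in range(d))
        if y[i] == 1:
            prob += score >= M - B * err[i]      # positives need >= M checked items
        else:
            prob += score <= M - 1 + B * err[i]  # negatives need < M checked items
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    items = [j for j in range(d) if use[j].value() > 0.5]
    return items, int(M.value())

# Toy usage on synthetic binary data.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(60, 8))
y = (X[:, 0] + X[:, 1] + X[:, 2] >= 2).astype(int)
print(learn_checklist(X, y, max_items=3))
```

Additional constraints of the kind the abstract mentions, such as caps on false negatives for a demographic group, would enter as further linear constraints over the same error variables.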

Medical Dead-ends and Learning to Identify High-risk States and Treatments

Mehdi Fatemi (Microsoft Research), Taylor W. Killian (University of Toronto / Vector Institute), Jayakumar Subramanian (Adobe Research, India), Marzyeh Ghassemi (Massachusetts Institute of Technology)

Patient-clinician interactions are inherently sequential processes where treatment decisions are made and adapted based on an expert’s understanding of how a patient’s health evolves. While RL has been shown to be a powerful tool for learning optimal decision strategies (learning what to do), guarantees for finding these solutions depend on the ability to experiment with possible strategies to collect more data. This type of exploration is not possible in a healthcare setting, which makes learning optimal strategies infeasible. In this work, we propose to invert the RL paradigm in data-limited, safety-critical settings and instead investigate high-risk treatments and patient health states. We train the algorithm to identify treatments to avoid, so as to keep the patient from irrecoverably negative health outcomes, defined as medical dead-ends. We apply this approach, Dead-end Discovery (DeD), to a real-world clinical task using the MIMIC-III dataset, treating critically ill patients who have developed sepsis. We establish the existence of dead-ends and demonstrate the utility of DeD, raising warnings that indicate when a patient or treatment carries elevated or extreme risk of encountering a dead-end and, thereby, death.
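As a rough, hypothetical illustration of the dead-end idea, the sketch below uses two value networks, one associated with negative outcomes and one with positive outcomes, and flags treatments whose estimates fall below fixed thresholds. The architecture, thresholds, and flagging rule are our assumptions, and the networks here are untrained toys rather than the authors’ trained DeD models.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small state-action value network used purely for illustration."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, state):
        return self.net(state)

def flag_risky_treatments(q_death, q_recovery, state, delta_d=0.1, delta_r=0.1):
    """Flag treatments whose value under the 'negative outcome' network is too
    low, or whose estimated chance of a positive outcome is too small."""
    with torch.no_grad():
        vd = q_death(state)     # trained toward -1 for trajectories ending in death
        vr = q_recovery(state)  # trained toward +1 for trajectories ending in recovery
    return (vd < -delta_d) | (vr < delta_r)

# Toy usage: one patient state, 25 candidate treatments.
state = torch.randn(1, 32)
q_d, q_r = QNet(32, 25), QNet(32, 25)
avoid = flag_risky_treatments(q_d, q_r, state)
print(avoid.sum().item(), "of 25 treatments flagged as high risk (untrained toy output)")
```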

Health-related workshops from Vector Faculty Members: 

Machine learning from ground truth: New medical imaging datasets for unsolved medical problems

Katy Haynes, Ziad Obermeyer, Emma Pierson, Marzyeh Ghassemi, Matthew Lungren, Sendhil Mullainathan, Matthew McDermott

This workshop will launch a new platform for open medical imaging datasets. Labeled with ground-truth outcomes curated around a set of unsolved medical problems, these data will deepen ways in which ML can contribute to health and raise a new set of technical challenges.
