Endless Summer School: Disentanglement: Domain Adaptation and Fairness

April 21 @ 10:00 am - 12:00 pm

Register

Available to Vector Institute sponsors only.

Disentanglement aims to develop structured representations, with different relevant factors of variation (e.g., age, gender, financial status, etc.) represented separately. This can facilitate transfer of representations between domains (e.g., data from different hospitals, or different groups of clients for a bank), and can improve generalization by eliminating “nuisance” information that is not relevant to the task. Disentanglement is also crucial for fairness—ensuring that a system does not disadvantage certain individuals or groups. In many application areas (including insurance, healthcare, advertising, and banking), we often want to obtain representations that are invariant with respect to a sensitive attribute (for example, we do not want to classify an individual as low or high credit risk based on their age).
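
To make the idea of a representation that is invariant to a sensitive attribute concrete, here is a minimal adversarial training sketch in PyTorch. This is our own illustration, not code from any of the talks; the toy dimensions, linear heads, and trade-off weight lam are placeholder assumptions.

```python
# Sketch: an encoder learns a representation z that supports the main task
# while an adversary tries (and is made to fail) to recover the sensitive
# attribute a from z. All sizes and architectures are toy placeholders.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
task_head = nn.Linear(8, 2)   # main prediction, e.g. low/high credit risk
adversary = nn.Linear(8, 2)   # tries to predict the sensitive attribute

opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, y, a, lam=1.0):
    # 1) adversary step: learn to predict a from a detached z
    opt_adv.zero_grad()
    adv_loss = ce(adversary(encoder(x).detach()), a)
    adv_loss.backward()
    opt_adv.step()
    # 2) encoder/task step: good task accuracy, poor adversary accuracy
    opt_main.zero_grad()
    z = encoder(x)
    loss = ce(task_head(z), y) - lam * ce(adversary(z), a)
    loss.backward()
    opt_main.step()
    return loss.item()
```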

SPEAKERS:

Elliot Creager, PhD Candidate – University of Toronto

Learning Fair and Disentangled Representations

Abstract: We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes. Taking inspiration from the disentangled representation learning literature, we propose an algorithm for learning compact representations of datasets that are useful for reconstruction and prediction, but are also “flexibly fair”, meaning they can be easily modified at test time to achieve subgroup demographic parity with respect to multiple sensitive attributes and their conjunctions. Time permitting, we will discuss practical issues with training disentanglement learners in realistic settings where the underlying factors of variation are correlated in the training data.
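
As a concrete reading of "subgroup demographic parity with respect to multiple sensitive attributes and their conjunctions," here is a small measurement sketch. It is our own illustration, not Creager et al.'s code; the binary attribute encoding and the helper name subgroup_parity_gaps are assumptions.

```python
# Sketch: demographic parity holds for a subgroup when its positive-prediction
# rate matches the overall rate; we check every conjunction of binary
# sensitive attributes, e.g. (age=1, gender=0).
import itertools
import numpy as np

def subgroup_parity_gaps(y_hat, attrs):
    """y_hat: 0/1 predictions, shape (n,); attrs: dict of name -> 0/1 array."""
    base_rate = y_hat.mean()
    gaps = {}
    names = list(attrs)
    for r in range(1, len(names) + 1):
        for subset in itertools.combinations(names, r):
            for values in itertools.product([0, 1], repeat=r):
                mask = np.ones(len(y_hat), dtype=bool)
                for name, v in zip(subset, values):
                    mask &= (attrs[name] == v)
                if mask.any():
                    gaps[tuple(zip(subset, values))] = abs(
                        y_hat[mask].mean() - base_rate)
    return gaps  # parity holds when every gap is close to zero
```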

Bio: Elliot is a PhD Candidate at the University of Toronto and the Vector Institute, where he is supervised by Richard Zemel. He works on a variety of topics within machine learning, especially in the areas of algorithmic bias and representation learning. He was previously an intern and student researcher at Google Brain in Toronto.
Twitter: @elliot_creager

David Madras, PhD Student – University of Toronto/Vector Institute

Fairness and Robustness in Invariant Learning: A Case Study in Toxicity Classification

Abstract: Robustness is of central importance in machine learning and has given rise to the fields of domain generalization and invariant learning, which are concerned with improving performance on a test distribution distinct from but related to the training distribution. In light of recent work suggesting an intimate connection between fairness and robustness, we investigate whether algorithms from robust ML can be used to improve the fairness of classifiers that are trained on biased data and tested on unbiased data. We apply Invariant Risk Minimization (IRM), a domain generalization algorithm that employs a causal-discovery-inspired method to find robust predictors, to the task of fairly predicting the toxicity of internet comments. We show that IRM achieves better out-of-distribution accuracy and fairness than Empirical Risk Minimization (ERM) methods, and analyze both the difficulties that arise when applying IRM in practice and the conditions under which IRM is likely to be effective in this scenario. We hope that this work will inspire further studies of how robust machine learning methods relate to algorithmic fairness.
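
For readers unfamiliar with IRM, the widely used IRMv1 objective of Arjovsky et al. (2019) adds to the average per-environment risk a penalty on the gradient of each environment's risk with respect to a fixed "dummy" classifier scale. Below is a minimal PyTorch sketch of that penalty; it is a generic illustration rather than the code used in this case study, and the penalty weight lam is a placeholder.

```python
# Sketch of the IRMv1 penalty: the squared gradient of an environment's risk
# with respect to a dummy scalar classifier w fixed at 1.0.
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    w = torch.tensor(1.0, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * w, y)
    (grad,) = torch.autograd.grad(risk, [w], create_graph=True)
    return grad.pow(2)

def irm_loss(per_env_logits, per_env_labels, lam=100.0):
    # ERM term (average risk over environments) plus the invariance penalty
    risks, penalties = [], []
    for logits, y in zip(per_env_logits, per_env_labels):
        risks.append(F.binary_cross_entropy_with_logits(logits, y))
        penalties.append(irm_penalty(logits, y))
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()
```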

Bio: David is a PhD student at the University of Toronto and the Vector Institute, supervised by Rich Zemel. His research focuses on improving machine learning systems for high-stakes decision-making. He is interested in mitigating unfairness and discrimination in machine learning systems, improving their robustness under distribution shift, and more broadly understanding the role of automated tools in larger decision-making processes.

Jake Snell, Postdoctoral Fellow – University of Toronto/Vector Institute

Learning Latent Subspaces in Variational Autoencoders

Abstract: Deep generative models are able to learn unsupervised latent representations of data, yet these representations are often difficult to interpret or control. In this talk, we consider the problem of learning representations correlated to specific labels in a dataset. We propose a generative model based on the variational autoencoder (VAE) which we show is capable of extracting features correlated to binary labels in the data and structuring them in a latent subspace which is easy to interpret. Our model, the Conditional Subspace VAE (CSVAE), uses mutual information minimization to learn a low-dimensional latent subspace associated with each label that can easily be inspected and independently manipulated. We demonstrate the utility of the learned representations for attribute manipulation tasks on both the Toronto Face and CelebA datasets. This is joint work with Jack Klys and Richard Zemel, published at NeurIPS 2018.
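
The following schematic shows the latent-space split described above, with an auxiliary classifier standing in for the mutual information term. It is our own simplification for intuition, not the published CSVAE implementation; all dimensions and the entropy-style penalty are assumptions.

```python
# Sketch: partition the latent space into a label-free part z and one small
# subspace w per binary label; penalize any information about the label that
# leaks into z, as approximated by an auxiliary classifier.
import torch
import torch.nn as nn

class SplitEncoder(nn.Module):
    def __init__(self, x_dim=64, z_dim=10, w_dim=2, n_labels=1):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU())
        self.to_z = nn.Linear(32, z_dim)             # label-free features
        self.to_w = nn.Linear(32, w_dim * n_labels)  # one subspace per label

    def forward(self, x):
        h = self.body(x)
        return self.to_z(h), self.to_w(h)

mi_estimator = nn.Linear(10, 2)  # trained separately to predict y from z

def encoder_mi_penalty(z):
    # minimized when the estimator's prediction is uniform, i.e. when z
    # carries no usable information about the label
    log_probs = torch.log_softmax(mi_estimator(z), dim=-1)
    return -log_probs.mean()
```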

Bio: Jake Snell is a postdoctoral fellow at the University of Toronto working with Richard Zemel. He completed his PhD in February 2021 at the University of Toronto and the Vector Institute, also under the supervision of Richard Zemel. His work focuses on building deep learning algorithms that are able to transfer to novel environments with limited data. His recent research interests include Bayesian nonparametrics and uncertainty representation in deep models.

Paul Vicol, PhD Student – University of Toronto

An Introduction to Disentanglement

Abstract: This talk will give an overview of disentanglement. We will first cover foundations in information theory that are useful for understanding a range of approaches to disentanglement based on minimizing mutual information between latent subspaces. We will cover Independent Subspace Analysis (ISA), as well as modern approaches that learn latent subspaces using adversarial factorization. We will discuss several applications of these factorization methods, focusing in particular on invariant representation learning for domain adaptation. The last part of the talk will discuss disentangled representation learning from data with correlated attributes.
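
As one concrete instance of adversarial factorization, the density-ratio trick trains a discriminator to tell joint samples of two latent subspaces (z1, z2) from pairs in which z2 has been shuffled across the batch; at its optimum, the discriminator's logit estimates the log density ratio, so its mean on joint pairs approximates the mutual information I(z1; z2). A rough sketch, our own illustration rather than the talk's material, with arbitrary placeholder dimensions (two 8-dimensional subspaces):

```python
# Sketch: estimate mutual information between two latent subspaces with a
# discriminator, then let the encoder minimize that estimate to factorize them.
import torch
import torch.nn as nn

disc = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def disc_step(z1, z2):
    # joint pairs labeled 1, shuffled ("product of marginals") pairs labeled 0
    joint = torch.cat([z1, z2], dim=-1).detach()
    marg = torch.cat([z1, z2[torch.randperm(len(z2))]], dim=-1).detach()
    opt_disc.zero_grad()
    loss = (bce(disc(joint), torch.ones(len(z1), 1))
            + bce(disc(marg), torch.zeros(len(z1), 1)))
    loss.backward()
    opt_disc.step()

def factorization_penalty(z1, z2):
    # mean logit on joint pairs ~ I(z1; z2) under an optimal discriminator
    return disc(torch.cat([z1, z2], dim=-1)).mean()
```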

Bio: Paul Vicol is a PhD student at the University of Toronto, supervised by Roger Grosse. His research focuses on understanding neural network training dynamics, in particular on bilevel optimization with applications to hyperparameter optimization. He is also interested in disentanglement and its applications to improving robustness to distribution shifts, including domain shift. He was an instructor for the Machine Learning Course at the Vector Institute in the Fall 2020 semester.

Moderated by:

Eleni Triantafillou, PhD Candidate – University of Toronto/Vector Institute

Virtual

Organizer

Vector Institute Professional Development