BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Vector Institute for Artificial Intelligence - ECPv5.14.0.4//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Vector Institute for Artificial Intelligence
X-ORIGINAL-URL:https://vectorinstitute.ai
X-WR-CALDESC:Events for Vector Institute for Artificial Intelligence
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20210421T100000
DTEND;TZID=America/New_York:20210421T120000
DTSTAMP:20220522T163815Z
CREATED:20210303T151924Z
LAST-MODIFIED:20210413T170441Z
UID:9469-1618999200-1619006400@vectorinstitute.ai
SUMMARY:Endless Summer School: Disentanglement: Domain Adaptation and Fairness
DESCRIPTION:Register\nAvailable to Vector Institute sponsors only. \nDisentanglement aims to develop structured representations\, with different relevant factors of variation (e.g.\, age\, gender\, financial status\, etc.) represented separately. This can facilitate transfer of representations between domains (e.g.\, data from different hospitals\, or different groups of clients for a bank)\, and can improve generalization by eliminating “nuisance” information that is not relevant to the task. Disentanglement is also crucial for fairness—ensuring that a system does not disadvantage certain individuals or groups. In many application areas (including insurance\, healthcare\, advertising\, and banking)\, we often want to obtain representations that are invariant with respect to a sensitive attribute (for example\, we do not want to classify an individual as low or high credit risk based on their age). \nSPEAKERS:\nElliot Creager \nElliot Creager\, PhD Candidate – University of Toronto \nLearning fair and disentangled representations \nAbstract: We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes. Taking inspiration from the disentangled representation learning literature\, we propose an algorithm for learning compact representations of datasets that are useful for reconstruction and prediction\, but are also “flexibly fair”\, meaning they can be easily modified at test time to achieve subgroup demographic parity with respect to multiple sensitive attributes and their conjunctions. Time permitting\, we will discuss practical issues with training disentanglement learners in realistic settings where the underlying factors of variation are correlated in the training data. \nBio: Elliot is a PhD Candidate at the University of Toronto and the Vector Institute\, where he is supervised by Richard Zemel. 
 He works on a variety of topics within machine learning\, especially in the areas of algorithmic bias and representation learning. He was previously an intern and student researcher at Google Brain in Toronto.\nTwitter: @elliot_creager \nDavid Madras \nDavid Madras\, PhD Student – University of Toronto/Vector Institute \nFairness and Robustness in Invariant Learning: A Case Study in Toxicity Classification \nAbstract: Robustness is of central importance in machine learning and has given rise to the fields of domain generalization and invariant learning\, which are concerned with improving performance on a test distribution distinct from but related to the training distribution. In light of recent work suggesting an intimate connection between fairness and robustness\, we investigate whether algorithms from robust ML can be used to improve the fairness of classifiers that are trained on biased data and tested on unbiased data. We apply Invariant Risk Minimization (IRM)\, a domain generalization algorithm that employs a causal discovery inspired method to find robust predictors\, to the task of fairly predicting the toxicity of internet comments. We show that IRM achieves better out-of-distribution accuracy and fairness than Empirical Risk Minimization (ERM) methods\, and analyze both the difficulties that arise when applying IRM in practice and the conditions under which IRM will likely be effective in this scenario. We hope that this work will inspire further studies of how robust machine learning methods relate to algorithmic fairness. \nBio: David is a PhD student at the University of Toronto and the Vector Institute\, supervised by Rich Zemel. His research focuses on improving machine learning systems for high-stakes decision-making. 
 He is interested in mitigating unfairness and discrimination in machine learning systems\, improving their robustness under distribution shift\, and more broadly understanding the role of automated tools in larger decision-making processes. \n \nJake Snell \nJake Snell\, Postdoctoral Fellow – University of Toronto/Vector Institute \nLearning Latent Subspaces in Variational Autoencoders \nDeep generative models are able to learn unsupervised latent representations of data yet are often difficult to interpret or control. In this talk\, we consider the problem of learning representations correlated to specific labels in a dataset. We propose a generative model based on the variational autoencoder (VAE) which we show is capable of extracting features correlated to binary labels in the data and structuring them in a latent subspace which is easy to interpret. Our model\, the Conditional Subspace VAE (CSVAE)\, uses mutual information minimization to learn a low-dimensional latent subspace associated with each label that can easily be inspected and independently manipulated. We demonstrate the utility of the learned representations for attribute manipulation tasks on both the Toronto Face and CelebA datasets. This is joint work published in NeurIPS 2018 with Jack Klys and Richard Zemel. \nBio: Jake Snell is a postdoctoral fellow at the University of Toronto working with Richard Zemel. He completed his Ph.D. in February 2021 at the University of Toronto and Vector Institute\, also under the supervision of Richard Zemel. His work focuses on building deep learning algorithms that are able to transfer to novel environments with limited data. His recent research interests include Bayesian nonparametrics and uncertainty representation in deep models. \n \nPaul Vicol \nPaul Vicol\, PhD Student – University of Toronto \nAn Introduction to Disentanglement \nThis talk will give an overview of disentanglement. 
 We will first cover foundations in information theory\, useful for understanding a range of approaches to disentanglement based on minimizing mutual information between latent subspaces. We will cover Independent Subspace Analysis (ISA)\, as well as modern approaches that learn latent subspaces using adversarial factorization. We will discuss several applications of these factorization methods\, in particular focusing on invariant representation learning for domain adaptation. The last part of the talk will discuss disentangled representation learning from data with correlated attributes. \nBio: Paul Vicol is a PhD student at the University of Toronto\, supervised by Roger Grosse. His research focuses on understanding neural network training dynamics\, in particular on bilevel optimization with applications to hyperparameter optimization. He is also interested in disentanglement and its applications to improving robustness to distribution shifts\, including domain shift. He was an instructor for the Machine Learning Course at the Vector Institute in the Fall 2020 semester. \n \nModerated by:\nEleni Triantafillou \nEleni Triantafillou\, PhD Candidate – University of Toronto/Vector Institute \n \nRegister\nAvailable to Vector Institute sponsors only.
URL:https://vectorinstitute.ai/event/endless-summer-school-disentanglement-domain-adaptation-and-fairness/
LOCATION:Virtual
CATEGORIES:Professional Development
ATTACH;FMTTYPE=image/jpeg:https://vectorinstitute.ai/wp-content/uploads/2021/01/03_uniligual_whitecolour_vertical-min.jpg
END:VEVENT
END:VCALENDAR