Events



Endless Summer School: NeurIPS Highlights

February 24, 2021 @ 10:00 am - 12:00 pm

Available to Vector Institute sponsors only.

This session summarizes some of the papers, workshops, and tutorials Vector researchers presented at the 34th annual Conference on Neural Information Processing Systems (NeurIPS), the premier machine learning conference.

Register

Featuring:

David Duvenaud

Assistant Professor in both Computer Science and Statistics, University of Toronto; Co-founder, Invenia; Canada Research Chair in Generative Models; Faculty Member, Vector Institute

Learning Differential Equations that are Fast to Solve

When we model physical systems, some models are easier to approximate and make predictions with than others. Sometimes different models will make almost exactly the same predictions, but one will be much easier to work with. We show how to encourage models to be easier to make predictions with while still agreeing with the data almost as well. Specifically, we show how to do this for a general class of models of continuously evolving systems called ordinary differential equations.
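
To make the idea concrete, here is a minimal sketch (our illustration in PyTorch, not the talk's exact method; the Euler integrator, the finite-difference penalty, and all names are our own choices): we fit a neural ODE dz/dt = f_theta(z, t) and add a penalty on how quickly the learned dynamics change along the trajectory, a crude proxy for how hard an adaptive solver would have to work.

import torch
import torch.nn as nn

class Dynamics(nn.Module):
    """Learned vector field f_theta(z, t) defining dz/dt = f_theta(z, t)."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, z, t):
        t_col = t.expand(z.shape[0], 1)               # broadcast time over the batch
        return self.net(torch.cat([z, t_col], dim=1))

def integrate_with_penalty(f, z0, t0=0.0, t1=1.0, steps=20):
    """Fixed-step Euler solve that also accumulates a smoothness penalty."""
    z, dt = z0, (t1 - t0) / steps
    prev_dz, penalty = None, torch.tensor(0.0)
    for i in range(steps):
        t = torch.full((1, 1), t0 + i * dt)
        dz = f(z, t)
        if prev_dz is not None:                       # finite-difference proxy for df/dt
            penalty = penalty + ((dz - prev_dz) ** 2).mean()
        z = z + dt * dz                               # Euler step
        prev_dz = dz
    return z, penalty / (steps - 1)

f = Dynamics()
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
z0, target = torch.randn(32, 2), torch.randn(32, 2)   # stand-in data
for _ in range(200):
    z1, penalty = integrate_with_penalty(f, z0)
    loss = ((z1 - target) ** 2).mean() + 0.1 * penalty  # fit the data + stay easy to solve
    opt.zero_grad(); loss.backward(); opt.step()

The penalty weight (0.1 here) trades off fit against solver effort; the talk's point is that many nearly equivalent models exist, so this trade can often be made almost for free.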

Roger Grosse

Assistant Professor, Department of Computer Science, University of Toronto; Co-creator, Metacademy; Faculty Member, Vector Institute; Canada CIFAR Artificial Intelligence Chair


Delta-STN: Efficient Bilevel Optimization for Neural Networks using Structured Response Jacobians

Juhan Bae (University of Toronto/Vector Institute), Roger Grosse (University of Toronto/Vector Institute)

Neural net training involves a lot of hyperparameters, i.e. knobs that need to be tuned in order to achieve good performance. We developed an approach to automatically tuning hyperparameters online while a network is training (in contrast with most tuning methods, which require many training runs). The key is to learn the best-response Jacobian, which determines how the optimum of the training objective changes in response to small perturbations to the hyperparameters. This lets us approximately determine how the hyperparameters need to be changed to improve the generalization error.
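
To make the key object concrete (our notation, not taken from the event page): write $w^*(\lambda) = \arg\min_w \mathcal{L}_{\mathrm{train}}(w, \lambda)$ for the weights that best respond to hyperparameters $\lambda$, and assume $\lambda$ affects the validation loss only through those weights. By the chain rule,

\frac{\mathrm{d}}{\mathrm{d}\lambda}\,\mathcal{L}_{\mathrm{val}}\bigl(w^{*}(\lambda)\bigr)
= \left.\frac{\partial \mathcal{L}_{\mathrm{val}}}{\partial w}\right|_{w = w^{*}(\lambda)}
\frac{\partial w^{*}(\lambda)}{\partial \lambda},

where $\partial w^{*}/\partial \lambda$ is the best-response Jacobian. Estimating it locally during training is what lets the hyperparameters be nudged in a descent direction for generalization error without restarting the run.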

Pascal Poupart

Faculty Member, Vector Institute; Canada CIFAR Artificial Intelligence Chair


Learning Dynamic Belief Graphs to Generalize on Text-Based Games

Playing text-based games requires skills in processing natural language and sequential decision making. Achieving human-level performance on text-based games remains an open challenge, and prior research has largely relied on hand-crafted structured representations and heuristics. In this work, we describe a new technique to plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text.
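
As a rough sketch of what a belief graph learned end-to-end can look like (our simplification, not the paper's architecture; all names and sizes below are illustrative), one can store the graph as a dense tensor of (relation, head entity, tail entity) scores and update it recurrently from an encoding of each text observation:

import torch
import torch.nn as nn

class BeliefGraphUpdater(nn.Module):
    """Recurrently revise a soft adjacency tensor from text observations."""
    def __init__(self, n_entities=50, n_relations=10, obs_dim=128):
        super().__init__()
        self.r, self.n = n_relations, n_entities
        graph_dim = n_relations * n_entities * n_entities
        self.gate = nn.Linear(obs_dim + graph_dim, graph_dim)   # how much old belief to keep
        self.delta = nn.Linear(obs_dim + graph_dim, graph_dim)  # new evidence from the text

    def forward(self, graph, obs_emb):
        # graph: [R, N, N] edge beliefs in [0, 1]; obs_emb: [obs_dim] text encoding
        flat = graph.reshape(-1)
        x = torch.cat([obs_emb, flat])
        g = torch.sigmoid(self.gate(x))
        d = torch.sigmoid(self.delta(x))
        new_flat = g * flat + (1 - g) * d            # GRU-style gated convex update
        return new_flat.reshape(self.r, self.n, self.n)

updater = BeliefGraphUpdater()
graph = torch.zeros(10, 50, 50)                      # start with an empty belief graph
obs_emb = torch.randn(128)                           # stand-in for a learned text encoder
graph = updater(graph, obs_emb)                      # beliefs after one observation

Because the update is a differentiable gated combination rather than a hand-crafted rule, gradients from the downstream decision-making objective can shape how the graph is built, which is the sense in which the representation is learned end-to-end from raw text.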

Moderated by

Sana Tonekaboni

Sana Tonekaboni is a third-year Ph.D. student in the Department of Computer Science at the University of Toronto. Her research focuses on machine learning for healthcare, and she works on problems such as explainability and unsupervised representation learning for clinical time series. She is a former Health System Impact Fellow with the Canadian Institutes of Health Research (CIHR) and is currently an affiliate of the Vector Institute.


Virtual

Organizer

Vector Institute Professional Development