David Duvenaud is an associate professor in computer science and statistics at the University of Toronto, where he holds a Canada Research Chair in Generative Models. He is also a founding member of the Vector Institute. He was a postdoctoral fellow at Harvard University, where he worked on hyperparameter optimization, variational inference, deep learning, and automatic chemical design. He earned his Ph.D. at the University of Cambridge, studying Bayesian nonparametrics with Zoubin Ghahramani and Carl Rasmussen. David spent two summers on the machine vision team at Google Research and co-founded Invenia, an energy forecasting and trading company.
Associate Professor, Department of Statistical Sciences and Department of Computer Science, Faculty of Arts & Science, University of Toronto
Canada CIFAR Artificial Intelligence Chair
Co-founder, Invenia
Canada Research Chair in Generative Models
Research Interests
- Approximate inference
- Automatic model-building
- Model-based optimization
Publications
Getting to the Point: Index Sets and Parallelism-Preserving Autodiff for Pointful Array Programming
2021
Meta-Learning for Semi-Supervised Few-Shot Classification
International Conference on Learning Representations (ICLR) 2018
Stochastic Hyperparameter Optimization through Hypernetworks
2018
Isolating Sources of Disentanglement in Variational Autoencoders
Advances in Neural Information Processing Systems 31 (NeurIPS 2018)
Noisy Natural Gradient as Variational Inference
Proceedings of the 35th International Conference on Machine Learning (ICML) 2018
Backpropagation through the Void: Optimizing control variates for black-box gradient estimation
International Conference on Learning Representations (ICLR) 2018
Automatic chemical design using a data-driven continuous representation of molecules
ACS Central Science 2018
Sticking the landing: Simple, lower-variance gradient estimators for variational inference
Advances in Neural Information Processing Systems (NIPS) 2017
Reinterpreting Importance-Weighted Autoencoders
International Conference on Learning Representations (ICLR) Workshop Track 2017