Research Interests
- Algorithms for deep learning and Bayesian learning
- Faster training, better generalization, better uncertainty measures, easier tuning
- AI alignment
Biography
Roger is an Assistant Professor of Computer Science at the University of Toronto, focusing on machine learning. Previously, he was a postdoc at Toronto, after receiving his Ph.D. from MIT, where he was advised by Bill Freeman and Josh Tenenbaum. Before that, he completed his undergraduate degree in symbolic systems and his MS in computer science at Stanford University. Roger is a co-creator of Metacademy, a website that uses a dependency graph of concepts to help users formulate personalized learning plans for machine learning and related topics.
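To illustrate the dependency-graph idea behind Metacademy, here is a minimal sketch: given a goal concept, collect its transitive prerequisites and order them with a topological sort so that every prerequisite appears before the concept that needs it. The concept names and graph below are invented for illustration; this is not Metacademy's actual code or data.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical prerequisite graph (not Metacademy's real data):
# each concept maps to the set of concepts to learn first.
prerequisites = {
    "backpropagation": {"chain rule", "neural networks"},
    "neural networks": {"linear regression"},
    "chain rule": set(),
    "linear regression": set(),
}

def learning_plan(goal, graph):
    """Return the concepts needed for `goal`, ordered so every
    prerequisite comes before the concept that depends on it."""
    # Collect the subgraph reachable from the goal via prerequisite edges.
    needed, stack = set(), [goal]
    while stack:
        concept = stack.pop()
        if concept not in needed:
            needed.add(concept)
            stack.extend(graph.get(concept, ()))
    # Restrict edges to the needed subgraph, then topologically sort it.
    sub = {c: graph.get(c, set()) & needed for c in needed}
    return list(TopologicalSorter(sub).static_order())

print(learning_plan("backpropagation", prerequisites))
# e.g. ['chain rule', 'linear regression', 'neural networks', 'backpropagation']
```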
Highlights
- Connaught New Researcher Award
- Canada Research Chair in Probabilistic Inference and Deep Learning
Peer-Reviewed Papers
- Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, and Roger Grosse. Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches. ICLR 2018.
- Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in stochastic meta-optimization. ICLR 2018.
- Yuhuai Wu, Elman Mansimov, Shun Liao, Roger Grosse, and Jimmy Ba. Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation. NIPS 2017.
- Aidan Gomez, Mengye Ren, Raquel Urtasun, and Roger Grosse. The Reversible Residual Network: Backpropagation Without Storing Activations. NIPS 2017.
- Jacob Gardner, Chuan Guo, Kilian Weinberger, Roman Garnett, and Roger Grosse. Discovering and exploiting additive structure for Bayesian optimization. AISTATS 2017.
- Jimmy Ba, Roger Grosse, and James Martens. Distributed second-order optimization using Kronecker-factored approximations. ICLR 2017.
- Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger Grosse. On the quantitative analysis of decoder-based generative models. ICLR 2017.