Daniel Roy

Faculty Member

Associate Professor, Department of Statistical Sciences, Faculty of Arts & Science, University of Toronto

Associate Professor, Department of Computer and Mathematical Sciences, University of Toronto Scarborough

Canada CIFAR Artificial Intelligence Chair

Roy’s research in deep learning spans theory and practice. His contributions range from his pioneering work on empirically grounded statistical theory for deep learning, to state-of-the-art algorithms for neural network compression and data-parallel training. His experimental work has shed light on various deep learning phenomena, including neural network training dynamics and linear mode connectivity, while his recent theoretical work introduces simple but accurate mathematical models for deep neural networks at initialization.

Beyond his contributions to deep learning, Roy has made significant advances in the mathematical and statistical underpinnings of AI. His dissertation on probabilistic programming languages and computable probability theory was recognized with an MIT Sprowls Award. Roy recently resolved several open problems in statistical decision theory posed more than 70 years ago by exploiting the properties of infinitesimal numbers to expand the set of allowable Bayesian priors. His latest work, focusing on robust and adaptive decision making, has been recognized with multiple oral presentations at leading conferences and best-poster awards.

Research Interests

  • Foundations of Machine Learning
  • Deep Learning
  • Sequential Decision Making
  • Statistical Learning Theory
  • Probabilistic Programming
  • Bayesian-Frequentist Interface
  • PAC-Bayes

Publications