Daniel Roy

Faculty Member

Associate Professor, Department of Statistical Sciences, Faculty of Arts & Science, University of Toronto

Associate Professor, Department of Computer and Mathematical Sciences, University of Toronto Scarborough

Canada CIFAR Artificial Intelligence Chair

Daniel’s research blends computer science, statistics, and probability theory; he studies “probabilistic programming” and develops computational perspectives on fundamental ideas in probability theory and statistics. He is particularly interested in: representation theorems that connect computability, complexity, and probabilistic structures; stochastic processes, the use of recursion to define stochastic processes, and applications to nonparametric Bayesian statistics; and the complexity of probabilistic and statistical inference, especially in the context of probabilistic programming. Ultimately, Daniel is motivated by the long-term goal of making lasting contributions to our understanding of complex adaptive systems, and especially artificial intelligence.
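One flavour of defining stochastic processes by recursion is the stick-breaking construction of the Dirichlet process, a staple of nonparametric Bayesian statistics. The Python sketch below is a minimal illustration of the idea, not code drawn from Daniel's publications: each weight is produced by recursively breaking off a Beta(1, alpha)-distributed fraction of whatever remains of a unit-length stick.

    import random

    def stick_breaking(alpha, n_weights):
        # Recursively generate the first n_weights weights of a Dirichlet
        # process: each step breaks off a Beta(1, alpha)-distributed
        # fraction of the stick that remains.
        def break_stick(remaining, k):
            if k == 0:
                return []
            fraction = random.betavariate(1.0, alpha)  # Beta(1, alpha) draw
            piece = remaining * fraction               # weight for this atom
            return [piece] + break_stick(remaining - piece, k - 1)
        return break_stick(1.0, n_weights)

    weights = stick_breaking(alpha=2.0, n_weights=10)
    print(weights, sum(weights))  # the weights sum to < 1; the remainder stays in the unbroken stick

The recursion makes the self-similarity of the process explicit: after each break, the leftover stick is itself broken by the same rule, which is the kind of recursive definition referenced above.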

Research Interests

  • Foundations of machine learning
  • Statistical learning theory
  • Neural networks
  • Probabilistic programming
  • Bayesian-frequentist interface
  • PAC-Bayes
  • Computability

Publications

Data-dependent PAC-Bayes priors via differential privacy

G. K. Dziugaite and D. M. Roy

Advances in Neural Information Processing Systems 31, 2018

On the computability of conditional probability

N. L. Ackerman, C. E. Freer, and D. M. Roy

Journal of the ACM, 66(3), 2019

Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors

G. K. Dziugaite and D. M. Roy

Proceedings of the 35th International Conference on Machine Learning (ICML), 2018

An estimator for the tail-index of graphex processes

Z. Naulet, E. Sharma, V. Veitch, and D. M. Roy

2017

Sampling and estimation for (sparse) exchangeable graphs

V. Veitch and D. M. Roy

Annals of Statistics, 47(6):3274–3299, 2019

On extended admissible procedures and their nonstandard Bayes risk

H. Duanmu and D. M. Roy

2021

Towards a Unified Information-Theoretic Framework for Generalization

M. Haghifam, G. K. Dziugaite, S. Moran, and D. M. Roy

2021

The future is log-Gaussian: ResNets and their infinite-depth-and-width limit at initialization

M. Li, M. Nica, and D. M. Roy

2021

Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers

J. Negrea, B. Bilodeau, N. Campolongo, F. Orabona, and D. M. Roy

2021

Methods and Analysis of The First Competition in Predicting Generalization of Deep Learning

Y. Jiang, P. Natekar, M. Sharma, S. K. Aithal, D. Kashyap, N. Subramanyam, C. Lassance, D. M. Roy, G. K. Dziugaite, S. Gunasekar, I. Guyon, P. Foret, S. Yak, H. Mobahi, B. Neyshabur, and S. Bengio

2021