Daniel’s research blends computer science, statistics, and probability theory; he studies “probabilistic programming” and develops computational perspectives on fundamental ideas in probability theory and statistics. Daniel is particularly interested in: representation theorems that connect computability, complexity, and probabilistic structures; stochastic processes, the use of recursion to define them, and applications to nonparametric Bayesian statistics; and the complexity of probabilistic and statistical inference, especially in the context of probabilistic programming. Ultimately, Daniel is motivated by the long-term goal of making lasting contributions to our understanding of complex adaptive systems, and especially artificial intelligence.
Associate Professor, Department of Statistical Sciences, Faculty of Arts & Science, University of Toronto
Associate Professor, Department of Computer and Mathematical Sciences, University of Toronto Scarborough
Canada CIFAR Artificial Intelligence Chair
Research Interests
- Foundations of machine learning
- Statistical learning theory
- Neural networks
- Probabilistic programming
- Bayesian-frequentist interface
- PAC-Bayes
- Computability
Publications
Data-dependent PAC-Bayes priors via differential privacy
Advances in Neural Information Processing Systems 31, 2018
On the computability of conditional probability
Journal of the ACM, 66(3), 2019
Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors
Proceedings of the 35th International Conference on Machine Learning (ICML), 2018
An estimator for the tail-index of graphex processes
2017
Sampling and estimation for (sparse) exchangeable graphs
Annals of Statistics, 47(6):3274–3299, 2019
On extended admissible procedures and their nonstandard Bayes risk
2021
Towards a Unified Information-Theoretic Framework for Generalization
2021
The future is log-Gaussian: ResNets and their infinite-depth-and-width limit at initialization
2021
Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers
2021
Methods and Analysis of The First Competition in Predicting Generalization of Deep Learning
2021