Hassan Ashtiani

Faculty Affiliate

Associate Professor, Department of Computing and Software, Faculty of Engineering, McMaster University

Hassan Ashtiani is an Associate Professor in the Department of Computing and Software at McMaster University. He obtained his Ph.D. in Computer Science from the University of Waterloo. Before that, he received his master’s degree in AI and Robotics and his bachelor’s degree in Computer Engineering, both from the University of Tehran. Broadly speaking, he works on the foundations of unsupervised learning, differential privacy, robustness, and deep learning theory.

Hassan’s interests in unsupervised learning span distribution learning and generative modeling. He is known for introducing distribution compression schemes for analyzing the sample complexity of learning Gaussian mixtures and related distribution classes, such as those parametrized by sum-product networks.

In the area of differential privacy, Hassan has designed algorithms for private statistical estimation (e.g., hypothesis selection) and private learning of high-dimensional distributions (e.g., Gaussians and their mixtures). He is particularly interested in designing black-box reductions from private to non-private learning with minimal statistical and/or computational overhead.

In the area of test-time adversarial robustness, Hassan introduced the “tolerant” framework for adversarial learning, which aims to bridge the gap between the theoretical and applied literature.

Hassan is interested in understanding the theoretical underpinnings of the success of deep learning methods. This includes obtaining modern generalization bounds for deep models, as well as defining novel notions of “distributional niceness” under which deep models perform far better than worst-case analysis suggests. This direction captures scenarios such as unsupervised domain alignment and learning under distribution shift.

Research Interests

  • High-dimensional estimation and learning
  • Differential privacy
  • Deep learning theory
  • Robustness in machine learning
  • Unsupervised domain alignment

Highlights

  • Best Paper Award at NeurIPS 2018