With a background in engineering, Murat has a keen interest in applying theory to solve real-world problems. His research focuses on designing optimization algorithms for machine learning models: efficient algorithms can cut model training time significantly, allowing researchers to test and select the best model for the problem at hand, be it recommender systems or image denoising. Murat completed his PhD in the Department of Statistics at Stanford University. He holds a Master’s in Computer Science from Stanford and Bachelor’s degrees in Electrical Engineering and Mathematics from Boğaziçi University in Turkey. Previously, he was a postdoctoral researcher at Microsoft Research. Murat publishes regularly at NeurIPS, a top machine learning conference, and has journal papers in the Annals of Statistics and JMLR.
Assistant Professor, Department of Computer Science and Department of Statistical Sciences, Faculty of Arts & Science, University of Toronto
Canada CIFAR Artificial Intelligence Chair
Publications
Convergence rate of block-coordinate maximization Burer–Monteiro method for solving large SDPs
2021
An analysis of constant step size SGD in the non-convex regime: Asymptotic normality and bias
2021
Convergence rates of stochastic gradient descent under infinite noise variance
2021
Manipulating SGD with data ordering attacks
2021
Heavy tails in SGD and compressibility of overparametrized neural networks
2021
Fractal structure and generalization properties of stochastic optimization algorithms
2021
Understanding the Variance Collapse of SVGD in High Dimensions
2021
Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance
2022
On Empirical Risk Minimization with Dependent and Heavy-Tailed Data
2021
Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings
2022
Riemannian Langevin algorithm for solving semidefinite programs
2023
Towards a complete analysis of Langevin Monte Carlo: Beyond Poincaré inequality
2023
An analysis of Transformed Unadjusted Langevin Algorithm for Heavy-tailed Sampling
2023
Gradient-based feature learning under structured data
2024
Optimal Excess Risk Bounds for Empirical Risk Minimization on p-Norm Linear Regression
2024