Hassan Ashtiani, Professor, McMaster University | Vector Institute Faculty Member
You want to share your medical data to help advance research, but you don’t want others to access your private information. The intuitive solution – simply removing names and other direct identifiers – has been tried repeatedly. Unfortunately, it doesn’t work. Others can still recover sensitive details about you through patterns in the data, by cross-referencing with other datasets, or through statistical inference.
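To make the failure mode concrete, here is a toy sketch (not from the article; all names and records are invented) of the classic linkage attack: an “anonymized” medical table still carries quasi-identifiers such as ZIP code, birth year, and sex, which can be joined against a hypothetical public record to re-attach identities.

```python
# "Anonymized" medical records: names removed, quasi-identifiers remain.
medical = [
    {"zip": "90210", "birth_year": 1965, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "10001", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

# Hypothetical public dataset (e.g., a voter roll) with the same fields.
voter_roll = [
    {"name": "Alice Smith", "zip": "90210", "birth_year": 1965, "sex": "F"},
    {"name": "Bob Jones", "zip": "10001", "birth_year": 1990, "sex": "M"},
]

def link(medical_rows, public_rows):
    """Match records on (zip, birth_year, sex) to re-identify individuals."""
    index = {(p["zip"], p["birth_year"], p["sex"]): p["name"] for p in public_rows}
    matches = []
    for m in medical_rows:
        key = (m["zip"], m["birth_year"], m["sex"])
        if key in index:
            matches.append({"name": index[key], **m})
    return matches

for row in link(medical, voter_roll):
    print(row["name"], "->", row["diagnosis"])
```

No single field identifies anyone, yet the combination does – which is why provable guarantees, rather than ad hoc redaction, are needed.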
Hassan Ashtiani’s research addresses why ad hoc privacy approaches fail and what’s needed instead: mathematical proofs that guarantee privacy will be maintained, even in the presence of sophisticated adversaries. As a Vector Faculty Member and professor at McMaster University, he works on the theoretical foundations that make AI systems provably private, robust, and trustworthy.
This isn’t purely academic work. In domains like health care and autonomous systems, we need guarantees that go beyond “probably good enough.” When AI systems make consequential decisions – diagnosing diseases, approving loans, or controlling vehicles – we need to prove mathematically that they’ll maintain privacy, resist attacks, and behave reliably under real-world conditions.
From student to Faculty Member: A Vector journey
Ashtiani’s relationship with Vector began in 2018 as a PhD student at the University of Waterloo, when he joined the inaugural postgraduate affiliate program. This early exposure introduced him to Ontario’s emerging AI research ecosystem at a formative moment in his career. After graduating and accepting a faculty position at McMaster University in Hamilton, he transitioned to Faculty Affiliate status in 2020, maintaining his ties to Vector even as he built his independent research group, Artificial Intelligence for Chemical Sciences.
Those five years as an affiliate provided crucial infrastructure for an early-career researcher. Vector offered opportunities to organize workshops and reading groups, access to compute resources for experiments, and pathways to connect his students with researchers at other institutions. Perhaps most valuable were the informal opportunities – bumping into another researcher at a Vector event and discovering overlapping interests, or easily exploring collaborative projects across universities because the shared affiliation provided natural common ground and practical support.
For Ashtiani, Vector’s greatest value has always been connection and collaboration. Before moving to Canada, building research connections from Tehran meant overcoming significant barriers: travel, limited resources, and isolation from the broader research community. Working at Waterloo, then McMaster, the Vector affiliation fundamentally changed what was possible. When both researchers in a potential collaboration are Vector-affiliated, the practical friction of cross-institutional work – where to meet, how to access shared resources, whether there’s institutional support – largely disappears. The infrastructure enables the kind of spontaneous collaboration that produces unexpected breakthroughs.
“I really enjoyed my time at Vector because you get connected to people that are like-minded. If both of you are at Vector, you can just book a room and just talk to them. It’s much easier to collaborate with someone outside your university.”
Designing systems that don’t require reinventing the wheel
A central challenge in privacy-preserving machine learning is efficiency: must researchers develop completely new methods for every application? If a standard machine learning technique works well for non-private data analysis, is there a way to adapt it for privacy-preserving contexts without starting from scratch?
Ashtiani’s work on “black box reductions” addresses this directly. A black box, in computer science and machine learning, is a system where you can observe inputs and outputs but don’t need to understand the internal workings. A black box reduction means you can take any existing machine learning method – treating it as a black box – and wrap it in a framework that guarantees privacy, without needing to redesign the method itself.
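As a drastically simplified illustration of the wrapping idea – this is a textbook Laplace-mechanism sketch, not Ashtiani’s actual reductions – the code below takes an arbitrary (non-private) learner as a black box and returns a privatized version, assuming only a bound on how much the learner’s output can change when one record changes:

```python
import random

def make_private(train, sensitivity, epsilon):
    """Wrap a black-box learner with the Laplace mechanism.

    `train` maps a dataset to a real-valued parameter; `sensitivity`
    bounds how much that output can move if one record is replaced.
    The wrapper never inspects how `train` works internally.
    """
    def private_train(data):
        # A Laplace(scale = sensitivity/epsilon) sample, generated as the
        # difference of two i.i.d. exponential variables.
        rate = epsilon / sensitivity
        noise = random.expovariate(rate) - random.expovariate(rate)
        return train(data) + noise
    return private_train

# The black box: an ordinary mean estimator over values in [0, 1].
mean = lambda data: sum(data) / len(data)

data = [0.2, 0.4, 0.9, 0.7]
# Replacing one record in [0, 1] moves the mean by at most 1/n.
private_mean = make_private(mean, sensitivity=1 / len(data), epsilon=1.0)
print(private_mean(data))  # a noisy estimate near the true mean of 0.55
```

The design point is that `make_private` only needs the sensitivity bound, not the learner’s internals – the same shape of argument that lets existing methods be reused rather than redesigned.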
This approach means the vast library of machine learning techniques developed for non-private settings can potentially be adapted for privacy-preserving applications efficiently. Rather than reinventing every algorithm, researchers can leverage existing, well-tested methods while adding mathematical privacy guarantees, dramatically accelerating the development of private machine learning systems.

The work earned recognition with a NeurIPS 2018 Best Paper Award for research on learning Gaussian mixtures – a foundational problem in statistics and machine learning. The paper introduced new theoretical concepts, such as sample compression schemes for distributions, that advanced understanding of how much data we fundamentally need to learn complex models.
Enabling Canada’s AI ecosystem while protecting Canadians’ privacy
The practical stakes of this research extend directly to Canada’s ability to develop and deploy AI systems responsibly. Ontario and Canada have built a thriving AI ecosystem, but that ecosystem faces real constraints around data access. People are rightfully concerned about sharing personal information, and regulations increasingly require strong privacy protections. For applications in health care especially, better diagnostic systems require patient data, but patients need trustworthy guarantees that their private medical information won’t be exposed.
Ashtiani’s research helps resolve this tension by developing methods that provide formal, provable privacy guarantees while still enabling meaningful machine learning from sensitive data. If we can keep Canadians’ data more private while still enabling better machine learning systems for medical applications and other domains, we simultaneously strengthen both privacy protection and the AI ecosystem’s ability to thrive.
The work isn’t finished. Current privacy-preserving machine learning methods often involve tradeoffs between privacy strength and model accuracy. Ashtiani’s ongoing research aims to narrow this gap, developing approaches that maintain strong privacy while achieving accuracy closer to non-private systems.
His advice for other researchers is straightforward: “I really encourage people that are outside the Vector community, if they’re machine learning researchers or AI researchers or are interested in these domains, to join Vector.” The connection to like-minded researchers, the collaborative infrastructure, and the ability to work across institutional boundaries make Vector a valuable part of Ontario’s AI research landscape. His seven-year progression from student affiliate to Faculty Member illustrates the pathway Vector offers for researchers at different career stages.
“If you really want to have better machine learning systems for, for example, medical applications, you really need trustworthy methods that people can rely on and share their data in order to find better solutions. And if we can do that, we will keep Canadians’ data more private. At the same time, we help the AI ecosystem thrive in Canada.”