Vector attracts the world’s most accomplished and innovative AI and machine learning researchers
Our renowned research community is advancing breakthroughs in the science and application of AI. From using quantum computing to address climate change, to developing new machine learning models for 3D applications, to harnessing AI to improve food price forecasting, Vector researchers are unlocking new ways to apply AI to drive better economic, health, and societal outcomes.
Our strategic research priorities
- Machine Learning
- Deep Learning
- AI for Science
- Trustworthy AI
- AI for Health
- Foundation Models
Vector is advancing its goal of becoming a top 10 global centre for AI research by attracting the world’s most accomplished, ambitious, and innovative researchers who are unlocking new achievements across a wide range of AI and machine learning topics.
860 members of the Vector research community
Our growing research team
What began as a handful of founding faculty has evolved over the last five years into a flourishing community of more than 700 researchers who are pushing the boundaries of AI, machine learning, and deep learning in critical areas to benefit Ontarians, Canadians, and people around the world.
We drive this growth through new and expanding efforts to attract and develop an outstanding community:
- Fostering collaboration between industry and academia to connect leading research and AI applications.
- Creating more ways for researchers to work with industry sponsors and health sector partners on real-world problems and novel data sets.
- Expanding access to events focused on research and applications and increasing access to internships.
Published research
Across dozens of timely, globally relevant, and impactful projects and work themes, Vector researchers are applying AI to drive better economic, health, and societal outcomes.
Latest research news
My Visiting Researcher Term at Vector Institute
Vector researchers presenting more than 98 papers at NeurIPS 2024
Unlocking the Potential of Prompt-Tuning in Federated Learning
New multimodal dataset will help in the development of ethical AI systems
Unveiling Alzheimer’s: How Speech and AI Can Help Detect Disease
Vector co-founder Geoffrey Hinton wins the Nobel Prize in Physics 2024
Health Research
Improving health outcomes for everyone
Vector helps improve population health outcomes by creating an AI ecosystem that fosters innovation, enables better data collection and analysis, addresses staffing challenges, reduces wait times, and improves patient lives and care.
Research talks
Vector Distinguished Lecture Series
The Vector Distinguished Lecture Series is a public talk series where academic and industrial data scientists in the GTA discuss advanced machine learning topics.
Watch past Vector Distinguished Lecture Series talks
“Improving AI Decision Support with Interpretability and Interaction”
Finale Doshi-Velez
Herchel Smith Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences
“Symbolic, Statistical and Causal AI”
Bernhard Schölkopf
Director, ELLIS Institute Tübingen
Professor, ETH Zurich
“Promise and Pitfalls of Public Data in Private ML” | Gautam Kamath, Vector Faculty Member
Talk abstract:
Machine learning models are frequently trained on large-scale datasets, which may contain sensitive or personal data. Worryingly, without special care, these models are prone to revealing information about datapoints in their training set, leading to violations of individual privacy. To protect against such privacy risks, we can train models with differential privacy (DP), a rigorous notion of individual data privacy. While training models with DP has previously been observed to result in unacceptable losses in utility, I will discuss recent advances which incorporate public data into the training pipeline, allowing models to guarantee both privacy and utility. I will also discuss potential pitfalls of this approach, and directions forward for the community.
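For readers who want a concrete picture of the public-data recipe described above, the sketch below pretrains non-privately on public data and then fine-tunes on private data with a bare-bones DP-SGD step (per-example gradient clipping plus Gaussian noise). The model, data, and hyperparameters are toy placeholders chosen for illustration, not the methods discussed in the talk.

```python
import torch
from torch import nn

# Toy illustration of the "public data + DP" recipe: pretrain non-privately on
# public data, then fine-tune on private data with a basic DP-SGD step
# (per-example gradient clipping + Gaussian noise). All models and data here
# are synthetic placeholders.
model = nn.Linear(20, 2)
loss_fn = nn.CrossEntropyLoss()

# --- Phase 1: non-private pretraining on public data ---
public_x, public_y = torch.randn(256, 20), torch.randint(0, 2, (256,))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(50):
    opt.zero_grad()
    loss_fn(model(public_x), public_y).backward()
    opt.step()

# --- Phase 2: DP-SGD fine-tuning on private data ---
private_x, private_y = torch.randn(128, 20), torch.randint(0, 2, (128,))
clip_norm, noise_multiplier, lr = 1.0, 1.1, 0.05

for _ in range(20):
    # Per-example gradients, clipped individually so no single record dominates.
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for i in range(len(private_x)):
        model.zero_grad()
        loss_fn(model(private_x[i:i+1]), private_y[i:i+1]).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale
    # Add calibrated Gaussian noise and take an averaged update step.
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p -= lr * (s + noise) / len(private_x)
```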
“Troubling Trajectories for Uncertainty Quantification and Decision Making with Neural Networks” | Geoff Pleiss, Vector Faculty Member
Talk abstract:
In safety-critical settings and decision making tasks, it is often crucial to quantify the predictive uncertainty of machine learning models. Uncertainty estimates not only codify the trustworthiness of predictions, but also identify regions of the input space that would benefit from additional exploration. Unfortunately, quantifying neural network uncertainty has proven to be a longstanding challenge. In this talk, I will discuss criteria (beyond calibration) of uncertainty estimates that provide meaningful utility on downstream outcomes and tasks. I will demonstrate where existing methods fall short, and – more troublingly – I will discuss recent evidence that their efficacy will further decline as neural networks continue to grow in capacity. I will conclude with ideas for future directions, as well as a call for radically different uncertainty quantification approaches.
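Calibration, the baseline criterion the talk looks beyond, can be made concrete with a short sketch of expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence with its empirical accuracy. The binning scheme and synthetic predictions below are illustrative assumptions, not material from the talk.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Minimal ECE sketch: bin predictions by confidence and compare each
    bin's average confidence to its empirical accuracy."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += (mask.sum() / len(confidences)) * gap
    return ece

# Synthetic example: an overconfident classifier whose accuracy lags its confidence.
rng = np.random.default_rng(0)
conf = rng.uniform(0.7, 1.0, size=5000)            # reported confidence
correct = (rng.uniform(size=5000) < conf - 0.15)   # true accuracy is ~15 points lower
print("ECE:", round(expected_calibration_error(conf, correct.astype(float)), 3))
```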
“Build an Ecosystem, Not a Monolith” | Colin Raffel, Vector Faculty Member
Talk abstract:
Currently, the preeminent paradigm for building artificial intelligence is the development of large, general-purpose models that aim to be able to perform all tasks at (super)human level. In this talk, I will argue that an ecosystem of specialist models would likely be dramatically more efficient and could be significantly more effective. Such an ecosystem could be built collaboratively by a distributed community and be continually expanded and improved. In this talk, I will outline some of the technical challenges involved in creating model ecosystems, including automatically selecting which models to use for a particular task, merging models to combine their capabilities, and efficiently communicating changes to a model.
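One building block mentioned above, merging models to combine their capabilities, can be illustrated with the simplest possible strategy: element-wise averaging of parameters across identically architected checkpoints. The sketch below uses toy models and shows one generic merging approach, not the specific techniques proposed in the talk.

```python
import copy
import torch
from torch import nn

def average_merge(models):
    """Element-wise average of parameters from identically architected models --
    the simplest possible merge, used here purely as an illustration."""
    merged = copy.deepcopy(models[0])
    state_dicts = [m.state_dict() for m in models]
    avg_state = {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }
    merged.load_state_dict(avg_state)
    return merged

# Toy "specialists": same architecture, hypothetically fine-tuned on different tasks.
specialist_a = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
specialist_b = copy.deepcopy(specialist_a)
for p in specialist_b.parameters():            # stand-in for task-specific fine-tuning
    p.data += 0.01 * torch.randn_like(p)

generalist = average_merge([specialist_a, specialist_b])
print(generalist(torch.randn(1, 16)).shape)    # torch.Size([1, 4])
```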
“Graph Neural Networks meet Spectral Graph Theory: A Case Study” | Renjie Liao, Vector Faculty Member
Talk abstract unavailable.
“Random matrix theory for high dimensional optimization, and an application to scaling laws” | Elliot Paquette, Associate Professor at McGill University
Talk abstract:
We describe a program of analysis of stochastic gradient methods on high-dimensional random objectives. We illustrate some assumptions under which the loss curves are universal, in that they can be completely described in terms of some underlying covariances. Furthermore, we give a description of these loss curves that can be analyzed precisely.
We show how this can be applied to SGD on a power-law-random-features model. This is a simple two-hyperparameter family of optimization problems, which displays 5 distinct phases of loss curves; these phases are determined by the relative complexities of the target, data distribution, and whether these are ‘high-dimensional’ or not (which in context can be precisely defined). In each phase, we can also give, for a given compute budget, the optimal random-feature dimensionality.
Joint work with Courtney Paquette (McGill & Google Deepmind), Jeffrey Pennington (Google Deepmind), and Lechao Xiao (Google Deepmind).
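As a rough, hands-on companion to the abstract, the sketch below runs SGD on a random-features least-squares problem whose data covariance and target coefficients both follow power laws, and records the resulting loss curve. The exponents, dimensions, and learning rate are arbitrary illustrative choices, not the model or phase diagram analyzed in the talk.

```python
import numpy as np

# Illustrative sketch only: SGD on a least-squares problem with random features,
# where both the data covariance and the target coefficients decay as power laws.
rng = np.random.default_rng(0)
d, m, steps, lr = 400, 200, 20_000, 0.05

spectrum = np.arange(1, d + 1) ** -1.2            # power-law data covariance
target = np.arange(1, d + 1, dtype=float) ** -1.0
target /= np.linalg.norm(target)                  # power-law target coefficients
W = rng.standard_normal((m, d)) / np.sqrt(d)      # random feature projection

theta = np.zeros(m)
losses = []
for t in range(steps):
    x = rng.standard_normal(d) * np.sqrt(spectrum)   # sample with the given covariance
    phi = W @ x                                      # random features of the sample
    err = theta @ phi - target @ x
    theta -= lr * err * phi                          # one SGD step on 0.5 * err**2
    if t % 2000 == 0:
        xs = rng.standard_normal((2000, d)) * np.sqrt(spectrum)   # held-out estimate
        losses.append(0.5 * np.mean(((xs @ W.T) @ theta - xs @ target) ** 2))

print("population loss every 2000 steps:", np.round(losses, 5))
```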
“AI Accelerating Scientific Understanding: Neural Operators for Learning on Function Spaces” | Anima Anandkumar, Bren Professor at Caltech
Talk abstract:
Language models have been used for generating new ideas and hypotheses in scientific domains. For instance, language models could suggest new drugs or engineering designs. However, this is not sufficient to attack the hard part of science, which is the physical experiments needed to validate the proposed ideas. This is because language models lack physical validity and the ability to internally simulate the processes. Traditional simulation methods are too slow and infeasible for complex processes observed in many scientific domains. We propose AI-based simulation methods that are 4-5 orders of magnitude faster and cheaper than traditional simulations. They are based on Neural Operators, which learn mappings between function spaces, and have been successfully applied to weather forecasting, fluid dynamics, carbon capture and storage modeling, and optimized design of medical devices, yielding significant speedups and improvements.
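To give a sense of what "learning mappings between function spaces" can look like in code, here is a minimal 1D spectral-convolution block in the spirit of Fourier neural operators: transform the input to the frequency domain, apply learned weights to a few low-frequency modes, and transform back. The channel counts, grid size, and number of retained modes are illustrative assumptions, not the configurations used in the applications mentioned above.

```python
import torch
from torch import nn

class SpectralConv1d(nn.Module):
    """Minimal sketch of a Fourier layer in the spirit of neural operators:
    mix channels in the frequency domain on a few retained low modes."""
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat)
        )

    def forward(self, x):                          # x: (batch, in_ch, grid)
        x_ft = torch.fft.rfft(x)                   # to frequency domain
        out_ft = torch.zeros(
            x.shape[0], self.weight.shape[1], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device
        )
        # Learned linear mix of input channels on the lowest `modes` frequencies.
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight
        )
        return torch.fft.irfft(out_ft, n=x.shape[-1])  # back to physical space

# Toy usage: map a discretized input function (64 grid points) to an output field.
layer = SpectralConv1d(in_ch=1, out_ch=8, modes=12)
u = torch.sin(torch.linspace(0, 6.28, 64)).reshape(1, 1, 64)
print(layer(u).shape)  # torch.Size([1, 8, 64])
```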