Endless Summer School: Cybersecurity & PhD Talent Spotlight
November 25, 2020 @ 10:00 am - 12:00 pm
PhD Talent Spotlight
The PhD Talent Spotlight showcases the research of Vector PhD and postdoctoral fellows who will be entering the workforce over the next 12 months. These research presentations are an opportunity to hear about some of the latest AI research and are designed to forge connections with leading up-and-coming talent for potential future hiring or collaboration.
If you are building ML teams or are part of a technical team at a Vector industry sponsor or health partner, register to join us on November 25th, 2020.
Model-Based Reinforcement Learning for Intelligent Assistants
PhD Candidate (University of Guelph), MBA (University of Toronto), BSc in Computer Science (Ryerson University)
PhD Supervisor: Graham Taylor
In this talk, I will discuss the idea of augmenting a reinforcement learning agent with a world model that is used for planning, which is known as model-based reinforcement learning (MBRL). I will explain how planning can help the agent evaluate possible actions by imagining hypothetical scenarios based on the model and then computing their expected future outcomes. Further, I will describe why finding solutions for planning with MBRL will bring us one step closer towards the ultimate goal of intelligent assistants that fully and functionally understand the real world around them.
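The planning idea described above can be sketched with a minimal random-shooting planner. This is an illustrative toy, not the speaker's method: `model_step` stands in for a learned world model (here a hypothetical one-dimensional point mass), and the agent evaluates candidate action sequences by imagining rollouts under the model and keeping the one with the best expected return:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_step(state, action):
    # Stand-in for a learned world model: a 1-D point mass nudged by the action.
    return state + 0.1 * action

def reward(state):
    # Reward being close to a goal at the origin.
    return -abs(state)

def plan(state, horizon=5, n_candidates=100):
    """Random-shooting planner: imagine rollouts of random action
    sequences under the model, then return the first action of the
    sequence with the best imagined return."""
    best_return, best_action = -np.inf, 0.0
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=horizon)
        s, total = state, 0.0
        for a in actions:
            s = model_step(s, a)
            total += reward(s)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action

# Act in the (here, identical) real environment by replanning each step.
s = 1.0
for _ in range(20):
    s = model_step(s, plan(s))
print(s)  # the planner drives the state toward the goal
```

Replanning at every step like this is the "model predictive control" flavour of MBRL; more sophisticated planners refine the candidate distribution rather than sampling it uniformly.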
Smooth Games in Modern Machine Learning
PhD Candidate (University of Waterloo), MSc in Physics (University of Waterloo), BSc in Physics (University of Science and Technology of China)
Smooth games, in particular two-player sequential games (a.k.a. minimax optimization), have been an important modeling tool in applied science and have received renewed interest in machine learning. In this talk, I will discuss several applications of smooth games in modern machine learning and then briefly talk about their solution concepts. After that, I will mention challenges in finding an optimal solution and discuss the intuitions behind first-order and second-order methods for tackling such problems.
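One of the challenges alluded to above can be seen on the simplest smooth game, the bilinear objective f(x, y) = xy, whose unique equilibrium is (0, 0). The sketch below (illustrative only, not from the talk) shows that naive simultaneous gradient descent-ascent spirals away from the equilibrium, while the classical extragradient method, a first-order fix using a lookahead step, converges:

```python
import numpy as np

def gda(x, y, lr=0.1, steps=500):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y."""
    for _ in range(steps):
        gx, gy = y, x                     # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy   # descend in x, ascend in y
    return x, y

def extragradient(x, y, lr=0.1, steps=500):
    """Extragradient: take a lookahead step, then update with the
    gradients evaluated at the lookahead point."""
    for _ in range(steps):
        xh, yh = x - lr * y, y + lr * x   # lookahead
        x, y = x - lr * yh, y + lr * xh   # corrected update
    return x, y

print(np.hypot(*gda(1.0, 1.0)))            # distance from (0, 0) grows
print(np.hypot(*extragradient(1.0, 1.0)))  # distance from (0, 0) shrinks
```

The failure of plain gradient dynamics on even this toy game is one reason smooth games need their own solution concepts and algorithms.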
Stable Learning-Based Controllers for Accurate Tracking in Changing Environments
Postdoc (University of Toronto), PhD (University of Toronto), M.Eng (University of New South Wales), Lic. Eng. (Instituto Tecnológico y de Estudios Superiores de Monterrey)
Postdoc Supervisor: Angela Schoellig
Introducing robots to changing environments requires sophisticated controllers that can guarantee high overall performance in the presence of modeling errors, disturbances, and changing dynamics. Learning-based controllers can improve robot performance in changing environments; however, these controllers must remain stable. In this talk, I will present learning-based controllers that guarantee stability and high overall performance even in the presence of modeling errors, disturbances, and changing dynamics. The iterative learning framework improves performance within a few iterations and is formulated to enable data-efficient multi-robot, multi-task transfer learning. The online learning framework uses data gathered online to improve performance at runtime while guaranteeing stability. I will present drone experiments for both frameworks.
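The iterative learning idea can be illustrated with a textbook iterative learning control (ILC) update, u_{j+1} = u_j + γ·e_j, applied to a hypothetical toy plant. This is a generic sketch of the principle, not the speaker's framework: a repeating disturbance is unknown to the controller, but because it recurs every trial, the tracking error from one trial can be fed back to correct the input for the next:

```python
import numpy as np

# Reference trajectory and a repeating, unknown disturbance (both hypothetical).
t = np.linspace(0.0, 1.0, 50)
r = np.sin(2 * np.pi * t)          # desired output
d = 0.3 * np.cos(2 * np.pi * t)    # disturbance that repeats every trial

def run_trial(u):
    """Toy plant: the output is the input corrupted by the disturbance."""
    return u + d

# ILC update: u_{j+1} = u_j + gamma * e_j, with e_j the trial's tracking error.
u = np.zeros_like(t)
gamma = 0.5
for j in range(10):
    e = r - run_trial(u)   # measure tracking error on this trial
    u = u + gamma * e      # learn a feedforward correction for the next trial

print(np.max(np.abs(r - run_trial(u))))  # residual error after learning
```

For this plant the error contracts by a factor (1 − γ) per trial, so a handful of iterations suffices; real systems add dynamics, noise, and the stability constraints the talk focuses on.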
Artificial Intelligence: How Experimental Psychology Can Help Generate Explainable AI
Postdoc (Vector Institute), PhD (Purdue University), BSc in Psychology (Western University)
Postdoc Supervisor: Graham Taylor
Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, computer scientists have developed a field called explainable artificial intelligence (XAI) that aims to increase the interpretability and transparency of machine learning. In this talk, I will review how cognitive psychologists can make complementary contributions to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We ought to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability. I will provide a review of XAI for psychologists, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. I will also present a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and demonstrate a use case: inferring properties of feature processing in a CNN trained for image recognition by measuring its response time.
Cybersecurity and Privacy: Complements for a More Secure Internet