
Endless Summer School: Trustworthy AI

June 7, 2022 @ 10:00 am - 12:00 pm

In order for AI to be ethical, it must respect our data and insights about us, and it must be transparent and explainable. Join Vector Researchers and Industry leaders who will share the latest advances in building trustworthy AI.




Opening Remarks with MC Michael Page, Director of Industry Innovation, Vector Institute


Machine Learning Governance by Mohammad Yaghini, PhD Student at CleverHans Lab, Vector Institute & University of Toronto

The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society. In this talk, we present the concept of ML governance to balance such benefits and risks, with the aim of achieving responsible applications of ML. Our approach first systematizes research towards ascertaining ownership of data and models, thus fostering a notion of identity specific to ML systems. Building on this foundation, we use identities to hold principals accountable for failures of ML systems through both attribution and auditing. To increase trust in ML systems, we then survey techniques for developing assurance, i.e., confidence that the system meets its security requirements and does not exhibit certain known failures. This leads us to highlight the need for techniques that allow a model owner to manage the life cycle of their system, e.g., to patch or retire their ML system. Taken together, our systematization of knowledge standardizes the interactions between principals involved in the deployment of ML throughout its life cycle.


Integrating Machine Learning and Control Theory for Safe and Efficient Robot Decision Making by SiQi Zhou, Postdoctoral Fellow, Vector Institute

Autonomous robots are envisioned to become reliable human companions in domains ranging from industrial applications to our daily lives. In the literature, well-established control techniques provide the foundation for designing high-performance autonomous robot systems with desired theoretical guarantees. However, these control techniques often rely on a dynamics model of the robot, and any inaccuracies of the model can result in suboptimal performance or even unsafe actions. This limitation motivates the incorporation of learning into the traditional robot decision-making software stack. In our work, departing from control theory, we develop neural control approaches that safely and efficiently exploit the expressiveness of neural networks to enhance the performance of robots in uncertain environments. This talk will encompass a set of our neural control work ranging from offline inverse dynamics learning for improving the performance of robots to online Lipschitz network adaptation for closing the model-reality gap in uncertain robot systems. We demonstrate our approach in real-time robot experiments, including quadrotor impromptu trajectory tracking and flying inverted pendulum. In this talk, I will also briefly introduce our recent review paper on safe learning in robotics and conclude with directions for future research.
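To illustrate the kind of Lipschitz network mentioned in the abstract, here is a minimal sketch (an illustrative example, not the speakers' implementation): constraining each layer's spectral norm to at most 1, so the network's output can never change faster than its input, which is the property that makes Lipschitz networks attractive for safety guarantees.

```python
import numpy as np

def spectral_normalize(W, n_iters=50):
    """Scale W so its largest singular value is at most 1 (power iteration)."""
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated top singular value
    return W / max(sigma, 1.0)  # shrink only if the norm exceeds 1

def lipschitz_mlp(x, weights):
    """Two-layer ReLU network. ReLU is 1-Lipschitz and each normalized
    layer is 1-Lipschitz, so the composition is 1-Lipschitz overall."""
    for W in weights:
        x = np.maximum(W @ x, 0.0)
    return x

rng = np.random.default_rng(1)
weights = [spectral_normalize(rng.standard_normal((8, 4))),
           spectral_normalize(rng.standard_normal((2, 8)))]
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
y1, y2 = lipschitz_mlp(x1, weights), lipschitz_mlp(x2, weights)
# Output distance never exceeds input distance (the 1-Lipschitz guarantee)
assert np.linalg.norm(y1 - y2) <= np.linalg.norm(x1 - x2) + 1e-6
```

In the online-adaptation setting described in the talk, such a bound on how fast the learned model can change is what lets control-theoretic stability arguments carry over to the learned component.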



Implementing Trustworthy AI at BMO by Kevin Yu and Shawn Tumanov, Data & Analytics Governance teams at BMO

Since BMO began its journey several years ago to develop and implement a framework around Trustworthy AI, the proliferation of AI solutions at the bank has grown drastically, particularly solutions developed in-house. With it have come the practical challenges of concretely defining what “trustworthiness” means with regard to AI, and of fitting it alongside the bank’s existing frameworks for identifying and mitigating risks. Shawn & Kevin will share the journey that BMO is currently on and some of the insights, both opportunities and challenges, that have been discovered along the way.


Trustworthy AI Projects at Vector with Erik Garcia, Senior Project Manager, Vector Institute

Join Erik Garcia, Senior Manager of Industry Innovation Projects, as we walk through some of the key takeaways from Vector Institute’s applied AI projects in privacy-enhancing technologies (PETs).
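A canonical PET is differential privacy. As a toy sketch (illustrative only; the Vector projects themselves are not described here), the Laplace mechanism answers a count query by adding noise scaled to the query's sensitivity, so no single individual's record meaningfully changes the released answer:

```python
import numpy as np

def laplace_count(values, threshold, epsilon, rng):
    """Differentially private count of values above a threshold.
    A count query has sensitivity 1 (one person changes it by at most 1),
    so Laplace noise with scale 1/epsilon gives epsilon-DP."""
    true_count = int(sum(v > threshold for v in values))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(42)
salaries = rng.normal(60_000, 15_000, size=1_000)  # synthetic data
private = laplace_count(salaries, 80_000, epsilon=1.0, rng=rng)
exact = int(sum(s > 80_000 for s in salaries))
# The private answer tracks the exact count but is randomized
print(f"exact={exact}, private={private:.1f}")
```

Smaller values of `epsilon` mean stronger privacy and noisier answers; choosing that trade-off is exactly the kind of practical question applied PET projects grapple with.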




This event is open to Vector Sponsors, Vector Researchers, and invited health partners only. Any registrant found not to be a Vector Sponsor, Vector Researcher, or invited health partner will be asked to provide verification and, if unable to do so, will not be able to attend the event. Please contact us with any questions.



Erik Garcia is a Senior Manager of Industry Innovation Projects at the Vector Institute for Artificial Intelligence. He acts as the interface between external industry sponsors and Vector’s internal research activities and ensures the coordination and delivery of collaborative AI projects across multiple industries. Previously, Erik held senior positions in the financial services and IT/management consulting industries. Erik has an MBA from Western University.




Michael Page is a Director of Industry Innovation at the Vector Institute for Artificial Intelligence. His work focuses on commercialization strategies and accelerating AI adoption and capabilities across many industries. Previously, he held senior roles at the University of Toronto leading teams and executing academic partnerships. Michael has over 15 years of experience building and leading corporate strategy for innovation, social impact, and R&D for a variety of organizations. Michael has an Executive MBA from the Ivey School of Business and a BA from the University of Toronto, and volunteers his time mentoring students in business, social impact, and innovation.



Mohammad Yaghini is a PhD student at the CleverHans Lab at the Vector Institute for Artificial Intelligence and the University of Toronto, supervised by Nicolas Papernot. He is also a graduate fellow at the Schwartz Reisman Institute for Technology and Society and a Meta Fellow. His research interests lie at the intersection of machine learning and privacy and, more broadly, in trustworthy machine learning. In particular, he studies problems of model governance. He has recently tackled questions of protecting the intellectual property of ML models through detecting and deterring model extraction (via dataset inference and proofs of learning).




Kevin Yu is a Senior Manager in the Data & Analytics Governance teams at BMO, with a specific focus on developing the governance framework around analytics solutions, which includes the development, purchase, and use of AI solutions at the bank. Prior to joining BMO, Kevin worked as an analytics consultant helping clients implement customer analytics and AI solutions. Kevin is also a military reservist and was most recently posted to a multinational task force in Riga, Latvia, where he led several analytics PoCs focused on developing business intelligence solutions.





SiQi Zhou is a Ph.D. Candidate at the University of Toronto Institute for Aerospace Studies (UTIAS) and a Postdoctoral Researcher at the Vector Institute. She is affiliated with the University of Toronto Robotics Institute and the NSERC Canadian Robotics Network. SiQi received her B.A.Sc. degree from the University of Toronto Engineering Science program in 2016, after which she joined Prof. Angela Schoellig’s Dynamic Systems Lab for her doctoral degree. Her research lies at the intersection of robotics, machine learning, and system control. By integrating learning techniques and control theory, she aims to develop approaches that safely and efficiently improve the performance of autonomous robots in uncertain and unstructured environments. SiQi was a recipient of the NSERC Alexander Graham Bell Canada Graduate Scholarship and was selected as one of the MIT Rising Stars in 2021.



Vector Institute Professional Development