Part 2: Endless Summer School: AI Model Governance
November 3 @ 10:00 am - 12:00 pm
Technical research and the legal context for robust and secure AI models.
This two-part interdisciplinary series brings together legal and technical perspectives on machine learning. The series kicks off with a legal perspective on current laws governing data privacy and machine learning IP, followed by talks from Vector researchers sharing their latest work on model governance. This work addresses some of the legal questions that arise when operationalizing AI, such as: Can a machine unlearn? Can you share a model while still protecting your IP? How are optimization and training efficiency linked to adversarial robustness?
Session 1: October 13, 2021
- A legal perspective on current data privacy laws: David Young
- Overview of trustworthy ML: Varun Chandrasekaran
- Machine unlearning: Nick Jia
- Model stealing: Mohammad Yaghini
Session 2: November 3, 2021
- IP Law and Machine Learning: Matt Norwood
- Confidential and Private Collaborative Learning: Adam Dziedzic
- Proof of Learning: Natalie Dullerud
Matt Norwood is a Partner with Ridout & Maybee LLP’s Toronto office, with a practice focused on machine learning and semiconductor patents. Matt has a background in natural language processing, machine vision, and F/OSS. He has an M. Eng. in EECS and a B.S. in Brain and Cognitive Science from MIT, and a J.D. from Columbia.
Adam Dziedzic is a postdoctoral researcher at the Vector Institute and the University of Toronto, advised by Prof. Nicolas Papernot. He earned his PhD at the University of Chicago, where he was advised by Prof. Sanjay Krishnan and carried out research on band-limited convolutional neural networks as well as the out-of-distribution robustness of pre-trained transformers. Adam obtained his Bachelor’s and Master’s degrees from Warsaw University of Technology in Poland. He also studied at DTU (Technical University of Denmark) and carried out research on databases in the DIAS group at EPFL, Switzerland. He was a PhD intern at Microsoft Research, where he worked on the recommendation of hybrid physical designs (B+ trees and columnstores) for SQL Server. He also held internships at CERN (Geneva, Switzerland), Barclays Investment Bank (London, UK), and Google (Madison, USA).
Natalie Dullerud is a Master’s student and machine learning researcher at the University of Toronto. She previously graduated magna cum laude with a Bachelor’s degree in Mathematics from the University of Southern California, with minors in computer science and chemistry. During her undergraduate studies, Natalie studied abroad at Oxford University and completed a thesis on optimal sustainable solutions to reducing maternal mortality in Sierra Leone. At the University of Toronto, Natalie was awarded a Junior Fellowship at Massey College and recently completed a graduate research internship at Microsoft Research. Natalie’s research largely focuses on differential privacy, algorithmic fairness, and applications to clinical and biological settings. Her work encompasses the development of machine learning approaches for a broad range of tasks, including clustering of longitudinal immunological data, optimal immunotherapy dose scheduling, analysis of differentially private synthetic data, and private collaborative learning. Her work has been published at several high-profile conferences, including IEEE CDC, ICLR, FAccT, and IEEE Security and Privacy.
This event is open to Vector Sponsors, Researchers, and Students only. Any registrant who cannot be confirmed as a Vector Sponsor, Researcher, or Student will be asked to provide verification and, if unable to do so, will not be able to attend the event.