Endless Summer School: AI Model Governance
October 13 @ 10:00 am - 12:00 pm
Technical research and the legal context for robust and secure AI models.
This two-part interdisciplinary series brings together legal and technical aspects of machine learning. Each session will open with a legal perspective, on current data privacy laws and on machine learning IP respectively, followed by talks from Vector researchers who will share their latest work on model governance. This work addresses some of the legal questions that arise when operationalizing AI, such as: Can a machine unlearn? Can you share a model and protect your IP? How are optimization and training efficiency linked to adversarial robustness?
Session 1: October 13, 2021
- A legal perspective on current data privacy laws: David Young
- Overview of trustworthy ML: Varun Chandrasekaran
- Machine unlearning: Nick Jia
- Model stealing: Mohammad Yaghini
Session 2: November 3, 2021
- A legal perspective on ML and IP: Matt Norwood
- Confidential and Private Collaborative Learning: Adam Dziedzic
- Proof of Learning: Natalie Dullerud
David Young is Principal at David Young Law, a privacy and regulatory counsel practice. He has been advising clients on privacy issues since before the enactment of Canada’s private sector privacy laws. He advises both the private and public sectors on all aspects of privacy law, including compliance procedures, consent, data sharing, employee privacy, personal health information, security, breach response, and access to information. David also advises on marketing matters including digital advertising, anti-spam, social media, and food and drug law. His practice further includes corporate regulatory compliance.
David is a co-author of Canadian Advertising and Marketing Law (Carswell) and Marketing Communications Services Agreement – Commentary and Model Agreement 5th ed. (Association of Canadian Advertisers).
David is the 2015 recipient of the Ontario Bar Association’s Karen Spector Memorial Award for Excellence in Privacy Law. He is recognized as a ranked lawyer in the Canadian Legal Lexpert Directory, Chambers & Partners 2021 Canada Guide and The Best Lawyers in Canada. David is a member of the Canadian Marketing Association’s Ethics and Standards Committee. David is a director of the Canadian Helen Keller Centre.
Varun Chandrasekaran is a doctoral candidate at the University of Wisconsin-Madison, where he frequently works with Suman Banerjee, Somesh Jha, Kassem Fawaz, and Nicolas Papernot. His areas of research interest are at the intersection of security, privacy, systems, and machine learning. In particular, his work aims to understand the theoretical limitations of ML deployments to design practical interventions. Varun obtained his MS in Computer Science from the Courant Institute of Mathematical Sciences (NYU) under the supervision of Lakshminarayanan Subramanian. Varun is an external researcher at the Privacy-preserving Data Analysis group at the Turing Institute, a former Lawrence H. Landweber Fellow (at UW-Madison), and has spent time at Telefonica Research, Microsoft Research, IBM Research, and AT&T Research. He is interested in pursuing research positions in academia (tenure-track) and industry, starting summer/fall 2022.
Email: chandrasekaran at cs.wisc.edu
Nick Jia is a Master’s student at the CleverHans Lab at the Vector Institute and the University of Toronto, supervised by Prof. Nicolas Papernot. He joined the lab in the fourth year of his undergraduate studies in Engineering Science at the University of Toronto.
His research interest lies in the broad study of machine learning, with a specific focus on adversarial machine learning and on improving the trustworthiness of machine learning algorithms (e.g., their security). In particular, he has studied how the “right to be forgotten” can be achieved in the context of deep learning, and how a model owner can prove their ownership of a machine learning model.
Mohammad Yaghini is a PhD student at the CleverHans Lab at the Vector Institute for Artificial Intelligence and the University of Toronto, supervised by Nicolas Papernot. He is also a graduate fellow at the Schwartz Reisman Institute for Technology and Society. His research interests are in the intersection of machine learning and privacy, and more broadly, trustworthy machine learning. In particular, he studies problems of model governance. He has recently tackled questions of protecting the intellectual property of ML models through detecting and deterring model extraction (via dataset inference and proofs of learning).
Previously, he was in the SPRING Lab at EPFL, where he obtained his master’s in Data Science supervised by Carmela Troncoso and Boi Faltings. He completed his master’s thesis on learning context-dependent fairness measures in the LAS group at ETH Zurich, under the supervision of Hoda Heidari and Andreas Krause.
This event is open to Vector Sponsors, Researchers, and Students only. Any registrant who does not appear to be a Vector Sponsor, Researcher, or Student will be asked to provide verification and, if unable to do so, will not be able to attend the event.