Vector Research Symposium showcases cutting-edge machine learning developments
March 26, 2021
By Ian Gormely
The Vector Institute’s annual Research Symposium, a two-day event showcasing the latest work from the Vector research community, was held in February.
“Over the last year, Vector researchers have balanced long-term fundamental research questions with the agility and ability to pursue targets of opportunity on socio-economic issues to benefit the lives of Canadians and the global community,” says Vector Research Director Richard Zemel.
The event included remarks from Vector CEO Garth Gibson and Vector Research Director and Canada CIFAR AI Chair Richard Zemel, as well as presentations from Vector Faculty Members and CIFAR AI Chairs Alan Aspuru-Guzik, Sheila McIlraith, Anna Goldenberg, Animesh Garg, and Nicholas Papernot.
Vector Faculty Member and Canada CIFAR AI Chair Graham Taylor noted a number of trends across ML and DL subfields. He says that much recent work has centred on the phenomenon of “double descent” in deep neural network models, which has prompted a rethinking of classic generalization theories. “There has been a flurry of papers in the last couple of years not devoted to pushing up the accuracy on benchmarks with fancy new architectures, but trying to figure out what the heck is going on inside our existing popular architectures.”
He points to the poster that Vector Postgraduate Affiliate Anna Golubeva presented at the symposium as a good example of this kind of research. “We know that with deep neural networks, performance improves by increasing the number of parameters,” the values a model learns from its training data. “Anna’s poster seeks to answer the question: is the increased performance due to the larger number of parameters, or is it due to the larger width?”
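Golubeva’s question turns on the fact that width and parameter count are distinct dials. As a minimal illustrative sketch (not taken from the poster; the layer sizes here are invented), two fully connected networks can have nearly the same number of parameters while differing sharply in width:

```python
# Illustrative only: a fully connected layer mapping n_in units to
# n_out units has n_in * n_out weights plus n_out biases.

def mlp_param_count(layer_sizes):
    """Total number of learned parameters (weights + biases)."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Wide and shallow: one hidden layer of width 256.
wide = mlp_param_count([100, 256, 10])       # 28,426 parameters
# Narrower but deeper: two hidden layers of width 120.
deep = mlp_param_count([100, 120, 120, 10])  # 27,850 parameters

print(wide, deep)  # similar parameter budgets, very different widths
```

Because networks like these match in parameter budget but not in width, an experiment can vary one factor while holding the other roughly fixed, which is exactly the kind of comparison the poster’s question calls for.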
“Doing AI/ML theory, progress is not as fast and the incremental results are not as impressive as in some applied ML fields,” Golubeva says of her work. “It’s a long haul, but theoretical progress in AI is of crucial importance for everyone, because it depends on the progress in theory whether we can make AI trustable, reliable and fair.”
Taylor was also impressed with the self-driving chemistry lab pioneered by Aspuru-Guzik. “It can benefit from advances in robotics, sequential decision making, and generative models which are all active areas of research at Vector.”
That overlap of disciplines within machine learning has emerged as a common theme, one that Papernot has seen in his own area of research, privacy and security, and particularly in the health AI space. “Deploying machine learning on many critical applications like health care will require strong guarantees of privacy,” he says. “This has led to a flurry of work on algorithms that can provide such guarantees, and in particular in a decentralized setting.”
For example, Papernot and his co-authors (including Vector researchers Christopher A. Choquette-Choo, Natalie Dullerud, and Adam Dziedzic) developed Confidential and Private Collaborative Learning (CaPC), a protocol for collaborative machine learning with strong guarantees of privacy and confidentiality. “This means that hospitals which trained models locally can now collaborate and jointly make predictions without revealing to one another the inputs they are predicting on, their models, or their training data.”
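At a high level, each party in such a scheme keeps its model local and contributes only a protected prediction. The cryptographic machinery is well beyond a short snippet, but the noisy-aggregation idea that underpins this style of privacy guarantee can be sketched as follows (a simplified illustration with invented names, not the actual CaPC protocol):

```python
import random

def noisy_majority_vote(local_predictions, num_classes, noise_scale, rng):
    """Aggregate each party's predicted label for one query, adding
    Gaussian noise to the vote counts so that no single party's
    contribution can be inferred from the released answer."""
    counts = [0.0] * num_classes
    for label in local_predictions:
        counts[label] += 1.0
    noisy_counts = [c + rng.gauss(0.0, noise_scale) for c in counts]
    return max(range(num_classes), key=lambda k: noisy_counts[k])

# Ten hypothetical hospitals each run their own model on a query;
# nine predict class 2, one predicts class 1.
rng = random.Random(0)
votes = [2] * 9 + [1]
answer = noisy_majority_vote(votes, num_classes=3, noise_scale=0.5, rng=rng)
print(answer)  # with this wide a margin, the noisy vote still returns 2
```

In the real protocol, secure computation additionally ensures that the querying party never sees the individual votes, only the final answer; this sketch omits that confidentiality layer entirely.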
Again showing the interconnectedness of the research on display at the Symposium, Taylor liked Papernot’s CaPC work for a different reason. “There is a strong link between privacy and generalization,” he says. “You often need to trade robustness for performance. But new privacy-preserving frameworks like CaPC can actually improve generalization. That’s pretty exciting!”
Even though the Symposium was held virtually, it was a rare opportunity for the community, separated by the pandemic for the past year, to come together online. “The ability to stop in and talk to students about their research and see all the amazing research coming down the pipeline was a highlight for me,” says Zemel. “I am incredibly proud of what everyone has accomplished over the last year, especially given the circumstances.”
During the event, Vector’s research community heard talks from Faculty Members on some of the most important issues of the day and the future. They attended poster sessions given by members of Vector’s research community and spent time networking with peers, colleagues, and mentors during breaks.
Looking to the future, Zemel is excited by the prospect of once again working in the more spontaneous environment the Vector offices provide, as well as the progress that will be made through ongoing research across machine learning. He sees big breakthroughs in understanding deep learning models and shaping the representations that they form on the horizon. “I also expect big advances in important applications, such as material science, robotics, and healthcare,” he says, adding, “Vector researchers will lead the way, of course.”