Computer Vision Symposium
October 27, 2021 - October 28, 2021
Day 1 – 10:00 AM – 1:45 PM EST
Day 2 – 10:00 AM – 1:45 PM EST
Join the Vector Institute to explore the state-of-the-art and impact of computer vision technologies.
This online event will feature technical talks from Vector’s industry collaboration Computer Vision (CV) project and a panel discussion with Vector experts and industry practitioners, covering key areas in CV such as anomaly detection and transfer learning and examining their applications and impacts across sectors.
In January 2021, the Vector Institute launched a multi-phase industrial-academic collaborative project focusing on recent advances in Computer Vision. As a follow-up on the outcomes of the project, this two-day symposium will include presentations and demonstrations by project participants from both industry and the Vector research community.
Speakers include Olga Russakovsky from Princeton University and Arthur Berrill from RBC as opening keynotes, followed by Leonid Sigal and Yalda Mohsenzadeh, Vector Faculty and Affiliate Faculty and academic advisors of Vector’s industry collaboration computer vision project.
The Computer Vision Symposium will also feature project outcome presentations from the different sponsor participants, as well as Vector student research lightning talks and a poster session.
October 27 Talks:
10:00 AM
Fairness in Visual Recognition: Redesigning the Datasets, Improving the Models and Diversifying the AI Leadership by Olga Russakovsky, Princeton University
- Abstract: Computer vision models trained on unparalleled amounts of data have revolutionized many applications. However, more and more historical societal biases are making their way into these seemingly innocuous systems. We focus our attention on two types of biases: (1) bias in the form of inappropriate correlations between protected attributes (age, gender expression, skin color, …) and the predictions of visual recognition models, as well as (2) bias in the form of unintended discrepancies in error rates of vision systems across different social, demographic or cultural groups. In this talk, we’ll dive deeper both into the technical reasons and the viable strategies for mitigating bias in computer vision. I’ll highlight a subset of our recent work addressing bias in visual datasets (FAT*2020 http://image-net.org/filtering-and-balancing/; ECCV 2020 https://github.com/princetonvisualai/revise-tool; recently https://image-net.org/face-obfuscation/), in visual models (CVPR 2020 https://arxiv.org/abs/1911.11834; CVPR 2021 https://princetonvisualai.github.io/gan-debiasing/; ICCV 2021 https://princetonvisualai.github.io/imagecaptioning-bias/), in evaluation metrics (ICML 2021 https://arxiv.org/abs/2102.12594) as well as in the makeup of AI leadership (http://ai-4-all.org).
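As a rough illustration of the second type of bias described above, the following minimal Python sketch (not taken from the talk; all labels, predictions, and group names are toy placeholders) computes per-group error rates and their disparity for a classifier:

```python
# Minimal illustrative sketch (not from the talk): measuring discrepancies in
# error rates across groups. All data below are toy placeholders.
from collections import defaultdict

def per_group_error_rates(y_true, y_pred, groups):
    """Return the error rate of a classifier for each group."""
    errors, counts = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        counts[g] += 1
        errors[g] += int(t != p)
    return {g: errors[g] / counts[g] for g in counts}

# Toy data: the model makes most of its mistakes on group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]

rates = per_group_error_rates(y_true, y_pred, groups)
print(rates)                                                    # {'A': 0.0, 'B': 0.75}
print("disparity:", max(rates.values()) - min(rates.values()))  # 0.75
```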
Referring Transformer: A One-step Approach to Multi-task Visual Grounding by Leonid Sigal, Vector Institute
- Abstract: As an important step towards visual reasoning, visual grounding (e.g., phrase localization, referring expression comprehension/segmentation) has been widely explored. Previous approaches to referring expression comprehension (REC) or segmentation (RES) either suffer from limited performance, due to a two-stage setup, or require designing complex task-specific one-stage architectures. In this talk I will describe a simple one-stage multi-task framework for visual grounding tasks that leverages a transformer architecture, where the two modalities are fused in a visual-lingual encoder.
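To make the fusion idea concrete, here is a minimal, illustrative PyTorch sketch (not the authors’ Referring Transformer implementation; all dimensions and the toy box head are placeholder assumptions) in which visual and language tokens are concatenated and processed jointly by a single transformer encoder:

```python
# Illustrative sketch of visual-lingual fusion in one transformer encoder.
# Not the authors' code; dimensions and the box head are arbitrary choices.
import torch
import torch.nn as nn

class VisualLingualEncoder(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.box_head = nn.Linear(d_model, 4)   # toy head: one box per expression

    def forward(self, visual_tokens, text_tokens):
        # visual_tokens: (B, Nv, d), e.g. a flattened image feature map
        # text_tokens:   (B, Nt, d), e.g. an embedded referring expression
        fused = torch.cat([visual_tokens, text_tokens], dim=1)
        fused = self.encoder(fused)             # joint cross-modal attention
        # Use the output at the first text-token position as the box query.
        return self.box_head(fused[:, visual_tokens.shape[1], :])

model = VisualLingualEncoder()
boxes = model(torch.randn(2, 49, 256), torch.randn(2, 10, 256))
print(boxes.shape)  # torch.Size([2, 4])
```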
11:15 AM Project Presentations
Computer Vision Applications in Anomaly Detection and Semantic Segmentation
Elham Ahmadi, Senior Data Scientist, RBC
Jinbao Ning, Technical Specialist, System Analyst, Thales
- Abstract: With the advent of deep learning, supervised computer vision models have attained remarkable performance on a wide variety of tasks such as classification, object detection and segmentation. This enables numerous commercial applications in a variety of domains. In particular, image segmentation is a central component of many systems, including autonomous driving, aerial inspection and industrial inspection systems. Unfortunately, traditional deep learning based image segmentation systems require a large amount of pixel-wise labels, which are often costly and time-consuming to obtain. Motivated by these drawbacks, we explored both traditional semantic segmentation approaches and anomaly segmentation approaches, and their application in practical settings.
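As a small illustration of the pixel-wise supervision this abstract refers to (a generic sketch, not the project’s code; the tiny network and shapes are placeholders), semantic segmentation is typically trained with a per-pixel cross-entropy loss, which is why dense labels are needed:

```python
# Generic sketch of pixel-wise supervision for semantic segmentation.
# The tiny model and tensor shapes are toy placeholders.
import torch
import torch.nn as nn

n_classes = 5
model = nn.Sequential(                    # stand-in for a real segmentation net
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, n_classes, 1),          # per-pixel class logits
)

images = torch.randn(4, 3, 64, 64)                 # batch of RGB images
masks = torch.randint(0, n_classes, (4, 64, 64))   # a label for EVERY pixel

logits = model(images)                             # (B, n_classes, H, W)
loss = nn.functional.cross_entropy(logits, masks)  # averaged over all pixels
loss.backward()
print(float(loss))
```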
Automated Traffic Incident Detection with Two-Stream Neural Networks
Andrew Alberts, Data Scientist, Intact
Matthew Kowal, Graduate Researcher, Vector/York University
- Abstract: Traffic accidents are a major source of damage, injury, and death and cause millions of dollars in repair and litigation costs each year. Much of this cost is borne by insurance companies. While insurers aim to adequately price policies, a lack of visibility into on-road behaviours can lead to erroneous pricing of risks and the inability to create loss prevention programs. Effective deployment of an automated system that could accurately detect and quantify these anomalous behaviours could provide insurance companies and other interested parties with important information on driving behaviour without having to be physically present in the vehicle. To this end, this report develops a deep learning based approach to detect and classify anomalous events in dash cam footage. We use the recently proposed YOWO (You Only Watch Once) model, a two-stream convolutional neural network. Quantitative and qualitative experiments on a large-scale dataset demonstrate the efficacy of our approach. We end the report with a discussion of the limitations of the model and the future work required to successfully deploy such a system in the real world.
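The two-stream idea can be pictured with the following toy PyTorch sketch (an illustration only, not the YOWO implementation; all layer sizes are placeholders): a 2D branch encodes the current frame, a 3D branch encodes the surrounding clip, and their features are fused for classification:

```python
# Toy two-stream model: 2D CNN on a keyframe + 3D CNN on the clip, fused.
# Not the YOWO implementation; all sizes are placeholders.
import torch
import torch.nn as nn

class TwoStreamClassifier(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.frame_branch = nn.Sequential(            # 2D branch: keyframe appearance
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.clip_branch = nn.Sequential(             # 3D branch: clip motion
            nn.Conv3d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16 + 16, n_classes)     # fused classification head

    def forward(self, keyframe, clip):
        # keyframe: (B, 3, H, W); clip: (B, 3, T, H, W)
        fused = torch.cat([self.frame_branch(keyframe),
                           self.clip_branch(clip)], dim=1)
        return self.head(fused)

model = TwoStreamClassifier()
logits = model(torch.randn(2, 3, 112, 112), torch.randn(2, 3, 8, 112, 112))
print(logits.shape)  # torch.Size([2, 3])
```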
Demo: Anomaly Detection in Manufacturing
Tristan Trim, Integration Engineer, Linamar
John Jewell, Graduate Researcher, University of Western Ontario
12:00 PM Research Presentations
Quantifying Static and Dynamic Biases in Spatio-temporal models by Mennatullah Siam, Postdoctoral Researcher / York University
Unconstrained Scene Generation with Locally Conditioned Radiance Fields by Terrance DeVries, Vector Graduate Researcher / University of Guelph
October 28 Talks:
10:00 AM
Understanding, Predicting, and Manipulating Image Memorability with Representation Learning by Yalda Mohsenzadeh, Western University
- Abstract: Every day, we are bombarded with hundreds of images on our smart phones, on television, or in print. Recent work shows that images differ in their memorability: some stick in our minds while others fade away quickly, and this phenomenon is consistent across people. While it has been shown that memorability is an intrinsic feature of an image, it is still largely unknown what features make images memorable. In this talk, I will present a series of our studies which aim to address this question by proposing a fast representation learning approach to modify and control the memorability of images. The proposed method can be employed in photograph editing applications for social media, learning aids, or advertisement purposes.
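As a loose illustration of learning a representation related to memorability (a generic sketch, not the speaker’s method; the backbone and scores below are placeholders), one can regress a memorability score from learned image features:

```python
# Generic sketch: regress a memorability score from image representations.
# Not the speaker's method; backbone and scores are toy placeholders.
import torch
import torch.nn as nn

backbone = nn.Sequential(                 # stand-in for a pretrained image encoder
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
regressor = nn.Linear(32, 1)              # maps representation -> memorability score

images = torch.randn(8, 3, 64, 64)
memorability = torch.rand(8, 1)           # toy ground-truth scores in [0, 1]

features = backbone(images)
pred = torch.sigmoid(regressor(features))
loss = nn.functional.mse_loss(pred, memorability)
loss.backward()
print(float(loss))
```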
Visions for the Future – Unusual uses of Imagery by Arthur Berrill, RBC
- Abstract: In this talk I will cover the intersection of Computer Vision (CV) with financial technology and the banking sector. The first section will address “Beyond RGB”, where I will go through remote sensing technologies such as SAR and Hyperspectral Imagery. We will lay out what such technologies imply for the financial sector, and specifically how they can power AI-enabled banking. Next, we will cover one of our bleeding-edge RBC products, called property valuator, and its novel CV components in great detail, including our leading footprint boundary detection algorithm.
11:15 AM Project Presentations
Identifying Clinically Relevant Features of Interest in Cholecystectomy Procedures
Kuldeep Panjwani, Software Engineer – AI Lab, Telus
Shuja Khalid, Graduate Researcher, Vector/University of Toronto
- Abstract: Laparoscopic cholecystectomy is one of the most common surgeries performed in modern medicine (approximately 200,000/year in the US). Complications during difficult operations can result in longer recoveries, long-term disabilities and even death. The goal of the operation is to remove the gallbladder without injuring other critical organs or structures in close proximity, such as the liver, intestine, bile ducts, arteries and the portal vein. Safe cholecystectomy has been based on the Critical View of Safety (CVS), which is defined by the complete and safe dissection of the cystic duct and artery at the base of the gallbladder. The goal of the project is to develop a near real-time tool for identifying critical regions during surgery to assist surgeons in arriving safely at the CVS. In addition, we would like to ascertain critical features such as instruments, surgical actions and anatomical targets during the course of surgery, and we introduce an additional model for predicting these dynamic features.
Demo: Video Classification Using COVID-19 Ultrasound
Xin Li, Graduate Researcher, University of Toronto / Vector
Gerald Shen, Associate Applied Machine Learning Specialist, Vector Institute
Transfer Learning for Efficient Video Classification/Detection
Raghav Goyal, PhD Student, University of British Columbia
- Abstract: Training temporal action detection models on video requires large amounts of labeled data, yet such annotation is expensive to collect. In this work, our aim is to leverage transfer learning to tackle temporal action detection with unlabeled data. This talk summarizes the work of establishing a pipeline for action detection using the Slow-Fast model and exploring transfer learning techniques that use linguistic and visual similarities for cross-dataset transfer and the zero-shot setting.
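One simple way to picture the linguistic-similarity transfer described above (a generic sketch, not the report’s pipeline; the class names and embeddings are placeholders) is to map an unseen action class to the most similar seen class by comparing class-name embeddings:

```python
# Generic sketch of zero-shot transfer via linguistic similarity.
# Class names are toy examples; embeddings are random placeholders standing in
# for word/sentence vectors of the class names.
import torch
import torch.nn.functional as F

seen_classes = ["opening a door", "closing a door", "pouring water"]
seen_embeddings = torch.randn(len(seen_classes), 300)   # placeholder name vectors

def nearest_seen_class(unseen_embedding):
    """Return the seen class whose name embedding is most similar."""
    sims = F.cosine_similarity(unseen_embedding.unsqueeze(0), seen_embeddings)
    return seen_classes[int(sims.argmax())]

unseen_embedding = torch.randn(300)                      # e.g. "opening a jar"
print(nearest_seen_class(unseen_embedding))
```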
12:00 PM PANEL DISCUSSION: Challenges and Opportunities of CV in Industry
Moderator: Deval Pandya, Director, AI Engineering, Vector Institute
- Panelists:
Bahar Sateli, Senior Manager in AI & Advanced Analytics, PwC
Miti Modi, Data Science Manager, Intact
Frank Rudzicz, Director of AI, Surgical Safety Technologies / Faculty Member, University of Toronto / Faculty Member, Vector Institute
Veronica Marin, Manager, Advanced Algorithms Group, THALES Group Canada
Speaker Biographies:
Dr. Olga Russakovsky is an Assistant Professor in the Computer Science Department at Princeton University. Her research is in computer vision, closely integrated with the fields of machine learning, human-computer interaction and fairness, accountability and transparency. She has been awarded the AnitaB.org’s Emerging Leader Abie Award in honor of Denice Denton in 2020, the CRA-WP Anita Borg Early Career Award in 2020, the MIT Technology Review’s 35-under-35 Innovator award in 2017, the PAMI Everingham Prize in 2016 and Foreign Policy Magazine’s 100 Leading Global Thinkers award in 2015. In addition to her research, she co-founded and continues to serve on the Board of Directors of the AI4ALL foundation dedicated to increasing diversity and inclusion in Artificial Intelligence. She completed her PhD at Stanford University in 2015 and her postdoctoral fellowship at Carnegie Mellon University in 2017.
Website: http://cs.princeton.edu/~olgarus
Dr. Leonid Sigal is an Associate Professor in the Department of Computer Science at the University of British Columbia. He is also a Canada Research Chair (CRC II) in Computer Vision and Machine Learning and a remote Faculty Member of the Vector Institute for AI in Toronto. In addition, he serves as an Academic Advisor to Borealis AI. Prior to this, he was a Senior Research Scientist at Disney Research Pittsburgh and an Adjunct Faculty member at Carnegie Mellon University. His research focuses on problems of visual understanding and reasoning. This includes object recognition, scene understanding, articulated motion capture, motion modeling, action recognition, motion perception, manifold learning, transfer learning, character and cloth animation, and a number of other directions at the intersection of computer vision, machine learning, and computer graphics.
Yalda Mohsenzadeh is an Assistant Professor in the Department of Computer Science and a core member of the Brain and Mind Institute at Western University, London, ON, Canada. She is also a faculty affiliate with Vector Institute for Artificial Intelligence, Toronto, ON, Canada. Before joining Western, she was a postdoctoral associate in the Computer Science and Artificial Intelligence Lab (CSAIL) and McGovern Institute for Brain Research at MIT, Cambridge, MA, USA. Prior to that, she was a postdoctoral fellow in the Center for Vision Research at York University, Toronto, ON, Canada. Yalda received her PhD in statistical machine learning in 2014 from Amirkabir University of Technology, Tehran, Iran. Her research is interdisciplinary, spanning machine learning, computer vision and their application in cognitive neuroscience and medical imaging with a successful track record of collaboration with industry sectors.
Websites: https://scholar.google.com/citations?user=xZIgSigAAAAJ&hl=en
www.uwo.ca/bmi/investigators/yalda-mohsenzadeh.html
Arthur Berrill leads the Royal Bank of Canada Pathfinders team. The Pathfinder team uses research, research tools and partnerships (both internal and external) to define and recommend technology paths of benefit to the bank and in service of the bank’s larger motive of helping clients thrive and communities prosper. In service of this work, Arthur is involved in most of the data science disciplines including location intelligence, data content, artificial intelligence, ontology, graph analytics and climate change studies. Arthur is an RBC Distinguished Technologist.
Industry Panelists
Veronica Marin is the Manager of the Advanced Algorithms group at Thales Canada Transportation Solutions. She leads a team focused on the development of the Vehicle Situational Awareness (VSA) and Next-Generation Positioning (NGP) systems, as part of the Thales Autonomous Platform (TAP). Under her technical leadership, the Advanced Algorithms group develops component- and system-level requirements, conducts safety-focused assessments of state-of-the-art ML/AI technologies for rail applications, and implements these technologies in new systems.
Miti Modi is a Data Science Manager at Intact Lab, building machine learning products for the insurance industry. Miti completed her Master of Engineering at the University of Toronto and has previously built data science teams in the retail and healthcare industries.
Frank Rudzicz is an Associate Scientist at the International Centre for Surgical Safety, Li Ka Shing Knowledge Institute, St Michael’s Hospital, where he applies natural language processing and machine learning to various tasks in healthcare, including detecting dementia from speech. Frank completed his PhD in the Department of Computer Science at the University of Toronto and his Master’s in Electrical and Computer Engineering at McGill University. Frank is also the co-founder and President of WinterLight Labs. As an Assistant Professor (status) at the University of Toronto, he teaches natural language processing and artificial intelligence in clinical medicine.
Deval Pandya is Director of AI Engineering at the Vector Institute and one of the 100 Global Future Energy Leaders with the World Energy Council. He is passionate about building artificial intelligence and machine learning systems to expedite the energy transition and combat climate change. Prior to joining Vector, Deval led the Data Science team at Shell, focusing on applications in New Energies and asset management. During his career, he has led the development of scalable machine learning applications in the domains of nature-based solutions, predictive maintenance, e-mobility, microgrid optimization and the hydrogen value chain. Deval also serves as a Director on the technical steering committee of Moja Global, a not-for-profit, collaborative project that brings together a community of experts to develop open-source software under the Linux Foundation used for country-level greenhouse gas accounting from the AFOLU sector. Deval is on the task force for Digitalization in Energy at the United Nations Economic Commission for Europe (UNECE) and a mentor at Creative Destruction Lab. He enjoys traveling and cooking in his free time.
Research Presentations
Mennatullah Siam is a Postdoctoral Researcher with Professor Richard Wildes and Professor Konstantinos Derpanis at York University, currently holding a VISTA postdoctoral fellowship. Siam’s research is focused on few-shot video object segmentation and the interpretability of deep spatiotemporal models. Before that, Mennatullah completed her PhD in the Vision for Robotics Lab under the supervision of Professor Martin Jagersand in May 2021. Her thesis focused on the intersection of few-shot segmentation and video object segmentation, with publications in top-tier computer vision and robotics conferences in both areas. Mennatullah was a member of a team of four in the KUKA Innovation Challenge 2018, where they were selected as one of the top five finalists. She did multiple internships: in summer 2019 with Huawei HiSilicon working on few-shot object segmentation, in summer 2018 with the Nvidia Deep Learning Autonomous Driving team, and in summer 2017 with Valeo Vision Systems on moving object detection in urban scenes. Siam obtained her MSc in Informatics from Nile University, Egypt, in 2013, and her BSc in Computing Science from Ain Shams University in 2010.
Terrance DeVries is a PhD student working with Prof. Graham Taylor at the University of Guelph. He has completed several internships at FAIR and Apple, working with Laurens van der Maaten, Michal Drozdzal, and Miguel Angel Bautista. Terrance’s research focuses on generative models, particularly for synthesizing 2D images and 3D scenes, with the goal of eventually developing models that have an inherent understanding of the 3D world.
Collaborative Projects Presenters
Elham Ahmadi is currently a Senior Data Scientist at the Royal Bank of Canada. Before joining RBC, Elham completed a PhD in Computing Science at the University of Alberta with a focus on geospatial data mining. At RBC, she collaborates on various AI-based projects, ranging from modeling the weather impact on businesses to computer vision and aerial image processing based on machine learning and deep learning techniques. In aerial image processing, spatial database indexing techniques are integrated with computer vision based deep learning techniques to extract imagery features that play an important role in boosting the accuracy of deep learning models.
Andrew Alberts is a Data Scientist at Intact Insurance. Andrew focuses on leveraging machine learning to improve the performance of the telematics-driven Usage-Based Insurance (UBI) program. In addition, he spends time understanding how techniques from computer vision can be used to better understand customer driving behaviour and predict risk. He holds a degree in Computational Mathematics from the University of Waterloo.
Raghav Goyal is a PhD student in Computer Vision and Machine Learning at the University of British Columbia (UBC). Prior to this he spent three years at Twenty Billion Neurons GmbH (20bn) in Berlin, Germany as an AI Engineer working on video understanding under the supervision of Roland Memisevic, PhD. He obtained an Integrated M.Tech. (5-year programme) from Indian Institute of Technology (IIT) Delhi in Mathematics and Computing.
John Jewell is an Applied Machine Learning Intern at the Vector Institute and a Master of Computer Science candidate at Western University.
Shuja Khalid is a second year PhD student currently supervised by Professor Frank Rudzicz at the University of Toronto. His research interests include 3D visual scene understanding and explainability, using unsupervised training techniques.
Matthew Kowal is a PhD researcher at York University in Toronto, Canada, supervised by Dr. Kosta Derpanis. His research focuses on how we can better understand visual time-series data using neural networks. In particular, he is currently working on designing interpretable deep learning algorithms for video-based computer vision. He is also a Scientist in Residence at NextAI, and a Post-Graduate Affiliate and Technical Lead at the Vector Institute.
Jinbiao (Bill) Ning is a System Analyst with the Transportation division at Thales Group. He received his PhD from McMaster University and joined Thales in 2017. Bill applies state-of-the-art computer vision to the autonomous train project at Thales.
Kuldeep Panjwani is a Software Engineer at Telus. He has an undergraduate degree from the University of Waterloo in the field of Mechatronics Engineering. Kuldeep has a keen interest in deep learning, which he has explored through work with EA focusing on reinforcement learning and, more recently, in computer vision at Telus. He aims to continue finding different ways AI-driven algorithms can integrate with and improve existing systems.
Gerald Shen is an Associate Applied Machine Learning Specialist on the AI engineering team at Vector Institute.
This event is open to Vector Sponsors, Researchers and Students only. Any registrant found not to be a Vector Sponsor, Researcher or Student will be asked to provide verification and, if unable to do so, will not be able to attend the event.