
NATURAL LANGUAGE PROCESSING (NLP) SYMPOSIUM


September 15 & 16, 2020 

10:00 am – 1:30 pm EST

Register

The Vector Institute is hosting a Natural Language Processing (NLP) Symposium showcasing its NLP project with academic and industry collaborators and facilitating interaction among our industry sponsors, researchers, students, and faculty members.

In June 2019, the Vector Institute launched a multi-phase industry-academic collaborative project focusing on recent advances in NLP. Participants replicated a state-of-the-art NLP model called BERT and used a transfer-learning approach to fine-tune it for domain-specific tasks in areas such as health, law, and finance.

To follow up on the outcomes of the project, a two-day symposium will be held featuring presentations and hands-on workshops delivered by the project participants and Vector researchers.

The symposium will support knowledge transfer and provide an exclusive opportunity for Vector’s industry sponsors to engage with talent in the NLP domain.

Workshop Information:

Level of workshops: Beginner/Intermediate

Required skill set: fundamentals of machine learning and deep learning; knowledge of language modelling and/or transformers; experience programming in Python and any of the deep learning frameworks (TensorFlow, PyTorch); experience using GPUs for accelerated deep learning training; experience using Jupyter notebooks and/or Google Colab.

*Participants must be individuals actively involved in NLP research and/or development.*

AGENDA – September 15, 2020

10:00 am – 10:10 am

Opening Remarks

Garth Gibson, President and CEO, Vector Institute

10:10 am – 10:40 am

Keynote Presentation

Finding convincing evidence

Existing paradigms of sequence learning and generation in natural language processing focus on generating highly likely text according to human-generated data. In this talk, I will discuss two recent works from my group in which the goal is to select or generate highly convincing pieces of text rather than highly likely ones.

Speaker: Kyunghyun Cho, Associate Professor of Computer Science and Data Science, New York University

10:40 am – 11:00 am

Keynote Presentation

Speaker: Jimmy Ba, Assistant Professor, Department of Computer Science and Machine Learning Group, University of Toronto, Faculty Member, Vector Institute, Canada CIFAR Artificial Intelligence Chair

11:00 am – 11:20 am

Keynote Presentation

Efficient DNN Training at Scale: from Algorithms to Hardware

The recent popularity of deep neural networks (DNNs) has generated a lot of research interest in performing DNN-related computation efficiently. However, the primary focus of systems research is usually quite narrow, limited to (i) inference, i.e., how to efficiently execute already-trained models, and (ii) image classification networks as the primary benchmark for evaluation. In this talk, we will demonstrate a holistic approach to DNN training acceleration and scalability, starting from the algorithm, to software and hardware optimizations, to special development and optimization tools. The first part of the talk will show our radically new approach to efficiently scaling the backpropagation algorithm used in DNN training (the BPPSA algorithm, MLSys’20), as well as several approaches to dealing with one of the major limiting factors in DNN training: limited GPU/accelerator memory capacity (Echo, ISCA’20, and Gist, ISCA’18). The presentation will conclude with the performance and visualization tools we built in the group to understand, visualize, and optimize DNN models, and even predict their performance on different hardware.

Speaker: Gennady Pekhimenko, Assistant Professor, Department of Computer Science, University of Toronto, Faculty Member, Vector Institute, Canada CIFAR Artificial Intelligence Chair

11:20 am – 12 noon

Project/Research Presentations

Speakers to be announced


12 noon – 12:30 pm

Networking and Poster Session

Poster presenters to be announced


12:30 pm – 1:30 pm

Concurrent Workshops

WS1: Performing downstream NLP tasks with transformers

Training NLP models from scratch requires large amounts of computational resources that may not be financially feasible for most organizations. By leveraging pre-trained models and transfer learning, we can fine-tune NLP models for a specific task with a fraction of the time and resources. In this workshop, we will explore how to use HuggingFace to fine-tune Transformer models to perform specific downstream tasks, along the lines of the sketch below. The purpose of this workshop is to provide learning through demonstration and hands-on experience.
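To give a flavour of the workflow, here is a minimal fine-tuning sketch using the Hugging Face transformers and datasets libraries. The checkpoint, dataset, and hyperparameters are illustrative assumptions for this example, not the workshop's actual materials.

# Minimal fine-tuning sketch with Hugging Face transformers/datasets.
# Checkpoint, dataset, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):
    # Convert raw text into the token IDs the pre-trained model expects.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

# IMDB sentiment classification stands in for a domain-specific task;
# subsample so the demo runs quickly.
dataset = load_dataset("imdb").map(tokenize, batched=True)
train = dataset["train"].shuffle(seed=42).select(range(2000))
test = dataset["test"].shuffle(seed=42).select(range(500))

args = TrainingArguments(
    output_dir="bert-imdb",
    num_train_epochs=2,
    per_device_train_batch_size=16,
    learning_rate=2e-5,  # small learning rate: adapting, not training from scratch
)

# Trainer wraps the training loop; only the classification head is new.
Trainer(model=model, args=args, train_dataset=train, eval_dataset=test).train()

Because only the lightweight task head is trained from scratch while the pre-trained weights are merely adapted, a run like this typically takes hours on a single GPU rather than the weeks of compute needed to pre-train BERT itself.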

Facilitators: Nidhi Arora, Data Scientist, Intact

Faiza Khan Khattak, Data Scientist, Manulife

Max Tian, Machine Learning Engineer, Adeptmind


WS2: Distributed multi-node pre-training

To significantly reduce training time when dealing with large datasets, we will demonstrate multi-node distributed training, which allows us to efficiently parallelize the training updates of deep neural networks across multiple nodes.
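As a rough illustration of the idea (not the workshop's actual code), the following sketch uses PyTorch's DistributedDataParallel; the toy model, synthetic batch, and launch command are assumptions made for the example.

# Minimal multi-node data-parallel sketch with PyTorch DistributedDataParallel.
# The toy model and random data are placeholders for a real NLP workload.
# Assumed launch (one command per node):
#   torchrun --nnodes=2 --nproc_per_node=4 train.py
import os
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(768, 2).cuda(local_rank)  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):
        x = torch.randn(32, 768, device=local_rank)        # stand-in batch
        y = torch.randint(0, 2, (32,), device=local_rank)  # stand-in labels
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()  # DDP all-reduces gradients across all nodes here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Each process holds a full replica of the model and sees a different shard of the data; the gradient averaging during backward() keeps the replicas in sync, so adding nodes increases effective training throughput.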

Facilitators: Jacob Lin, Vector Institute, University of Toronto

Gennady Pekhimenko, Assistant Professor, Department of Computer Science, University of Toronto, Faculty Member, Vector Institute, Canada CIFAR Artificial Intelligence Chair

Filippo Pompili, Thomson Reuters

Kuhan Wang, Senior Research Scientist, CIBC


1:30 pm

Event to conclude


Who should attend:

Individuals who are interested in learning more about natural language processing
Vector sponsors involved in the NLP project
Technical experts from Vector sponsor companies
Vector PGAs, alumni, and scholarship recipient students interested in NLP
Vector researchers

Register
