Internship Research Projects for Black & Indigenous Students (Summer 2022)

The following page describes the research projects for Summer 2022 internships for Black & Indigenous students. If you wish to apply to these internships, please visit the application page here.

Amir-Massoud Farahmand:  Decision-Aware Model-Based Reinforcement Learning

• Level of Intern: Undergrad, Master’s, PhD

• Project Summary: Model-based Reinforcement Learning (MBRL) is a promising approach to designing sample-efficient agents. This project explores new approaches to MBRL, with a focus on learning models that incorporate the decision problem into the model-learning process itself, and planners that are robust to errors in the model.
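To illustrate the "decision-aware" idea, here is a minimal sketch (not the project's actual method) contrasting a standard model-learning loss with a value-aware one that penalizes model errors only insofar as they affect the agent's value estimates. The dynamics, value function, and learned model below are all hypothetical toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D environment: true next state is s' = 0.9 * s.
# A hypothetical value function V(s) = s**2 stands in for the decision problem.
def true_dynamics(s):
    return 0.9 * s

def value(s):
    return s ** 2

def mse_loss(pred, target):
    # Standard model-learning loss: measures raw prediction error,
    # ignoring how the model will actually be used by the planner.
    return np.mean((pred - target) ** 2)

def value_aware_loss(pred, target):
    # Decision-aware loss: penalizes model error only through its
    # effect on the value estimates the planner relies on.
    return np.mean((value(pred) - value(target)) ** 2)

states = rng.uniform(-1, 1, size=100)
targets = true_dynamics(states)
pred = 0.8 * states  # an imperfect learned model

print(mse_loss(pred, targets))         # raw prediction error
print(value_aware_loss(pred, targets)) # error as seen through the value function
```

The point of the contrast: two models with the same MSE can differ sharply in value-aware loss, so optimizing the latter focuses model capacity where planning decisions are actually affected.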

Gennady Pekhimenko:  An Effective Hardware Utilization Squeezer for Exploring Novel Deep Learning Models 

• Level of Intern: Undergrad, Master’s, PhD

• Project Summary: We are looking for interns to continue work on the Horizontally Fused Training Array (HFTA) in one or more of several areas: 1) building a model architecture framework that natively supports HFTA; 2) investigating and improving the performance of horizontally fused operators and their kernels; 3) investigating the applicability of HFTA in less well-established ML/DL domains (e.g., graph neural nets).
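The core idea behind horizontal fusion can be sketched with plain NumPy (this is a conceptual illustration, not the HFTA API): instead of launching one small kernel per model being trained, the corresponding operators across models are stacked and executed as a single batched operation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, B, D_in, D_out = 4, 32, 64, 16   # K models trained simultaneously

# K independent linear layers, one weight matrix per model.
weights = [rng.standard_normal((D_in, D_out)) for _ in range(K)]
inputs = [rng.standard_normal((B, D_in)) for _ in range(K)]

# Unfused: K separate small matmuls, each underutilizing the hardware.
unfused = [x @ w for x, w in zip(inputs, weights)]

# Horizontally fused: stack along a leading "model" axis and issue one
# batched matmul, i.e., one larger kernel instead of K small ones.
fused = np.stack(inputs) @ np.stack(weights)   # shape (K, B, D_out)
```

The fused result is numerically identical to the unfused one; the benefit on real accelerators comes from amortizing launch overhead and filling the hardware with a single larger kernel.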

Juan Felipe Carrasquilla:  Training Binary Neural Networks Through the Lens of Statistical Mechanics

• Level of Intern: Master’s, PhD

• Project Summary: Binary neural networks (BNN) reduce memory storage, network complexity, and energy consumption by using 1-bit activations and weights. However, their training remains challenging. In this project, we will explore training BNNs using methods inspired by statistical physics and quantum mechanics.
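As background on why BNN training is hard: the sign() quantizer has zero gradient almost everywhere, so standard backpropagation stalls. The usual gradient-based workaround, sketched below, is the straight-through estimator (STE); the physics-inspired methods this project proposes are an alternative to it, not shown here.

```python
import numpy as np

def binarize(w):
    # Forward pass: quantize real-valued latent weights to {-1, +1}.
    return np.where(w >= 0, 1.0, -1.0)

def ste_grad(w, grad_out, clip=1.0):
    # Backward pass (straight-through estimator): pretend sign() is the
    # identity, but zero the gradient where |w| exceeds the clip range.
    return grad_out * (np.abs(w) <= clip)

w = np.array([-1.5, -0.3, 0.0, 0.4, 2.0])
g = np.ones_like(w)
print(binarize(w))     # [-1. -1.  1.  1.  1.]
print(ste_grad(w, g))  # [0. 1. 1. 1. 0.]
```

Note the mismatch between the forward function and its surrogate gradient: this bias is one of the reasons BNN training remains an open problem.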

Murat A. Erdogdu:  Non-Convex Optimization and Sampling

• Level of Intern: PhD

• Project Summary: Non-convex optimization and sampling are building blocks of modern machine learning due to the structural properties of popular statistical models. Owing to their key role and empirical success in numerous learning tasks, they have been a major focus of recent research. The main purpose of this project is to improve our theoretical understanding of non-convex algorithms, which are ubiquitous in machine learning.
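A canonical algorithm in this area, sketched below for concreteness, is the unadjusted Langevin algorithm (ULA), which samples from a density proportional to exp(-U(x)) by following the gradient of U plus Gaussian noise. The double-well potential here is a standard toy non-convex example, not one taken from the project description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Non-convex potential U(x) = (x^2 - 1)^2: a double well with modes at ±1.
def grad_U(x):
    return 4 * x * (x ** 2 - 1)

# Unadjusted Langevin algorithm:
#   x_{k+1} = x_k - eta * grad U(x_k) + sqrt(2 * eta) * N(0, 1)
eta, n_steps = 0.01, 50_000
x = 0.0
samples = np.empty(n_steps)
for k in range(n_steps):
    x = x - eta * grad_U(x) + np.sqrt(2 * eta) * rng.standard_normal()
    samples[k] = x

# The chain should spend its time near the two modes at ±1.
print(np.mean(np.abs(samples)))
```

Questions of exactly this flavor (how fast does such a chain mix, and what does non-convexity of U cost?) are what the theoretical side of the project studies.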

Nicolas Papernot:  Beyond Federation for Collaborative Learning with Privacy

• Level of Intern: Undergrad, Master’s, PhD

• Project Summary: In federated learning (FL), data never leaves personal devices: participants jointly train a machine learning model by exchanging only model updates. Prior work still largely underestimates the vulnerability of FL. In this project, we will introduce active and dishonest attackers that modify model weights, and we will consider mitigations and other defences. A significant redesign of FL is required for it to provide any meaningful form of data privacy to users.
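To see why an active attacker is so powerful against vanilla federated averaging, here is a minimal toy sketch (illustrative only; the attack scaling and targets are made up) of a model-replacement attack, in which one dishonest client boosts its update to dominate the server's average.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 10, 5
global_w = np.zeros(d)

# Honest clients send small updates toward the true optimum w* = 1.
honest = [0.1 * (np.ones(d) - global_w) + 0.01 * rng.standard_normal(d)
          for _ in range(n_clients - 1)]

# An active, dishonest client scales its update by the number of clients
# so that, after averaging, the global model lands near its chosen target.
target = -5.0 * np.ones(d)                   # attacker's desired model
malicious = n_clients * (target - global_w)  # boosted (scaled) update

updates = honest + [malicious]
new_w = global_w + np.mean(updates, axis=0)  # plain FedAvg: no defence

print(new_w[:3])  # dragged toward the attacker's target, far from w* = 1
```

A single round suffices here because plain averaging treats all updates as equally trustworthy; robust aggregation or update clipping are the kinds of mitigations such a project would examine.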

Yaoliang Yu:  How to Make Sense of the Probabilistic Predictions of Modern Neural Architectures?

• Level of Intern: Undergrad, Master’s, PhD

• Project Summary: What does it mean when your neural network makes a probabilistic prediction, e.g., that an object in a test image is a dog with 75% probability, a cat with 10%, or a tiger with 15%? We aim to understand and possibly re-calibrate such probabilistic estimates so that they indeed conform to our intuition and carry meaningful information that can be tested and compared.
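One standard way to test whether such probabilities "mean what they say" is the expected calibration error (ECE): among predictions made with confidence around p, roughly a fraction p should be correct. A minimal sketch on synthetic, perfectly calibrated predictions (the data here is simulated, not from any real model):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # ECE: bin predictions by confidence, then average the gap
    # |accuracy - mean confidence| per bin, weighted by bin size.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean()
                                     - confidences[mask].mean())
    return ece

# Simulated predictor that is calibrated by construction: a prediction
# made with confidence p is correct with probability exactly p.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=100_000)
correct = (rng.uniform(size=conf.shape) < conf).astype(float)

print(expected_calibration_error(conf, correct))  # close to 0
```

A miscalibrated model (e.g., one that is systematically overconfident) would show a large ECE, and re-calibration methods aim to drive this gap back toward zero.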
