Co-led by Vector Research Director Richard Zemel, the ACM FAccT Conference emphasizes the importance of fairness, accountability, and transparency in AI

February 26, 2021


Vector researchers are busy readying themselves for the 2021 edition of the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), which kicks off March 3 and runs through March 10. The cross-disciplinary conference brings together leading researchers and practitioners concerned with fairness, accountability, and transparency in socio-technical systems. Originally slated to take place in Toronto, this year’s event is being held virtually.

Although only four years old, FAccT is fast becoming the leading conference in this critical area. Last year, over 600 researchers, policymakers, and practitioners attended the conference, a number that is expected to double this year.

“Bringing together the world’s experts and fostering open dialogue on this topic is key to ensuring checks and balances are in place so that AI models don’t reinforce existing biases,” says Richard Zemel, Vector’s Research Director, who specializes in fairness in machine learning and automated decisions and is one of the conference’s General Co-chairs. 

Businesses and institutions are turning to AI to help automate decision-making in everything from hiring practices to medical diagnosis. Yet, in leveraging the technology to make processes more efficient, they run the risk of inadvertently introducing new forms of discrimination into these practices. “As AI and other technologies become increasingly embedded in our day-to-day lives, it’s critical for businesses and the good of society as a whole that we ensure our models don’t inadvertently incorporate latent stereotypes and prejudices,” says Zemel.

The conference includes keynote speakers and tutorials. It also includes sessions where researchers and practitioners from all academic disciplines and different communities of practice, including journalists, activists, artists, and educators, are invited to offer creative critiques of the field of fairness, accountability, and transparency.

Among the work featured at the conference are two papers by Vector researchers.  

Can You Fake It Until You Make It?: Impacts of Differentially Private Synthetic Data on Downstream Classification Fairness
Victoria Cheng, Vinith M. Suriyakumar, Natalie Dullerud, Shalmali Joshi, Marzyeh Ghassemi
Presenting Tuesday, March 9th, 5 pm – 7 pm EST; Live Q&A at 6 pm EST

Summary: Recent adoption of machine learning models in high-risk settings such as medicine has increased demand for developments in privacy and fairness. Rebalancing skewed datasets using synthetic data has shown potential to mitigate disparate impact on minoritized subgroups, but the generative models used to create this data are subject to privacy attacks. Differentially private generative models are considered a potential solution for improving class imbalance while maintaining privacy. However, our evaluation demonstrates that existing differentially private generative adversarial networks cannot simultaneously maintain utility, privacy, and fairness. This friction directly translates into loss of performance and representation in real-life settings.
Watch video >
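
For readers who want to see the pattern the summary describes, here is a minimal, illustrative sketch (not the authors’ code) of rebalancing a skewed dataset with synthetic rows drawn from a generative model trained under differential privacy. The `generator` object and its `sample` method are hypothetical stand-ins for a real DP-GAN implementation.

```python
import numpy as np

def rebalance_with_synthetic(X, y, generator):
    """Top up minority classes with synthetic rows until every class matches
    the size of the largest class. `generator` is assumed to be a generative
    model already trained with a differential privacy guarantee, so the
    privacy budget was spent at training time, not at sampling time."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [X], [y]
    for cls, count in zip(classes, counts):
        deficit = target - count
        if deficit > 0:
            # Hypothetical API: draw `deficit` synthetic rows for this class.
            X_parts.append(generator.sample(n=deficit, label=cls))
            y_parts.append(np.full(deficit, cls))
    return np.vstack(X_parts), np.concatenate(y_parts)
```

As the summary notes, the catch is that existing differentially private GANs cannot hold utility, privacy, and fairness at once, so the gains from rebalancing can be eroded by the very noise that protects privacy.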

Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings
Vinith M. Suriyakumar, Nicolas Papernot, Anna Goldenberg, Marzyeh Ghassemi
Presenting Tuesday, March 9th, 5 pm – 7 pm EST; Live Q&A at 6 pm EST

Summary: Machine learning models in health care require strong privacy guarantees to protect patient data. The current standard of anonymization often fails in practice. In such settings, differentially private learning provides a general-purpose approach to learn models with privacy guarantees. We study the effects of differentially private learning in health care. We investigate the tradeoffs between privacy, accuracy, fairness, and data that changes over time. Our results highlight steep tradeoffs between privacy and utility and models whose predictions are disproportionately influenced by large demographic groups in the training data. We discuss the costs and benefits of differentially private learning in health care.
Watch video >
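
To make the technique under study concrete, below is a minimal NumPy sketch of a single differentially private SGD (DP-SGD) update in the style of Abadi et al. (2016), a standard way to obtain the guarantees the summary refers to. The function name and hyperparameter values are illustrative, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD update: clip each example's gradient to a fixed L2 norm,
    sum, add Gaussian noise scaled to the clip norm, then average and step."""
    clipped = []
    for g in per_example_grads:
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)  # no single example can dominate the update
    noise = np.random.normal(0.0, noise_mult * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return params - lr * noisy_mean
```

The per-example clipping is what drives the paper’s “long tails” framing: it caps the influence of any single patient, so rare presentations contribute the least signal and models come to reflect the largest demographic groups in the training data, as the summary above reports.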
