ChainML, Private AI, and Geoffrey Hinton underscore the importance of responsible AI development and governance at Collision 2024

June 20, 2024

2024 News

By Natalie Richard

AI was front and centre at the Collision Conference this week. “You’re probably done hearing about it. Stop talking about AI!” joked Shingai Manjengwa, Head of AI Education at ChainML, as she stepped onto the Growth Summit stage at Collision. While talk of AI’s immense potential was everywhere, so was talk about building safe AI. “Some of you are excited,” Manjengwa said. “But some of you are afraid. AI has a trust issue.”

As AI becomes more capable with “agents” working together, it raises key questions around fairness, bias, ownership, governance, and accountability. These questions are exactly what Manjengwa and Vector Institute FastLane company ChainML are tackling by looking at a different technology. “We’ve started by exploring blockchain technology to help manage artificial intelligence agents in a fair and accountable way,” Manjengwa told the standing-room-only crowd. 

The company is leveraging blockchain, the technology behind cryptocurrencies like Bitcoin, to create a system for keeping track of what AI programs do, and a way for multiple parties to agree on how AI should be governed and managed. It also allows them to explore smart contracts — self-executing contracts with the terms directly written into code — and cryptography — a technique for securing communication — to better trace how an AI makes decisions.
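To make the idea concrete, here is a minimal Python sketch of the kind of tamper-evident, hash-chained audit log that underpins blockchain-style record keeping. The agent name and actions are hypothetical, and this illustrates the general primitive rather than ChainML’s actual system.

```python
import hashlib
import json
import time

def make_entry(agent_id, action, prev_hash):
    """Create a log entry chained to the previous one by hash."""
    record = {
        "agent_id": agent_id,
        "action": action,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Hashing each entry together with its predecessor's hash means
    # altering any past entry invalidates every hash that follows it.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def verify(log):
    """Recompute every hash to confirm no entry was altered."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

# Append-only log of (hypothetical) agent actions.
log, prev = [], "0" * 64
for action in ["fetch_data", "summarize", "send_report"]:
    entry = make_entry("agent-1", action, prev)
    log.append(entry)
    prev = entry["hash"]

print(verify(log))  # True until any entry is tampered with
```

A shared ledger built on this primitive gives multiple parties a common, verifiable record of what each agent did and when, which is the accountability property Manjengwa described.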

“You need to be intentional about exactly what data you’re using, who will have access to it, and when.”

Patricia Thaine

Co-founder and CEO, Private AI

Patricia Thaine, Co-founder and CEO of Vector FastLane company Private AI, also addressed AI trust and safety, focusing on privacy concerns with large language models.

Thaine emphasized the importance of carefully considering privacy principles when collecting and using data for AI training. “You need to be intentional about exactly what data you’re using, who will have access to it, and when,” she explained.

She then introduced the audience to the idea of removing personal information before it ever reaches third-party language model providers. To mitigate the risks of personal information being exposed through AI systems, she suggested comprehensive reviews of the data those systems process, along with controls on which teams can access different types of training data.
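As a rough illustration of that idea, the sketch below strips a few common identifier patterns from a prompt before it would be sent to an external model provider. The patterns and placeholder labels are simplified stand-ins; production de-identification tools, Private AI’s included, rely on multilingual machine-learned entity detection rather than hand-written regular expressions.

```python
import re

# Illustrative patterns only; real systems use ML-based entity detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text):
    """Replace detected personal information with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jane.doe@example.com or call 416-555-0199 about the claim."
safe_prompt = redact(prompt)
print(safe_prompt)  # Email [EMAIL] or call [PHONE] about the claim.
# Only safe_prompt ever leaves the organization for the model provider.
```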

Creating a tool that can review multiple languages and file types while meeting the requirements of numerous data protection regulations worldwide is a complex challenge. To address it, Thaine and Private AI developed PrivateGPT, a tool that identifies and removes personal information across text, audio, images, and documents. PrivateGPT has already achieved HIPAA-compliant output and earned recognition from organizations like the World Economic Forum and Gartner.

AI safety has also been top of mind for Vector Chief Scientific Advisor Geoffrey Hinton, who brought the topic to centre stage in his conversation with political commentator Stephen Marche. Their talk, delivered to a packed audience, stressed the importance of AI governance and of developing and deploying trustworthy AI to mitigate potential harms.

Like many members of Vector’s community, Manjengwa, Thaine, and Hinton are working to ensure that trust and safety in AI remain top of mind for Collision attendees. Only then can we reach AI’s true potential while mitigating its risks.

Learn how the Vector Institute is driving safe AI development and deployment 
