World-leading AI Trust and Safety Experts Publish Major Paper on Managing AI Risks in the journal Science
May 21, 2024
– Showcases Unparalleled AI Trust and Safety Expertise in Canada, including five experts from the Vector Institute
– Provides important and timely recommendations as world policy and business leaders gather in South Korea for the AI Safety Summit
Toronto, May 21 – Five Vector researchers are among the 25 co-authors of a paper calling for new governance and R&D priorities for advanced AI systems. “Managing Extreme AI Risks Amid Rapid Progress” was published in the journal Science. The release coincides with the 2024 AI Safety Summit, held in Seoul, South Korea on May 21 and 22, which brings together global business, technology, and policy leaders to discuss AI governance frameworks.
Vector Institute Chief Scientific Advisor Geoffrey Hinton, Vector Faculty Members Jeff Clune, Gillian Hadfield, Sheila McIlraith, and Vector Faculty Affiliate Tegan Maharaj are among the consensus paper’s co-authors. In it, they outline the potential risks of advanced AI systems, proposing priorities for AI R&D and governance to prevent social harms, malicious uses, and the loss of human control over AI systems.
“Congratulations to our colleagues for this timely piece published in Science,” says Daniel Roy, Vector Research Director and Faculty Member. “As global leaders and policymakers gather in Seoul for this week’s AI Safety Summit, I have no doubt that this scholarship will provide a firm foundation for productive conversations and their recommendations will offer an excellent starting point for forward progress.”
The paper details the rapidly accelerating pace of AI development as companies race to build systems that match or exceed human abilities. Such a breakthrough could produce great benefits for society, but it could also create new large-scale risks, including social injustice, criminal activity, mass surveillance, automated warfare, and loss of human control. The authors warn that, if left unchecked, AI advancement could lead to major harm and even extinction.
An earlier draft, titled “Managing AI Risks in an Era of Rapid Progress,” was released on the open-access pre-print site arXiv last November. It generated significant media attention and calls for action.
“Canada, and Vector in particular, has exceptional expertise in AI Trust and Safety,” says Tony Gaffney, President and CEO of the Vector Institute. “We are home to some of the leading researchers in the world who provide clear guidance on this topic in this consensus paper. Their recommendations will be on the table in South Korea when I attend the Global AI Forum. In particular, governance best practices are essential to ensuring that what AI enables can be trusted, safe, and aligned with human values.”
Launched in 2017, the Vector Institute works with industry, institutions, startups, and governments to build AI talent and drive research excellence, developing and sustaining AI-based innovation that fosters economic growth and improves the lives of Canadians. Vector aims to advance AI research; increase adoption in industry and health through programs for talent, commercialization, and application; and lead Canada toward the responsible use of AI. Programs for industry, led by top AI practitioners, offer foundations for applying AI in products and processes, company-specific guidance, training for professionals, and connections to workforce-ready talent. Vector is funded by the Province of Ontario, the Government of Canada through the Pan-Canadian AI Strategy, and leading sponsors from across multiple sectors of Canadian industry.
For media inquiries: media@vectorinstitute.ai