Public statement from the Vector Institute

May 3, 2023

Earlier this week, the Vector Institute’s Chief Scientific Advisor, Dr. Geoffrey Hinton, expressed concern in The New York Times about the misuse of generative AI systems.

Vector’s vision and mission do not contradict Dr. Hinton’s cautions. We strongly believe that his concerns further legitimize Vector’s mission: to enable researchers advancing the responsible development of AI, and to support Canadian industry and public institutions as they acquire the people, skills, and resources to lead in the responsible use of AI for the benefit of society.

To ignore the risks associated with AI or to pause all AI research is unhelpful. As the pace of AI accelerates, learning how to deploy it responsibly and understand its full potential is the best way to guard against its misuse. 

Now is also the time for the world’s governments and institutions to enact trust principles that guide the responsible use of AI.

Dr. Hinton continues to serve as Vector’s Chief Scientific Advisor on a voluntary basis. Vector strongly supports Dr. Hinton and all members of the Vector community in their exploration and understanding of AI to the benefit of Ontario and Canada.

For more information on Vector’s work in the responsible development and adoption of AI, please see the related articles below or contact us at media@vectorinstitute.ai.

Related:


World-leading AI Trust and Safety Experts Publish Major Paper on Managing AI Risks in the journal Science

Standardized protocols are key to the responsible deployment of language models

The known unknowns: Vector researcher Geoff Pleiss digs deep into uncertainty to make ML models more accurate