AI Policy, regulation and thought leadership

Building effective AI policy and regulation

Vector is a leading voice in shaping the global conversation around AI governance and ethics. Our multi-stakeholder collaboration initiatives amplify voices across academia, industry, and government, ensuring that Canada sets a global standard for safe and inclusive AI that drives innovation while promoting equity, accountability, and human well-being. Access our insights, commentary, and resources on building effective policy and regulation:

Insights

Vector participates in numerous multi-stakeholder working groups with the WEF, the OECD, and the governments of Ontario and Canada to build robust AI governance mechanisms.

Vector pioneered AI Trust and Safety Principles

At the heart of Canada’s AI community, the Vector Institute developed six trust and safety principles for AI, released in June 2023. These foundational principles aim to guide global organizations in creating responsible AI policies, affirming Canada’s commitment to ethical AI leadership.

World Economic Forum – Guiding Principles for ethical AI

Vector collaborates with the WEF through their AI Governance working group (represented by Tony Gaffney, President and CEO of Vector) and the AI Safety and Technology Technical working group (represented by Jeff Clune, Faculty member at Vector). Through these groups, Vector worked with collaborators to deliver guidelines and frameworks designed to steer AI development toward beneficial societal impacts.

In addition to Vector’s formal representation, individual members of the Vector community, such as Faculty member Gillian Hadfield, also contribute their expertise to the AI Governance Working Group.

The Organisation for Economic Co-operation and Development (OECD)

AI Data and Privacy Working Group

Nicolas Papernot represents Vector as a contributor, supporting dialogues with the OECD on policy synergies in AI, data, and privacy. The rapid evolution of AI necessitates increased international cooperation in data governance and privacy to ensure that opportunities and challenges are managed effectively across global jurisdictions. https://oecd.ai/en/wonk/expert-group-data-privacy

Expert Group on AI Futures

Graham Taylor represents Vector as a contributor to the Expert Group on AI Futures, providing insights to the OECD on the future of AI and equipping governments with the knowledge and tools necessary to develop forward-looking AI policies.

Regulation and standards

Vector researchers were among the leading scholars who authored a consensus paper on the risks of advanced AI systems, along with an urgent call to action to advance AI safety research and effective government oversight.

Strategic Framework for AI Safety: IDAIS-Beijing (2024)

On March 10–11, 2024, global AI leaders met at IDAIS-Beijing to strategize on AI safety and drafted a consensus statement on preventing high-risk AI development. Chief Scientific Advisor Geoffrey Hinton and Vector Faculty member Gillian Hadfield took part in these dialogues and contributed to the Beijing statement.

Vector researchers at SRI are working with the Certification Working Group (CWG) to chart the future of AI certification (2024)

The CWG’s latest report, produced in collaboration with SRI and several Vector researchers, lays out actionable recommendations for building a robust certification ecosystem that ensures AI systems are responsible and ethical. It advocates for clear governmental objectives and collaborative efforts to establish standards and foster market demand for certified AI technologies. This initiative marks a critical step toward building trust in AI through innovation, transparency, and regulated conformity to societal values. Read the report here.

Managing AI Risks (2023)

Ahead of the UK AI Safety Summit (November 2023), Vector researchers played a significant role in creating a consensus paper (October 2023), “Managing AI Risks in an Era of Rapid Progress,” which outlined the risks posed by upcoming, advanced AI systems and informed the dialogues at the summit.

SRI and Vector Institute consult on Ontario’s Trustworthy Artificial Intelligence Framework (2021)

As the use and complexity of AI outpace global regulatory efforts, Ontario launched an initiative toward an adaptable, rights-based AI framework, engaging experts from Vector, including Gillian Hadfield and former Vector CEO Garth Gibson. Vector provided the government with governance strategies that foster innovation and trust through continuous learning. Read the response and framework here.

Governance

In collaboration with industry and government, Vector contributes to shaping principles and standards for the safe and responsible use of AI. Vector sits on a number of working groups in this area.


How to safely implement AI systems


Trustworthy AI Themes for Business from the Vector Community

Generative AI for Enterprise: Risks and Opportunities


Safe AI implementation in health: Why the right approach matters

Vector’s Trustworthy AI series

Sign up for Vector’s Partners Portal

Did you know that Vector’s Partners Portal is a central hub for our industry partners? It offers premium content and exclusive resources that provide actionable information and tools to enhance your AI knowledge and skills.

Part of our community?