Bill C-27 is a call to action
January 9, 2023
Industry participants’ perspectives in this piece were gathered during Vector Institute’s Managing AI Risk Thought Leadership project in the fall of 2022.
Canada’s first proposed legislation for the commercial use of AI, the Artificial Intelligence and Data Act (AIDA), was tabled in June 2022. As part of Bill C-27, the AIDA was packaged with two pieces of privacy legislation: the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act (PIDPTA).
The AIDA follows other recent proposals for AI regulation, including the European Union’s Artificial Intelligence Act (2021) and the United States’ Algorithmic Accountability Act of 2022. However, while these two acts are comprehensive, Canada’s takes a more schematic approach, referring to further regulations that are yet to be developed.
The goal of the AIDA is evidently to establish a chain of responsibility for AI systems – particularly “high impact” systems, or those that can potentially cause considerable harm to consumers. Individuals and organizations deemed responsible for AI systems will be required to anonymize data, assess and mitigate risks, and publish information about high-impact systems, and they will be subject to penalties for not following these requirements.
When the AIDA appeared, it garnered a mixed reaction from Canada’s AI community. Driving some of the concern is that the Bill proposes fines for non-compliance but does not clearly articulate what compliance would mean or require, deferring to further regulations yet to be developed.
To explore the AIDA’s approach and unpack these concerns, Vector made Bill C-27 a topic in our fall 2022 Managing AI Risk Thought Leadership project. This project brought together contributors to Canada’s AI ecosystem from several sectors, including representatives from regulators, governments, and companies both large and small. During discussions on the AIDA, participants voiced their apprehension and views on the opportunities that the Bill affords, sharing a productive array of ideas on how to detail and improve this draft legislation in ways that would benefit all Canadians.
Many participants in Vector’s Managing AI Risk project held the view that the AIDA’s ambiguity and proposed penalties introduced considerable risks to anyone involved in the development, distribution, or use of AI systems, potentially putting a chill on AI innovation in Canada. The Bill frames AI as a risk, with no corresponding recognition of its opportunities. Knowing there will be penalties, but not knowing exactly what the rules will be, was generally understood to make investing further resources in AI a difficult proposition. Canada could be seen as making a statement that AI has become too risky to pursue.
Among the top concerns Vector heard with wording in the Bill was that the definition of an “AI system” – the very thing the AIDA sets out to regulate – is unclear. Similarly, there were concerns that the Bill does not provide criteria specifying the nature of “high-impact” AI systems that carry the greatest risks of harm.
The AIDA’s definition of “person responsible” for AI systems – organizations and individuals obligated to follow the regulations and subject to penalties – also set off alarm bells for participants. Among the concerns was that the broad definition of “person responsible” appears to include every company and individual involved in any way in collecting data or in coding, engineering, training, distributing, and using AI systems. By creating a vast network of potential persons responsible, the Bill leaves unclear where the responsibilities of one person end and those of another begin. Participants also noted that the AIDA does not specify the conditions under which individuals working at organizations (as opposed to the organizations themselves) may be held responsible for harms related to the use of AI systems.
Other key requirements that participants noted need more clarification include assessing risk, or the likelihood of a system causing material harm; instituting fairness (i.e., the avoidance of biased outputs); establishing practical standards for data anonymization; and specifying the AIDA’s jurisdictional limit to international and interprovincial commercial activity.
Beyond the AIDA’s lack of clarity, participants in Vector’s Managing AI Risk project expressed unease about, or uncertainty over, how federal AI regulations will be administered. For example, the Bill proposes to give a single federal authority the power both to determine noncompliance with regulations and to issue financial penalties. Some participants took the view that separating these two functions into two bodies might lead to a more accountable regulatory environment.
Lastly, although the AIDA proposes to scale penalties to be appropriate for the size of an organization, some participants worried that the risk of penalties will make it significantly harder for startups and smaller companies to attract investors, while creating new advantages for large, international technology firms that can afford to pay Canadian fines or challenge them with litigation. New firms may choose to incorporate in another jurisdiction entirely, where the rules are clearer, as opposed to taking on the risk of operating without knowing what future regulations will impose. The burden of legislated reporting mechanisms also appeared to participants to fall more heavily on smaller organizations than on large, international firms that are equipped to handle such requirements.
One way forward that some participants discussed was to ask the government to separate the AIDA from Bill C-27. This would allow CPPA and PIDPTA – two important updates to Canada’s privacy legislation – to pass while the AIDA is revised with the same level of care that shaped the rest of the Bill.
Regardless of whether the AIDA remains part of Bill C-27, stakeholders in Canada’s AI ecosystem demonstrated that they have much to contribute to conversations about next steps. Participants recognized a general interest among Canadian AI experts and practitioners to provide the federal government with feedback on the AIDA and specific ideas about AI regulations that will serve Canada’s best interests.
One such recommendation from a Vector industry sponsor is to align the AIDA’s definition of AI systems with those embraced by the Organization for Economic Co-operation and Development (OECD) and the European Union’s AI Act. Another recommendation was to follow the EU AI Act’s approach to distinguishing responsibilities of different stakeholders and specifying the criteria for high-risk systems. This harmonization of definitions would provide Canada’s AI ecosystem with considerable legal clarity and compatibility while also paving the way to greater collaboration and trade with EU partners.
Some participants also recommended adjusting the regulations for startups and other small- and medium-sized businesses to make it easier for them to achieve the AIDA’s objectives, while others advocated for a careful assessment of how the Bill fits within the larger Canadian and international regulatory environment.
From the perspective of the project participants, addressing these and related issues in the current draft AIDA is vitally important for developing effective and practical AI regulations. Participants saw an opportunity for Canada’s AI ecosystem to step up, get involved, create standards, and make important conversations happen within and across industries.
Join our mailing list to participate in the next Vector Thought Leadership project and discover the benefits of participating in conversations like this as an industry sponsor.