Defining AI’s guardrails: a PwC-Vector Fireside Chat on Responsible AI

August 6, 2021

As artificial intelligence (AI) adoption in business and industry rises, questions about risks and AI-specific governance are coming to the fore – and answering them is proving to be a tall order. Indeed, recent PwC research from 2021 found that only 12% of companies “had managed to fully embed AI risk-management and controls and automate them sufficiently to achieve scale.” [1]

On July 15th, Annie Veillet, a Partner in PwC Canada’s One Analytics consulting practice, joined Ron Bodkin, Vector Institute VP of AI Engineering & CIO and Schwartz Reisman Institute Engineering Lead, for a fireside chat on Responsible AI, a concept that Veillet describes as “the guardrails” meant to ensure AI systems do “what they’re supposed to do, act ethically, and act fairly.” In a conversation moderated by Vector’s Director of Technical Education, Shingai Manjengwa, Veillet and Bodkin addressed why Responsible AI is a crucial topic for companies using AI systems today and how to start thinking about the technology’s unique risks and challenges in the absence of widely accepted standards.

What is Responsible AI?

Responsible AI refers to a set of principles and practices that guide the ethical development and use of AI systems. PwC’s A practical guide to Responsible Artificial Intelligence (AI) provides a succinct description:

“[O]rganizations need to make sure that their use of AI fulfills a number of criteria. First, that it’s ethically sound and complies with regulations in all respects; second, that it’s underpinned by a robust foundation of end-to-end governance; and third, that it’s supported by strong performance pillars addressing bias and fairness, interpretability and explainability, and robustness and security.”[2]

Each of these five key dimensions merits a brief explanation:

  • Ethics and regulations. Organizations should ensure that they develop and use AI in a way that complies with applicable government laws and standards, and that is acceptable according to the values of the organization, the industry in which it operates, and broader society.
  • End-to-end governance. Model development and use carry risks for organizations and their stakeholders. A framework of controls designed to ensure that organizations comply with regulations, assign accountability, and safeguard against adverse outcomes can mitigate these risks. Because of AI’s novelty, complexity, and unique characteristics, AI model governance may require different controls from those used to govern traditional software, including controls that monitor the lifecycle of AI models as their execution environment changes.
  • Bias and fairness. AI systems trained on biased datasets may produce biased predictions. Without technical safeguards or humans-in-the-loop with a duty to catch and correct for this, these predictions can inform discriminatory decisions that unfairly harm people. Determining what is ‘fair’ is no simple task: there are many social and technical definitions, some of which conflict with one another (a minimal illustration follows this list).
  • Interpretability and explainability. Both terms relate to degrees of understanding and transparency into how an AI system operates. In basic terms, interpretability refers to the degree to which we can understand how a system arrives at its prediction, while explainability refers to the degree to which we can understand why it arrives at its prediction in the manner it does. Depending on the use case, interpretability and explainability can play key roles in maintaining trust and safety.
  • Robustness and security. Robustness is the degree to which an AI system maintains its performance when using data that is different from its training dataset. A divergence from expected performance should be investigated, as it may signal human error, a malicious attack, or unmodeled aspects of the environment.
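To make the tension between fairness definitions concrete, here is a minimal, hypothetical sketch in Python. The data, group names, and helper functions are invented for illustration and are not drawn from the chat or from PwC’s toolkit; the point is simply that when two groups have different underlying positive rates, even a perfect classifier can satisfy one common definition (equal opportunity) while violating another (demographic parity).

```python
# A minimal, hypothetical illustration: invented data, not a real system.
# Each record is (group, true_label, model_prediction). The "model" here is a
# perfect classifier, but the two groups have different underlying positive rates.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),  # group A: 50% positives
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 0),  # group B: 25% positives
]

def selection_rate(group):
    """Demographic parity compares the share of each group predicted positive."""
    preds = [p for g, _, p in records if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Equal opportunity compares the share of truly positive members predicted positive."""
    hits = [p for g, y, p in records if g == group and y == 1]
    return sum(hits) / len(hits)

for g in ("A", "B"):
    print(f"group {g}: selection rate {selection_rate(g):.2f}, "
          f"true positive rate {true_positive_rate(g):.2f}")

# Both groups have a true positive rate of 1.00 (equal opportunity is satisfied),
# yet selection rates are 0.50 vs. 0.25 (demographic parity is not). Enforcing
# one definition here would mean giving up the other.
```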

An underestimated cause and consequence of AI risk

Veillet and Bodkin opened by describing a major risk organizations should keep in mind when implementing AI systems, along with a factor they’ve repeatedly seen exacerbate it.

AI and reputational risk. “We’ve all seen stories in the media of AI gone wrong,” Veillet said. When novel AI systems are used to support decisions that affect people’s livelihood or well-being – decisions about loan qualification, health insurance rates, or hiring – unintended and unaccounted-for biases or other negative outcomes can harm customers, patients, employees, and other stakeholders.

An organization employing these systems can also suffer significant damage. Veillet explained, “If an AI system is not guard-railed, if it acts in a rogue manner, it has consequences not only on the public and that organization’s customers, but also on the organization’s reputation.” This can expose an organization to liability, losses in brand and market value, and strong regulatory responses.

Unclear accountability. Contributing to that risk is a gap at many organizations over who is responsible for AI system outcomes. “The worst-case scenario is all too common,” Bodkin said. “You have no one at the wheel of this car that’s zooming ahead on the road.”

Bodkin described how this gap develops, saying: “Data scientists, engineers, and development teams feel like they’re executing a business objective that they’ve been told, and are optimizing for the objectives they have. And all too often you see business leaders that don’t really understand what’s going on with AI, and don’t feel like they know how to influence the outcomes or give concrete direction.” The result, he said, is that no one feels responsible for working out the risks, considering them against the benefits of using the system, and developing mitigation strategies.

Ownership of AI system risks can be complicated, according to Veillet, but one thing is certain: it doesn’t reside solely with technical teams. She explained, “I don’t think it’s a straight answer about who’s responsible, but I’m going to say that it’s not solely the data scientists that built the machine.”

AI system use should be moored to an organization’s broader strategy, industry standards related to risk and ethics, and compliance practices, with input from relevant professionals for each – but often isn’t. “We’re seeing gaps tying AI controls back to things like an organization’s strategy,” Veillet said. “So how are we sure that this machine is acting and behaving according to this organization’s strategy? Is it following the rules and regulations of that organization?” Without explicit accountability, organizations are leaving AI risk up to chance – to the potential detriment of end-users and the organization itself.

Where organizations can start with Responsible AI

With reputational risk and accountability issues noted, Veillet and Bodkin shared principles for thinking about Responsible AI that apply generally across industries and sectors.

Top-down sponsorship and bottom-up education. Bodkin said that responsibility for AI outcomes should exist throughout an organization, with “top-down sponsorship and bottom-up education.” Determining which governance mechanisms are appropriate requires clear principles, and Bodkin said, “Articulation of values and governance has to start from the top.”

Veillet agreed: “It goes top down and bottom up. We’re seeing lots of shared work being done on shaping some internal policies at organizations and really coming up with best practices and guidelines when it comes to the use of AI.” For roles below the executive, Veillet explained that organizations should avoid consigning accountability to some specialized internal group, and that ensuring AI systems are employed responsibly should be “a citizen-led effort.”

“You don’t need to be a data scientist to understand important Responsible AI concepts” and to play a role in applying them, Veillet said. Encouraging education about AI systems throughout the organization should be a priority.

Ongoing monitoring. “Part of governance and control when we’re working with organizations is maintenance,” Veillet said. An important aspect of AI system maintenance is continuous monitoring for unintended consequences. In contrast to conventional software development principles that emphasize one-time up-front system reviews, organizations should consistently monitor AI systems as they’re used, recognizing that the novelty and complexity of these systems means that not all outcomes can be anticipated. Furthermore, in cases where trade-offs may need to be made – for instance, between a system’s performance and level of explainability – determining the right balance between risk and benefit may be difficult to do up front. It may be that only through continued monitoring and learning can an optimal compromise be developed.

Additionally, continued monitoring can help ensure that a system designed in today’s context continues to operate as expected in the future. Veillet said that organizations using AI should ask: “How are we controlling and checking that the machine hasn’t drifted – that it’s still doing what it’s meant to be doing?” When circumstances in an economy or population change, data distributions can change along with them in a phenomenon called dataset shift, and this can affect the predictive quality of an AI system. Monitoring for it and correcting it when it occurs should be regular parts of Responsible AI practices.
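As one concrete illustration of this kind of drift check, here is a minimal Python sketch using the Population Stability Index (PSI) to compare a production feature’s distribution against its training baseline. The feature, sample data, bin count, and 0.2 alert threshold are illustrative assumptions, not anything prescribed in the chat or by PwC.

```python
# A minimal drift-check sketch using the Population Stability Index (PSI).
# The feature name, sample data, and threshold below are hypothetical.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare two samples of one feature; larger values indicate more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid division by zero / log of zero for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical data: production values have shifted relative to training.
rng = np.random.default_rng(0)
training_income = rng.normal(60_000, 15_000, size=5_000)
production_income = rng.normal(70_000, 18_000, size=5_000)

psi = population_stability_index(training_income, production_income)
if psi > 0.2:  # a commonly cited rule-of-thumb alert threshold
    print(f"PSI={psi:.2f}: significant shift detected -- trigger review or retraining")
```

In practice, a check like this would run on a schedule over each input feature and the model’s output distribution, with alerts feeding back into the governance process described above.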

Measuring multiple metrics. To build a fuller picture of AI’s impacts and pick up on unanticipated consequences, monitoring should cover more than the metric being optimized for. Bodkin said, “Even things that seem benign and that are certainly not intended to behave a certain way can have meaningful consequences.” He illustrated with a seemingly low-stakes application: optimizing clicks for serving media. “It turns out that at scale, the algorithms predicting clicks in large media platforms have had incredible impact,” he said, contributing to unexpected rises in misinformation, polarization, and addiction.

He continued: “As organizations, I think we have to do better at thinking ahead about what the risks are and how we monitor for things we didn’t expect, to mitigate when things start to go awry.”
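A sketch of what this can look like in practice: the optimized metric is reported alongside guardrail metrics the system is not optimizing for, so unexpected movement stays visible. The metric names, values, and thresholds below are hypothetical, chosen only to echo the media-serving example.

```python
# A minimal, hypothetical multi-metric monitoring report. Names and thresholds
# are illustrative, not a real platform's metrics.
from dataclasses import dataclass

@dataclass
class GuardrailMetric:
    name: str
    value: float
    threshold: float          # alert if the value crosses this bound
    higher_is_worse: bool = True

def report(optimized_ctr: float, guardrails: list[GuardrailMetric]) -> None:
    print(f"optimized click-through rate: {optimized_ctr:.3f}")
    for m in guardrails:
        breached = m.value > m.threshold if m.higher_is_worse else m.value < m.threshold
        status = "ALERT" if breached else "ok"
        print(f"  {m.name}: {m.value:.3f} ({status})")

report(
    optimized_ctr=0.042,
    guardrails=[
        GuardrailMetric("share of low-credibility content served", 0.07, 0.05),
        GuardrailMetric("average daily session length (hours)", 2.4, 3.0),
        GuardrailMetric("complaint rate per 10k impressions", 1.2, 2.0),
    ],
)
```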

Looking forward: On effective regulation and AI optimism

Veillet and Bodkin also shared a vision for what an effective government approach to AI regulation might look like: one that harnesses the strengths of the private sector to keep pace with new, complex, and rapidly changing AI models and use cases. Bodkin envisioned a way to integrate private sector mechanisms for “doing certification and audit, so that you can have standards set by the government, but then have real competition and innovation among third parties who can help certify and guarantee compliance.” This regulatory arrangement would resemble that of public market financial reporting, where accountable private sector firms audit public company financials and certify that they satisfy government standards. Such an arrangement, he explained, would also lower the costs of compliance, support continued monitoring, and enable organizations to adapt as new challenges emerge.

Finally, after a discussion focusing on risk, the speakers closed on a note of optimism that emphasized the exciting potential AI has to benefit humanity. Veillet said, “Using AI to address ESG concerns – environmental, societal, and governance – is a huge opportunity.” Bodkin added, “We’re seeing AI being used to address climate change and to provide better materials and understanding of climate change problems. And we’re seeing it used to continue to delight customers and provide better experiences in a range of industries.”

An important driver of these advances is collaboration between research institutions and the private sector. Indeed, PwC’s sponsorship of the Vector Institute is an example of this, enabling Vector’s Industry Innovation team to share research and lead hands-on industry projects such as Vector’s Trustworthy AI project and the Accelerating AI project, both of which explore responsible ways to support innovation while upholding important values and preserving privacy. In turn, experiences from these projects can help PwC refine the delivery of their Responsible AI Toolkit, the firm’s suite of customizable frameworks, tools and processes designed to support effective, safe, and beneficial use of AI in business.

“These collaborations exploring how to unlock value in a responsible way have become high on the agenda,” Bodkin said. “And I think one of the strengths of having close relationships between research and the private sector is that we can learn from each other and advance in a practical way.”

Watch the recording of the Fireside Chat on Responsible AI here.

[1] PwC Global. Jumping onto the right side of the AI divide. February 22, 2021.

[2] PwC. A practical guide to Responsible Artificial Intelligence (AI). 2019.
