AI’s growing adoption in business brings with it a requirement to ensure that deployed models are reliable and safe for end users. This is the domain of Trustworthy AI, a research topic and set of business practices focused on AI governance, risk management, model monitoring, validation, and remediation. It’s a topic that all companies building or adopting AI should become familiar with.

While specific practices around safe and responsible AI adoption may vary from one organization to the next, consistent overall themes and best practices have emerged. We’ve asked leading researchers and professionals in Vector’s community to identify key Trustworthy AI themes and offer guidance to business leaders tasked with implementing AI in their organizations.

Fairness: Guarding against bias

“The facets of Trustworthy AI that come up most are related to fairness,” says Graham Taylor, Research Director at Vector. “We can also tie bias to fairness. When we’re talking about business, we should identify sources of bias that could make AI systems unfair.”

Bias is often cited as a key concern related to responsible AI system management, particularly in cases where AI models make predictions that may affect people’s opportunities or lived experiences. Taylor says, “It could be a decision about a job or admitting someone to graduate school, but it could be more trivial – it could be recommending them a YouTube video or a product to purchase. But all of those decisions – whether they’re big things like career moves or small things like what’s for dinner – they affect us in different ways.”

Foteini Agrafioti, Head of Borealis AI research institute at RBC agrees that when it comes to AI, fairness considerations are paramount. “Bias usually comes from the training data set being skewed toward one particular cluster,” she notes, but can also result from “incorrect design of some algorithms.” Of course, not all algorithms are designed in a way that leads to bias, but regardless of the cause, businesses have an obligation to ensure that AI systems don’t inadvertently discriminate against individuals and groups through their predictions.
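One way teams probe for this kind of skew is with a simple group-fairness check. The sketch below is illustrative only – the loan-decision records and group labels are made up – and computes the gap in favourable-outcome rates between groups, a rough demographic-parity measure:

```python
# Illustrative sketch only: records, groups, and decisions are hypothetical.
def demographic_parity_gap(records):
    """Largest difference in favourable-outcome rates between groups.

    records: list of (group_label, decision) pairs, decision in {0, 1}.
    """
    by_group = {}
    for group, decision in records:
        by_group.setdefault(group, []).append(decision)
    rates = {g: sum(d) / len(d) for g, d in by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions split by a demographic attribute:
records = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(records)
# A gap of 0.5 here means group A's approval rate is 50 points above group B's.
```

A large gap doesn’t prove discrimination on its own, but it flags where a deeper look at the training data and model is warranted.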

Agrafioti notes that further education on fairness can be found on Borealis AI’s RESPECT AI platform, which Borealis AI established to raise awareness about responsible AI adoption and to share open-source code, research, tutorials, and other resources with the AI community and with business executives looking for practical advice and solutions.

Explainability: How transparent should a model be?

Explainability is a topic near and dear to Sheldon Fernandez. Fernandez is the CEO of DarwinAI, a company that provides explainable AI solutions for AI systems that perform visual inspections in manufacturing. He explains that one challenge with some complex AI systems is that “they can essentially be black boxes to their owners. Even the experts that designed these systems sometimes may have very little insight into how the systems reached their decisions.”

Explainability is a prominent topic in debates about AI and trust. Put simply, it revolves around one question: How important is it to understand how and why a model arrives at its predictions?

It’s widely known that AI models may uncover patterns in data too subtle for humans to pick up on. While their predictions may be highly accurate, the reasoning that led to those predictions may not be intuitive to, or explainable by, a system’s users. They may see what a model did, but they can’t understand why. This can set up a dilemma about what’s more conducive to trust: an opaque model with maximum accuracy, or an explainable model that’s less performant? Put more simply, what will serve stakeholders best: accuracy or transparency?
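A middle ground many teams reach for is a post-hoc explanation of an otherwise opaque model. As a rough illustration – the toy model and data below are invented, not from any production system – permutation importance asks how much a model’s outputs move when one input feature is scrambled:

```python
import random

def model_score(applicant):
    # Toy stand-in for an opaque model; in practice the weights aren't visible.
    return 0.9 * applicant["income"] + 0.05 * applicant["height"]

def permutation_importance(model, rows, feature, trials=50, seed=0):
    """Average change in model output when one feature's values are
    shuffled across rows; a larger value means the feature matters more."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        shuffled = [r[feature] for r in rows]
        rng.shuffle(shuffled)
        perturbed = [dict(r, **{feature: v}) for r, v in zip(rows, shuffled)]
        total += sum(abs(model(r) - model(p))
                     for r, p in zip(rows, perturbed)) / len(rows)
    return total / trials

rows = [{"income": 0.9, "height": 0.2},
        {"income": 0.1, "height": 0.8},
        {"income": 0.5, "height": 0.5}]
income_importance = permutation_importance(model_score, rows, "income")
height_importance = permutation_importance(model_score, rows, "height")
```

Here the scrambled-income runs move the score far more than the scrambled-height runs, surfacing which input the opaque model actually leans on, without exposing its internals.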

Fernandez offers a guideline. Explainability is key “in contexts where there’s a critical operation being done that’s connected to human welfare,” he says. “If an AI [model] is going to decide how to fly an autonomous helicopter and there are people inside it, of course explainability is going to be important. If it’s going to decide who gets into a certain college or not or whether somebody gets a mortgage, absolutely it is important. Let’s look at another scenario where it determines the books you’re recommended on Amazon when you browse. Yes, it’d be good to understand explainability there, but it’s not imperative there the way it is in those previous cases.”

In contexts involving important health care, employment, justice, and financial decisions, the people affected often rightly expect to understand how and why decisions are made, and business and technical leaders should take appropriate steps to meet this expectation.

Safety: Protecting the physical

Increasingly, AI systems are interacting with physical environments through applications like industrial robotics or autonomous vehicles. As they do, a new priority for Trustworthy AI comes into focus: safety.

Often underpinning these applications is reinforcement learning, an AI technique that differs from supervised learning in that it doesn’t learn through up-front training on a labelled dataset. Instead, a reinforcement learning model begins with a goal, then explores a defined environment, takes goal-oriented actions, and learns from the consequences of those actions, iterating its way toward an optimal strategy essentially from scratch.

It’s in climbing this learning curve that hazards can arise. Vector’s Graham Taylor explains: “The space of actions that the robot might take during that learning phase is not constrained by some rule structure that a person wrote and explicitly has constrained. There’s randomness in all reinforcement learning processes, and safety issues arise from effectively taking random actions. It could be safety in the sense of robotics learning to do some task, taking some action, and interfering with someone. But it could also just be damage to the machine itself ― like the robot takes certain actions that destroys itself. Nobody deploying these systems wants to destroy very expensive infrastructure as it goes through the learning process.”
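One common mitigation is to mask unsafe actions out of the exploration step, so that even random exploration stays inside a safe set. A minimal sketch follows, with a made-up action set and safety rule standing in for a real robot’s constraints:

```python
import random

# Hypothetical action set and safety rule; a real system would derive
# these from the robot's physical and operational constraints.
ACTIONS = ["slow", "fast", "reverse", "shutdown"]

def is_safe(action, state):
    # Illustrative rule: never move fast near an obstacle.
    return not (action == "fast" and state["near_obstacle"])

def choose_action(q_values, state, epsilon=0.2, rng=random.Random(0)):
    """Epsilon-greedy selection restricted to the safe action set."""
    safe = [a for a in ACTIONS if is_safe(a, state)]
    if rng.random() < epsilon:
        return rng.choice(safe)   # random exploration, but only among safe actions
    return max(safe, key=lambda a: q_values.get(a, 0.0))

state = {"near_obstacle": True}
q = {"fast": 1.0, "slow": 0.5}   # unconstrained greedy choice would be "fast"
picked = [choose_action(q, state, rng=random.Random(i)) for i in range(100)]
# "fast" never appears in picked, no matter how the agent explores.
```

The randomness Taylor describes is still there – the agent explores – but the safety rule bounds what that randomness can do.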

Where AI systems are interacting with physical environments or operating with unconstrained randomness, mitigations against safety risks should be part of governance practices.

Privacy: Anonymization may not be enough

Privacy and trust often go hand in hand, and AI can introduce new challenges here.

“We used to think that we can solve for privacy if we take a data set and just anonymize it – just remove the names and direct identifiers for the involved individuals,” says Agrafioti. “However, for modern machine learning and deep learning systems in particular, where the large volumes of data are used for training, models can uncover a lot of detail about a person – or even identify them outright – even when features directly linked to identity are removed. If you’ve got a lot of information about an individual, you could reverse-engineer and identify who that is, even though you don’t have the name.”
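The re-identification risk Agrafioti describes can be shown with a toy linkage attack. In this fabricated example, a few quasi-identifiers are enough to single out one record even though names were removed:

```python
# Fabricated records: names stripped, but quasi-identifiers remain.
anonymized = [
    {"zip": "M5V", "age": 34, "condition": "asthma"},
    {"zip": "M5V", "age": 61, "condition": "diabetes"},
    {"zip": "K1A", "age": 34, "condition": "flu"},
]

# Auxiliary knowledge an attacker might hold from a public source,
# known to describe one specific person:
known = {"zip": "M5V", "age": 34}

matches = [r for r in anonymized
           if all(r[k] == v for k, v in known.items())]
# A unique match reveals the person's condition despite "anonymization".
```

With rich modern datasets, far more combinations of seemingly innocuous attributes become uniquely identifying in just this way.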

It’s incumbent upon company leaders to ensure that their data management and privacy practices keep up with evolving AI technology, and that business and technical teams work together to put proper practices, policies, and tooling in place to identify and mitigate these risks. Some highly regulated industries, like finance, already have established best practices for privacy, risk, and data management, but not all industries have operated under the same level of scrutiny. Globally, governments are paying attention to these issues and rolling out legislation and policies to address the gaps.

Privacy considerations may differ between applications, and different methodologies – including differential privacy, federated learning, homomorphic encryption, or synthetic data generation – can help address them.
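As a sketch of one of these techniques, differential privacy adds calibrated noise to aggregate queries so that no single record can be confidently inferred from the output. The epsilon value and records below are illustrative only:

```python
import math
import random

def dp_count(records, predicate, epsilon, rng):
    """Differentially private count: the true count plus Laplace(1/epsilon)
    noise (the sensitivity of a counting query is 1)."""
    true_count = sum(1 for r in records if predicate(r))
    # Inverse-CDF sampling of the Laplace distribution, scale = 1/epsilon.
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return true_count + noise

# Hypothetical dataset: the true count of ages >= 40 is 3.
records = [{"age": a} for a in (25, 34, 41, 52, 67)]
noisy = dp_count(records, lambda r: r["age"] >= 40,
                 epsilon=1.0, rng=random.Random(42))
```

The query stays useful in aggregate, while the noise bounds how much any one individual’s presence can shift the answer; smaller epsilon means more noise and stronger privacy.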

Prioritize values and governance when applying AI

A key part of Trustworthy AI involves establishing an effective AI-specific governance framework with elements like fairness, explainability, and safety properly integrated.

According to Deval Pandya, Director of AI Engineering at the Vector Institute, designing this governance framework begins with values. Pandya says, “Trustworthy AI has two major components. The first is the ethical requirements. This manifests as the question: What are the human values and corporate values? The second piece is the technical requirement. How [do those values] manifest in the technology that we’re using?” These values – like avoiding discrimination, prioritizing transparency, and protecting privacy – provide constraints that should shape how organizations make decisions about their AI use.

“Most businesses have some way of validating and stress testing [traditional, non-AI] models before they put them into production,” Agrafioti says. “Modern machine learning systems have really challenged how organizations have been doing this traditionally. There’s a new set of considerations one would have to explore when looking to govern and validate machine learning models.”

One example of a new consideration is model drift, which refers to the degradation of model performance when data distribution patterns change from the historical patterns used for training. Agrafioti explains, “[Models] change over time – they’re not static. A model can completely change its behaviour from the day you put it into practice. One of the challenges of AI is you’ve got to have continuous governance and tests of it.” Business and technology leaders must be aware of these new issues to ensure reliable and trustworthy AI use over the long term.
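In practice, drift monitoring often compares a live feature’s distribution against its training baseline. The sketch below uses the population stability index (PSI); the bucket edges, data, and the commonly cited 0.2 alert threshold are illustrative rules of thumb, not standards:

```python
import math

def psi(baseline, live, edges):
    """Population Stability Index over shared buckets; higher = more drift."""
    def frac(values, lo, hi):
        n = sum(1 for v in values if lo <= v < hi)
        return max(n / len(values), 1e-6)   # floor to avoid log(0)
    score = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, l = frac(baseline, lo, hi), frac(live, lo, hi)
        score += (l - b) * math.log(l / b)
    return score

# Hypothetical feature values: training baseline vs. two live windows.
edges = [0, 25, 50, 75, 101]
baseline = [10, 20, 30, 40, 55, 60, 70, 80, 90, 45]
stable   = [12, 22, 33, 41, 52, 63, 72, 81, 88, 47]
shifted  = [70, 75, 80, 85, 90, 95, 72, 88, 91, 99]

drift_ok  = psi(baseline, stable, edges)    # near zero: no alarm
drift_bad = psi(baseline, shifted, edges)   # well above 0.2: investigate
```

Running a check like this on a schedule is one concrete form the “continuous governance” Agrafioti describes can take.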

These considerations also apply to companies that are using AI products from third-party vendors. To cover their bases regarding Trustworthy AI when procuring products, organizations should communicate their priorities to vendors and take steps to validate that those vendors are living up to them. Pandya says, “You have to share what the principles of Trustworthy AI are for your organization. Then, there are certain things that are easy to evaluate, things that can be assessed very objectively, like privacy protocols, and others, like fairness and bias, [which] will need much more thought and effort to evaluate and implement.”

Agrafioti agrees: “[Companies adopting AI] should 100% worry about any off-the-shelf black box product. Do your own deep technical due diligence on these products. Talk to your vendor about this. Ensure that the models have been validated on really large datasets in an environment like your own, because the domains can be so different – models can misbehave in one and then behave really well in another. You want to make sure that in your space, it works as anticipated.”

Governance frameworks and proper risk management should also address the issue of accountability for when things don’t go as planned. Taylor describes the problem: “I’ve often heard machine learning systems be criticized for being difficult to account for when things go wrong. I know that legal experts who look at AI worry about these sorts of things. There’s a catastrophe and it involves a machine learning system. Where do you put the blame? The person collecting data? The person training the model? The person who wrote the open source code used to train the model?”

These questions, and their answers, need to be contemplated prior to deployment.

Collaboration: The antidote to uncertainty

One way that business leaders can get up to speed on Trustworthy AI and governance practices is through collaboration with others in the AI ecosystem. When it comes to tackling these new challenges, “it will take a village,” says Agrafioti. “Companies should collaborate since even a single instance of AI use that harms users can have an adverse impact on the industry.”

In Ontario, opportunities for collaboration abound. Borealis AI’s RESPECT AI platform, mentioned earlier, freely shares RBC’s expertise on explainability, fairness, robustness, privacy, and governance with the broader business community through downloadable code, toolboxes, and articles.

The Vector Institute also makes collaboration a priority. Vector operates industry projects – like the Trustworthy AI and Accelerate AI projects – which bring its industry sponsors together with Vector researchers to tackle important application-related challenges, including those related to AI governance. For growing companies, Vector offers the FastLane program, which delivers insights from Vector projects to small and medium businesses that are using AI today or want to take the step from traditional data analytics to AI in the near future.

Taylor explains, “Vector wants to be a neutral agent that gives unbiased expertise and support to these organizations ― particularly smaller ones involved in the FastLane program ― and put them in touch with specific experts that can assist them with audits or assessing the legitimacy of code bases or just helping them make decisions.”

The transformative use cases, productivity gains, and new efficiencies that AI applications can deliver are exciting, but user welfare and trust are crucial. Whether big or small, building or adopting, companies using AI need to recognize and govern the novel issues related to trusted and ethical AI – especially as the field evolves.

Engaging with the AI community is one of the best ways to do this, and to ensure the promise of AI is realized, fully and responsibly.

To learn more about Vector’s work with industry, including on Trustworthy AI, visit Vector’s Industry Partner page here.
