How to safely implement AI systems
March 20, 2024
By John Knechtel
With special thanks to EY Canada, who provided the space for Vector to host this event.
Harnessing AI to established management systems will be the linchpin of success for Canadian enterprises aspiring to global leadership in AI-driven growth.
That was the consensus view of participants at a series of Vector Institute workshops where financial services professionals discussed routes to the safe implementation of AI and machine learning (ML) systems. Since 2022, the Vector Institute’s Managing AI Risk Thought Leadership Project has facilitated a series of in-depth discussions of current challenges for private-sector adoption of AI and ML models, and helped develop Vector’s AI Trust and Safety Principles.
So how are professionals in Canada’s financial services industry — internationally esteemed for its sound risk management — thinking about safe AI implementation? Their outlook was, in a word, pragmatic. Participants took the view that precisely because AI is so powerful, adopting it calls for the adaptation, not reinvention, of normal operations.
Several people pointed out that we have seen new technologies, like PCs, spreadsheets, and the Internet, transform business before. “This is not a new problem,” said one participant.
What is new is the speed and ease with which AI is spreading “everywhere, all at once,” as one attendee put it. “It’s one of those things where everybody has a chance to go ahead and touch it. There are no barriers, you can just start,” said another. Participants noted that this dynamic — a quickly spreading and invisibly integrating technology — is shaping how businesses will relate to AI.
Up to this point, AI has been the domain of technical experts: computer scientists, mathematicians, and others who understand machine learning. It was a very narrow domain. Now, with generative AI, “you’re putting it in the desktop of absolutely every employee,” said one participant. “In that way it’s similar to the transformation that you had with Excel for example, where all of a sudden, you had employees that had no background in computer science at all that now are able to code.”
The big break in the centralized-IT paradigm came when managers realized that employees were building Excel macros that were actually useful. When Excel was being broadly adopted in the 1990s, one Canadian bank discovered that — without anyone at head office knowing it — an international division had quietly built a mission-critical Excel application. The application had over 5,000 users and was essential to the division’s day-to-day operations because the division had restructured around it: if the application vanished, there would no longer be enough staff to handle the workload.
“And that has a lot of benefits, but also a lot of risks because they don’t really know what they’re doing,” the participant said. “They are experts in their domain. They are not experts in computer science, so they don’t know the best practices. They may not know what rules apply in terms of, for example, company policy. And you don’t know who they are from a governance perspective.”
With these concerns in mind, participants discussed in practical terms how to obtain the potential benefits of democratizing AI while managing risk. The consensus was that a pragmatic, results-focused approach will create the most value. “We don’t need to reinvent the wheel,” noted one person. Throughout the workshops, participants advanced the idea that companies are going to succeed through what one called “old-fashioned virtues”: auditing what people are doing, testing projects for business value, and affirming existing risk management and governance systems.
What follows are some of the approaches offered by workshop participants, what one person called the “home truths” of AI adoption.
Sketching out what they saw as a key dilemma, participants discussed how AI technologies had captured the imaginations of their workforce and inspired a lot of ideas that people are advancing on their own. “We’re in danger of getting ahead of our skis on this,” said one participant.
The technology’s powerful off-the-shelf capability by itself creates risks, participants noted, as the hype cycles through the organization. “If you have unreasonable expectations,” said one attendee, “you’re gonna be wasting a lot of resources.”
With employees broadly inspired by AI and looking for ways to use it, the first challenge is going to be simply knowing who is doing what with AI across the organization. “If you’re thinking about the company and trying to find those innovators, it’s really difficult,” said a participant. “If you’re thinking about the traditional IT things, of course, you go through the org chart, you see where the IT department is, and you will find where the programmers are. But these new gen AI innovators are not in the IT department — they’re everywhere.”
Tracking a rapidly proliferating portfolio of AI projects, several participants proposed, calls for a proactive approach in collaboration with risk and IT teams. Scanning for activity through existing employee surveys and communications channels was one suggestion. Updating end-user computing practices so that the rules encompass AI activity and reporting was another.
Since AI can do so many things, much of the discussion focused on how to ensure AI adoption is concretely valuable to the business. “We need to be focused on the use cases,” said one. “We have to be asking ‘What are we trying to do?’ as opposed to random people trying random stuff with AI tools.”
A recurrent theme was the importance of a results-oriented mindset. “We’re really focused on the pragmatic stuff,” said one. Participants consequently highlighted what they called a crucial risk that often goes overlooked — what one called the “not-delivering-value-to-customers risk” — and discussed how AI projects could be monitored for business outcomes.
Throughout the workshops, participants pointed out the need for an agile, AI-informed risk management strategy.
The potential for market disruption was a major focus. Participants noted that AI will give a competitive advantage to the companies that are best able to integrate it into their operations. “Integrating AI is not a question of replacing human effort but of augmenting it. This will lead to a significant market shift where enterprises leveraging AI outpace those that don’t,” remarked one participant. Others described how selecting AI projects designed to deliver concrete results — such as improved operational efficiency, better products, or a more tailored customer experience — will enable companies to navigate market disruption to their advantage.
When discussing reputation risks posed by AI adoption, participants made the point that incorporating safety early in a project provides a framework for being transparent about how you are using AI, ensuring privacy, and engaging ethically with the technology. Participants agreed that only an aggressively proactive approach to project design can create the clarity needed to navigate safely. “We need to be always asking: what could AI get up to that will put us at risk of not meeting our principles?” said one attendee. “Even if we have valid data collected responsibly, AI is just going to do its thing. A data point that is in compliance might not be after it is affiliated with other data.”
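To make that linkage risk concrete, here is a minimal, hypothetical sketch in Python (the tables, column names, and values are invented for illustration, not drawn from the workshops). It shows how two datasets that are each unobjectionable on their own can, once joined on shared quasi-identifiers, reveal a combination that neither contained alone.

```python
import pandas as pd

# Hypothetical table 1: an anonymized health extract with no names.
health = pd.DataFrame({
    "postal_prefix": ["M5V", "K1A", "M5V"],
    "birth_year":    [1980, 1975, 1992],
    "diagnosis":     ["diabetes", "asthma", "hypertension"],
})

# Hypothetical table 2: a staff directory with no medical data.
directory = pd.DataFrame({
    "name":          ["A. Singh", "B. Tremblay", "C. Okafor"],
    "postal_prefix": ["M5V", "K1A", "M5V"],
    "birth_year":    [1980, 1975, 1992],
})

# Joining on the shared quasi-identifiers links names to diagnoses,
# a combination that neither table held on its own.
linked = directory.merge(health, on=["postal_prefix", "birth_year"])
print(linked[["name", "diagnosis"]])
```

The join itself is trivial; the point the participants were making is that compliance has to be assessed not only at collection time but for what data can become once an AI system combines it with other sources.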
Similarly, participants discussed how proactively engineering safety can help ensure that a product will thrive in any regulatory landscape: “We don’t really know where the wheel is going to stop,” said one participant. “That creates risk. You have to get ahead of it.”
In the realm of governance, participants considered ways in which, rather than pursuing major changes, the focus could be on adapting existing systems and tools. Implementing AI “is just good change management,” said one attendee. Participants suggested appointing a cross-functional group to review existing controls, such as the code of conduct, policies, systems, and software, to determine where enhancements can be made.
While affirming the fundamentals of governance and risk management, participants also discussed how AI introduces novel challenges. There’s a need for alignment and integration across systems and silos in order to conduct comprehensive privacy compliance assessments, for example. And they shared the view that, as currently configured, corporate management is often missing these key capabilities. “We don’t really have the ability to assess AI risk,” said one attendee, “and we don’t have processes to actually apply controls across the business for all the AI stuff going on.”
The challenge of safely and productively implementing AI technologies — and harnessing their power to move the enterprise forward — is clearly going to need substantial organizational capacity beyond technical AI proficiency. Success will require a holistic approach to AI governance built on the pillars discussed above: knowing who is doing what with AI, testing projects for business value, proactively engineering safety, and adapting existing risk management and governance systems.
We invite you to consider these pillars when designing the governance for your AI strategy.