Vector Institute hosts first-of-its-kind Generative AI Leadership Summit

January 18, 2024

Generative AI Insights

By Kevin Temple

The rise of generative AI is a game-changer in the tech world, sparking fast-paced innovation while creating new challenges for regulators. Canadian companies are figuring out how to use generative AI to push their mission forward, serve their customers better, and increase efficiency. 

The Vector Institute has been right in the middle of this conversation, hosting three roundtable events in 2023 that brought together our AI researchers with leaders from Canada’s biggest companies and hottest AI startups. The goal: make connections, share knowledge, and find the best path forward with this exciting new technology.

These conversations culminated in October, when Vector convened the Generative AI Thought Leadership Project to discuss the challenges and opportunities of embracing generative AI today. Over three and a half days, Vector brought together 170 experts and executives responsible for implementing AI in their organizations to discuss the issues that are top of mind. More than 32 organizations, large and small, from across Canada attended.1

The summit opened with a plenary session, followed by three days covering the core themes of adopting generative AI: business strategy, technical execution, and control and governance. Each day’s program included keynote presentations, a panel discussion, and breakouts into smaller groups for roundtable discussions, and ended with a workshop covering strategies that organizations can put into practice as they build out their plans for adopting generative AI.

Keynote Presentations and Workshops

The plenary featured opening remarks from Cameron Schuler, Vector’s Chief Commercialization Officer and VP, Industry Innovation. As he pointed out, the goal of the project is to discuss how to maximize the value and minimize the risks of working with generative AI. Schuler compared our situation today to the transformation of the workplace by personal computers. “It’s like we’re still learning how to type,” he said.


“It’s like we’re still learning how to type.”

Cameron Schuler

VP Industry Innovation, Vector Institute

Michelle Bourgeois and Vik Pant (PwC) gave a keynote presentation about how companies are quickly moving from initial enthusiasm to asking questions about measurable business value. A frequent mistake clients make, Pant explained, is choosing a model before understanding how to make it an effective tool. A better approach is to start with a business objective, then determine the model or system that will best serve that purpose. He also emphasized the value of choosing the least complex model that meets the objective.

The session also featured presentations from Google Cloud data scientist Rihana Msadek and NVIDIA senior director Tony Paikeday, who spoke about the generative AI products and services their companies have developed for enterprise clients. Obimdike Okongwa of the Office of the Superintendent of Financial Institutions (OSFI) AI/ML working group also gave a presentation on how the agency is thinking about the risks and benefits of generative AI.

Business Strategy

The second day of the summit focused on business strategies and practical challenges for implementing generative AI. Discussions covered content generation for marketing, automating routine tasks, customer service innovation, reskilling and upskilling employees, human involvement in critical domains, trustworthy AI, and more.

The first keynote presentation was given by Chris Mar, PwC’s national transformation and strategy leader. Mar discussed how large companies divided into separate functions or silos often need to develop new practices and decision-making structures in order to adopt AI. He emphasized that decisions about how to institute change are best made by giving voice to the interests of all stakeholders.

The second keynote was from GPTZero CTO Alex Cui, a former Vector researcher. Cui described data poisoning, hallucinations, and other challenges businesses face when they make use of LLMs. He explained that transparency is critical for trustworthy AI, so the public can understand how content is generated. Cui also described GPTZero’s AI text detection platform, which allows organizations to share certified writing with the world.

The last session of the day was a workshop with PwC’s Vik Pant and Bahar Sateli titled “Going from What to So-What in GenAI.” One focus of their talk was the strategic importance of determining a use case and then assessing which technology best supports it. A proper assessment combines objective technical metrics, cost and resource metrics, and subjective metrics. In some cases, said Pant, generative AI tools turn out not to be the best option. In other cases, an assessment may lead clients to build a tool or system from an ensemble of models. Pant and Sateli described how a careful, process-oriented approach allows businesses to capture the priorities and goals of all stakeholders, earn trust, and create value.
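To make that kind of assessment concrete, here is a minimal sketch in Python. The candidate systems, metric scores, and weights are invented for illustration; they are not figures from the workshop.

```python
# Hypothetical use-case assessment: rank candidate systems for a given
# business objective by combining objective technical metrics, cost and
# resource metrics, and subjective metrics. All candidates, scores, and
# weights below are illustrative assumptions.

CANDIDATES = {
    # Scores are normalized to 0-1; higher is better (cost inverts spend).
    "rules_based_system": {"accuracy": 0.70, "cost": 0.95, "user_trust": 0.80},
    "fine_tuned_llm":     {"accuracy": 0.88, "cost": 0.50, "user_trust": 0.65},
    "ensemble_of_models": {"accuracy": 0.92, "cost": 0.35, "user_trust": 0.70},
}

# The weights encode the business objective; choosing them is the
# strategic step that precedes any model choice.
WEIGHTS = {"accuracy": 0.5, "cost": 0.3, "user_trust": 0.2}

def score(metrics):
    """Weighted sum of normalized metric scores."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

ranked = sorted(CANDIDATES, key=lambda name: score(CANDIDATES[name]), reverse=True)
for name in ranked:
    print(f"{name}: {score(CANDIDATES[name]):.2f}")
```

With these particular made-up weights, the simplest system wins the ranking, echoing Pant’s point that generative AI sometimes turns out not to be the best option.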

Technical Execution

The next day, the focus turned to technical execution. The discussions covered cutting-edge architectures, adapting generative AI for specific industries, and the technical means to address the various forms of risk and ethical concerns that arise with the use of generative AI.

In her opening keynote presentation, Queen’s University professor Tracy Jenkins shared the results of her research into how LLMs reproduce human behavioral biases, such as loss aversion. She and her team have also found evidence of biases produced by LLMs themselves, along with more basic failures, such as errors of logic and calculation. Jenkins explained the need for measuring the biases and failures of LLMs and determining what is an acceptable level of performance for a given use case.

The second keynote was provided by Akshaya Mishra of EAIGLE Inc. He and his team created a multi-modal AI system to automate real-time asset tracking for transportation companies. Mishra described some of the technical challenges of building the system, including explaining the model’s decisions well enough to debug failures and managing its power consumption.

The workshop “Generative NLP: Capabilities, Customization, and Fairness,” given by David Emerson, a U of T professor and Vector machine learning scientist, closed out the third day. Emerson began with a formal description of LLMs and how transformer architectures have supercharged the abilities of chatbots and the other generative AI tools proliferating today. He showed that LLMs are not only good at generating text but are also powerful tools for classifying text, extracting relationships among terms, quantifying bias, and detecting synthetic text.
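To give a flavour of the classification capability Emerson described, the sketch below uses an off-the-shelf model as a zero-shot classifier through the Hugging Face transformers library. The model choice, text, and labels are illustrative assumptions, not examples from the workshop.

```python
# Illustrative sketch of using a language model as a classifier rather
# than a text generator (requires: pip install transformers torch).
from transformers import pipeline

# Zero-shot classification scores arbitrary candidate labels without any
# task-specific fine-tuning of the model.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "Our quarterly cloud spend rose 40% after we enabled GPU autoscaling."
labels = ["infrastructure", "finance", "marketing", "legal"]

result = classifier(text, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```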

Control and Governance

The final day focused on issues of control and governance, such as regulatory compliance, data provenance, and accountability mechanisms. The conversations centered on the responsible and safe deployment of generative AI.


“We need to weigh the opportunities against the risks.”

Tony Gaffney

President and CEO, Vector Institute

The first keynote was given by Tony Gaffney, Vector’s president and CEO. He noted that we have an historic opportunity with the rise of generative AI, but one that requires us to take seriously the responsibility of making this technology safe and trustworthy. With AI advancement happening faster than anyone expected, Gaffney said, we need to weigh the opportunities against the risks. To help mitigate the risks, Vector developed a code of conduct and set of AI principles, and is now working with organizations to put them into practice. Gaffney emphasized that the development of AI must be guided with care to ensure this technology augments human well-being.

Mark Paulson, Associate VP for Enterprise IT Governance at Canadian Tire, provided the second keynote. He described how companies commonly accumulate tools, but taking on too many copilots and other generative AI tools can create an unmanageable degree of risk. To establish a generative AI governance framework, Paulson said, he and his team created guardrails so that the company could more effectively leverage the tools it already had. Paulson also described how generative AI can itself be a governance tool, since it can review, describe, and fix code. His advice: make generative AI tools accessible and create an inclusive community for using them effectively.

Dan Adamson, co-founder and CEO of Armilla AI, provided the final workshop, “Who’s in charge here? Generative AI governance and controls to protect us.” Adamson described how Armilla AI set out to solve the challenge of building trustworthy systems. A central issue for companies adopting generative AI, he said, is to implement safeguards that are specific to their use cases. A methodical approach to using generative AI will combine a number of responsible AI strategies, such as the following (a brief sketch after the list illustrates the first two):

  • Maintain a comprehensive inventory of AI tools and assess the risks each one presents
  • Track the potential impact of each model so that models posing different levels of risk can be treated differently
  • Assign clear roles and responsibilities, such as who is accountable for determining whether a model is biased, and who is tasked with ensuring appropriate transparency
  • Employ safeguards across the entire lifecycle of an AI model, from the earliest stages, through validation, deployment, and production
  • Establish criteria for when an AI tool should be independently assessed, for example, to meet new regulations in a particular region
  • Require foundational training for employees tasked with using AI tools
  • Establish clear rules with base model providers and vendors that determine how data will be used and establish who is responsible in different kinds of scenarios
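As a concrete, deliberately simplified illustration of the first two strategies, the sketch below models an AI tool inventory with risk tiers in Python. The fields, tiers, and review intervals are assumptions made for the example; they are not Armilla AI’s schema.

```python
# Minimal sketch of an inventory of AI tools with risk tiers, where
# higher-risk models are reviewed more often. All fields, tiers, and
# review intervals are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g., internal drafting aid, no sensitive data
    MEDIUM = "medium"  # e.g., customer-facing but human-reviewed output
    HIGH = "high"      # e.g., automated decisions affecting customers

@dataclass
class AITool:
    name: str
    vendor: str
    use_case: str
    risk_tier: RiskTier
    bias_owner: str          # accountable for bias assessment
    transparency_owner: str  # ensures appropriate transparency

# Models with different risks are treated differently: shorter review
# cycles for higher tiers.
REVIEW_DAYS = {RiskTier.LOW: 365, RiskTier.MEDIUM: 180, RiskTier.HIGH: 90}

inventory = [
    AITool("marketing-copy-assistant", "VendorA", "draft ad copy",
           RiskTier.LOW, "marketing lead", "marketing lead"),
    AITool("claims-triage-model", "VendorB", "route insurance claims",
           RiskTier.HIGH, "model risk team", "compliance team"),
]

for tool in inventory:
    print(f"{tool.name}: review every {REVIEW_DAYS[tool.risk_tier]} days, "
          f"bias owner: {tool.bias_owner}")
```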

Adamson ended the workshop by inviting participants to share their questions in several key areas, including generative AI design, data acquisition, development, deployment, production, and the use of third-party vendors. The ensuing discussion covered bias assessment and fairness; the risk of data poisoning; the challenges of fine-tuning; what appropriate testing looks like during development; what to do when the vendor for a deployed model goes down; and the importance of vetting third-party vendors.

Panel and Roundtable Discussions

For many participants, the panel and roundtable discussions were the highlight of this project because they allowed them to voice their ideas and concerns about generative AI and learn how others are thinking about similar challenges. Over the course of four days, across different themes, participants repeatedly returned to four topics:

1. “The first rule of AI is don’t use AI.”

Several keynote speakers, panelists, and participants made the point that it is far too easy to jump in and start using generative AI tools without much thought. To use these technologies strategically, it is vital to start by thinking critically about the use case and asking why a given AI tool should be adopted at all. As participants pointed out, generative AI introduces uncertainties and risks that can be avoided with other tools, so using it needs to be justified. As one participant put it, “the first rule of AI is don’t use AI.”

2. AI education and literacy are essential

There were numerous conversations about the importance of education as a central concern underlying generative AI business strategy, technical execution, and governance. The consensus that emerged was that the private sector needs to focus on improving AI literacy at all levels, both within organizations and among the public at large. AI literacy is not just a matter of abstract understanding but also of concrete know-how, noted one participant. The idea is to make sure people are informed enough to understand the risks and to ask the right questions about the safe and appropriate use of generative AI.

Another participant pointed out that many people do not understand that generative AI chatbots are essentially prediction machines, which makes it difficult for them to anticipate where the technology’s strengths and weaknesses lie. Participants also linked AI literacy to risk reduction: training could focus on principles for AI risk and cover issues such as privacy, security, and data, since understanding the nature of generative AI risk is critical for making good decisions. Finally, participants discussed education as a way to address employee concerns about the use of generative AI. Many were keenly aware that the technology could shrink workforces and leave some workers behind. Training employees to use generative AI in ways that increase their value to the company can help mitigate these concerns.
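That framing can be made literal: at each step, an LLM assigns a probability to every token in its vocabulary, and the chat interface samples a continuation from that distribution. The minimal sketch below (assuming the Hugging Face transformers library and the small, openly available GPT-2 model) makes the prediction step visible.

```python
# A look under the hood at "prediction machines": an LLM scores every
# possible next token (requires: pip install transformers torch).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores over the whole vocabulary
probs = torch.softmax(logits, dim=-1)

# Show the five most likely next tokens and their probabilities.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```

Seeing the raw distribution helps explain both the fluency and the failure modes: the model predicts plausible continuations, not verified facts.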

3. Adopting AI means organizational change

Another widely discussed issue was the organizational change required to adopt powerful AI technologies. One participant said that building a proof of concept is easy, but the challenge lies in scaling solutions and integrating them into the company’s processes because it requires changing how teams work. Another participant noted that risk and compliance functions don’t speak the same language as data scientists and ethicists — they need a translator to get them on the same page. Large organizations need to actively work across different functions and units to build consensus and develop principles and guidelines for using generative AI effectively and responsibly. 

Recognizing that business-driven AI adoption faces an uphill battle, some companies are already pursuing changes in organization and culture from the top down. One participant described how their company has established rules and guardrails for using generative AI. They formed a committee representing teams from across the organization that uses broadly endorsed criteria to approve and guide the uses of generative AI.

4. These conversations are vitally important

The fourth takeaway reflects the importance of broad and inclusive conversations for advancing the use of generative AI. Participants said they valued the format of the roundtables because it allowed them to hear how others are responding to the technology’s challenges. The exchange of ideas enabled participants to learn where other organizations are on the path to adoption and get a sense of what their own next steps could be.

Summing up the four-day conference, one participant said that forums like the Generative AI Thought Leadership Project are necessary so that companies can speak with one voice about their investment in AI trust and safety. The more diverse and inclusive the conversations are, the better the private sector can communicate to the public its deep commitment to responsible AI.

So what next?

Over three and a half days, project participants learned that blindly adopting AI without a strategic approach doesn’t cut it. Asking the hard questions, justifying AI’s use, and ensuring it’s the right tool for the job are essential. Education also emerged as a cornerstone of this next phase of adoption: organizations need to boost AI literacy at all levels to help manage risks. And let’s not forget, adopting AI strategically is more than just playing with ChatGPT; it requires organizational change and new ways of working across teams.

Summit attendees highlighted the importance of Vector’s roundtable discussions, which provided an avenue to share experiences and plot the AI adoption journey collectively. This is why Vector is calling on industry leaders, organizations, and researchers to get involved in collaborative projects that move beyond discussion and start making the adoption of responsible AI the norm.

Learn from the experts who’ve been there, be the voice that shapes AI trust and safety, and move forward with us. Check the Vector events page for new opportunities to join the conversation, or contact us today at info@Vectorinstitute.ai.


1 Participating companies included BMO Financial Group, Google, NVIDIA, RBC, Scotiabank, TD Bank Group, Bell Canada, Boehringer Ingelheim (Canada) Ltd., Canadian Tire Corporation, Ltd., CIBC, KPMG Canada, OMERS, Sun Life Financial, TELUS, Linamar Corporation, CentML Inc., Private AI, and Troj.AI, with special thanks to PwC Canada, which provided the event space. Additionally, six Vector FastLane companies — Armilla AI Inc., Eaigle Inc., Ethical AI Inc., Fairly AI Inc., GPTZero Inc., and PredictNow Inc. — actively participated. Our ecosystem partners included the Canadian Bankers Association, Canadian Marketing Association, GovTechON, Global Risk Institute, Law Commission of Ontario, and the Office of the Superintendent of Financial Institutions.
