How to put Generative AI to good use?

April 20, 2023

Generative AI Insights

Participants in a recent Vector Institute roundtable on Generative AI discussed the risks and immense potential of this much-hyped tech and proposed some novel ideas for putting it to work.

The public release of large language models like ChatGPT and Bard, and of text-to-image generators such as DALL·E and Stable Diffusion, has suddenly put these powerful generative AI tools in the hands of millions of users worldwide. As people explore the incredible potential of these tools for work and creativity, there is growing concern among the public, industry, and AI researchers about how they will transform our lives.

In late March 2023, the Future of Life Institute (FLI) published “Pause Giant AI Experiments: An Open Letter,” arguing for a six-month halt on “the training of AI systems more powerful than GPT-4.” The letter accrued thousands of signatures within days, including from some notable AI researchers and leaders in the field – even a few who publicly disagreed with certain aspects of the letter.

The accelerating drive to build ever-larger generative AI models, and the concerns that triggered the FLI letter, have played out in the American tech industry. In Canada, the federal Pan-Canadian AI Strategy and deep collaboration between governments, universities, and industry have fostered an abiding commitment to mitigating risks and using this technology to advance the public good. Our federal and provincial governments recognize that public investment and engagement are essential to the development of trustworthy AI.

So, what hopes and concerns are top of mind for Canada’s leading generative AI researchers and entrepreneurs?

On March 30, the Vector Institute convened the first in a series of roundtables bringing Vector researchers together with leaders from AI-focused companies to discuss the potential and risks of generative AI and the impact it will have on the future of work. The event included panel discussions with Vector Faculty Members and Canada CIFAR AI Chairs Jimmy Ba and Alireza Makhzani, along with visionaries from Vector’s sponsor community and other leading AI builders across Canada. Throughout the session, discussion continually returned to the theme of responsible development and use of generative AI: how can we harness the vast potential of this technology while mitigating its risks?

Immense Potential

From the start, a palpable enthusiasm about generative AI filled the room. Participants noted that generative AI had already been successfully deployed to power content marketing, transcription services, and other applications before the recent explosion of interest.

While a number of widely discussed applications came up – such as content generation and customer service – participants noted some up-and-coming use cases for generative AI, such as creating music and videos from text prompts.

There was also strong interest in the idea of personalizing and fine-tuning models for individual use. For example, a private, secure model trained on your own data would be far more useful for writing emails than a universal chatbot. Such a model could also automate mundane tasks, such as filling out forms.
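As a rough illustration of what that could look like, here is a minimal sketch, not anything prescribed at the roundtable, of fine-tuning a small open model on your own writing. It assumes the Hugging Face transformers and datasets libraries, a hypothetical local file my_emails.txt containing your own emails, and distilgpt2 as a small stand-in model:

```python
# A minimal sketch, not a production recipe: fine-tune a small open model
# on your own writing so it drafts emails in your voice.
# Assumptions: Hugging Face transformers and datasets are installed,
# "my_emails.txt" is a hypothetical local file (one email per line),
# and "distilgpt2" is a small stand-in for whatever model you would use.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models have no pad token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Plain-text training data that never leaves your machine.
dataset = load_dataset("text", data_files={"train": "my_emails.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-email-model", num_train_epochs=3),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the fine-tuned checkpoint stays on your own hardware
```

Because both the training data and the resulting checkpoint stay local, a setup along these lines is one way to get the privacy and security participants had in mind.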

Another prime area of application participants considered was education. For example, chatbots could be designed with deep expertise in specific topics and a strong aptitude for fielding questions and explaining complex ideas. The development of enriching, informative conversational learning experiences could lead to a wide range of invaluable applications, from tutoring schoolchildren to upskilling professionals.

From a management perspective, participants also considered how simple and time-consuming tasks could be automated, freeing employees up to spend more time on higher value work that is not easily automated.

The Risks Are Real

Discussing their shared concerns about generative AI, participants raised a wide range of issues. These included bias; hallucination (i.e., false content generation); violations of privacy; susceptibility of models to jailbreaking leading to harmful content; and abuse of generative AI as a tool for propaganda and manipulation or to impersonate individuals. 

A recurrent theme participants discussed was the nature of trustworthy generative AI. Earning the trust of users of generative AI models requires verifying their reliability and effectiveness. For example, as one participant put it, we want to ensure that doctors incorporating AI models in patient diagnosis get accurate information and guidance.

Sketching a path to trustworthy generative AI, participants discussed empirical solutions, such as sharing data about models and allowing third-party audits. There was also a strong recommendation for companies working with generative AI to pursue self-regulation in advance of government legislation, such as that contemplated in Canada’s proposed Artificial Intelligence and Data Act.

Another theme that emerged related to the practical concerns of businesses developing and using generative AI. One participant made the point that, because companies typically rely on cloud computing services, the cost of offering a generative AI-based product is driven by usage. A sudden spike in usage can become expensive quickly, especially for companies that have not yet monetized their products.
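A back-of-the-envelope calculation illustrates the point. Every number below is an illustrative assumption rather than a real price, which varies by provider and model:

```python
# Back-of-the-envelope cost model for a generative AI feature billed per token.
# Every number here is an illustrative assumption, not a real price.
PRICE_PER_1K_TOKENS = 0.002     # hypothetical blended prompt+completion price, USD
TOKENS_PER_REQUEST = 1_500      # assumed average prompt plus completion
REQUESTS_PER_USER_PER_DAY = 10  # assumed usage per active user

def monthly_cost(daily_users: int) -> float:
    """Estimated monthly spend, assuming 30 days of steady usage."""
    tokens = daily_users * REQUESTS_PER_USER_PER_DAY * TOKENS_PER_REQUEST * 30
    return tokens / 1_000 * PRICE_PER_1K_TOKENS

for users in (1_000, 10_000, 100_000):
    print(f"{users:>7,} daily users -> ${monthly_cost(users):>10,.2f} per month")
```

Under these assumptions, costs scale linearly with tokens: a 100x jump in daily users is a 100x jump in spend, whether or not revenue keeps pace.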

Accuracy is another key issue from a business perspective. Participants described how models can often reach roughly 90 per cent accuracy quickly. However, many applications demand more, and gains beyond 90 per cent become increasingly difficult to net: moving from 90 to 99 per cent accuracy means eliminating nine out of every ten remaining errors, not simply adding nine more percentage points.

Taking a wider view, participants considered ways that deployment of generative AI could lead to significant changes in corporate structures. Generative AI could change how a lot of work gets done, and large organizations need to be ready to adapt in order to stay competitive. For example, one participant described how some companies may transition to become structured more like dynamic collections of teams contracting with each other to drive the creation of products and services. The consensus was that, one way or another, a significant transition is on the horizon, and it will be essential to manage it effectively.

Generative AI We Can Use

With these concerns in mind, participants discussed how to think about solutions and opportunities in a world transformed by generative AI. There is reason for hope. The massive attention generative AI models have garnered has brought with it careful and even playful scrutiny, for example, through the rapid development of prompt engineering and jailbreaking as new forms of expertise. These spontaneous efforts have helped us understand the strengths and limitations of generative AI and see ways to improve these models.

Participants looked forward to the development of highly specialized large language models trained on domain-specific texts and data to help make their outputs more reliable for use in specific professions, such as law or medicine.

One suggestion was to approach trustworthiness above all as a product design issue: address trust at the level of the system that incorporates an AI model, rather than place the onus of trustworthiness on the model itself. The underlying insight is that individual generative AI models, in and of themselves, should not always be conceived as the locus of solutions to the challenges they introduce.

This same insight showed up in two more recommendations participants made. One was to explore new ways to coordinate multiple models working together, counteracting their individual weaknesses to achieve better outputs. The other was to investigate how AI can be deployed to oversee or “police” other AI models and improve their ability to track, verify, and hedge their outputs.
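One way to picture this “AI policing AI” pattern is a simple generate-then-verify loop. The sketch below is purely illustrative; generate() and verify() are hypothetical stand-ins for calls to two different models, not any particular vendor’s API:

```python
# Illustrative generate-then-verify loop in which a second model "polices"
# the first. generate() and verify() are hypothetical stand-ins; in practice
# each would call a different model or provider.

def generate(prompt: str) -> str:
    # Toy stand-in for a call to a generator model.
    return f"Draft answer to: {prompt}"

def verify(prompt: str, draft: str) -> tuple[bool, str]:
    # Toy stand-in for a verifier model that audits the draft and
    # returns (approved, critique). Here it simply approves non-empty drafts.
    if draft.strip():
        return True, ""
    return False, "draft was empty"

def answer(prompt: str, max_rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds):
        approved, critique = verify(prompt, draft)
        if approved:
            return draft
        # Feed the critique back so the generator can revise its draft.
        draft = generate(f"{prompt}\n\nRevise. Verifier critique: {critique}")
    # If the verifier never approves, surface the uncertainty to the user.
    return "[Unverified draft; verifier flagged issues] " + draft

print(answer("What are the side effects of drug X?"))
```

The design choice worth noting is the fallback: when the verifier never approves, the system hedges its output rather than asserting it, which is itself a form of trustworthiness built in at the system level rather than the model level.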

Throughout the discussion, participants in this first Vector generative AI roundtable highlighted the need for collaboration across sectors and industries to make AI more trustworthy. This could take the form of more discussions bringing industry leaders and researchers together, and practical solutions, such as creating a dedicated platform for companies working with generative AI to share their research, white papers, and findings.

What emerged was a promising idea: the shortest path to trustworthy generative AI with extraordinary applications is paved with open collaboration and the healthy exchange of ideas. 

The Vector Institute is wholly committed to supporting this effort, responding dynamically to the needs of Canada’s growing AI community, and advocating for responsible development and deployment of AI. The best way to mitigate the risks of powerful AI is to democratize that power.

Read about Vector’s second roundtable on Generative AI here.

Related:

How businesses can balance AI innovation and cybersecurity

Benchmarking xAI’s Grok-1

How to safely implement AI systems