AI thought leaders on adopting generative AI
July 11, 2023
At the Vector Institute’s second roundtable on Generative AI, participants discussed the reality of adopting generative AI and how it will change the workplace and reshape society.
Following the success and intense interest in the Vector Institute’s first roundtable on generative AI in March 2023, a follow-up was organized in late April. This second edition brought together an array of AI thought leaders, including researchers, founders of AI startups, and leaders of organizations who recognize the advantages of adopting generative AI technologies.
The session included a keynote address from Vector Faculty Member Gillian Hadfield, Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto; a presentation on generative AI models and the future of work by Vector Faculty Member Frank Rudzicz; two panel discussions with leading thinkers in Canada’s AI ecosystem; and roundtable discussions that gave voice to the questions, ideas, and concerns of all participants.
In her opening keynote address, Hadfield acknowledged the current limitations and risks of working with popular generative AI models, such as the generation of false or misleading content. One path forward she sees involves building applications on top of large language models that can ensure reliable, high-quality output. As an example, Hadfield pointed to CoCounsel, an “AI legal assistant” built on GPT-4 but specifically designed to reliably meet high professional standards for tasks such as reviewing documents and analyzing contracts.
Throughout her keynote and during roundtable discussions, Hadfield emphasized the revolutionary character of generative AI as a tool that will transform not just the technology world, but also numerous spheres of society, from education and law to markets, urban planning and more.
Given AI’s impact, she focused on the critical importance of taking a holistic view of generative AI that includes not just AI models and systems but also the laws, regulations, and norms needed to make safe and effective use of this transformative technology. Businesses tend to think of laws and regulations as external forces acting on and shaping their activity. A holistic approach, by contrast, recognizes that answering the challenges of AI will inevitably happen through methodical self-regulation and collaboration with government. Each industry adopting AI will deploy its expertise to create trustworthy, domain-specific AI systems. As Hadfield puts it, when it comes to responsible AI, “governance is the product. It is what you are designing, producing, and selling.”
Regulations create standards and obligations that make products and services safe and reliable. They help to ensure that our toasters don’t electrocute us, that our prescription medicines have no adverse effects, that our food is safe, and that professional service providers such as accountants and lawyers have the knowledge and skills to help and protect us.
Just as we have regulatory infrastructure in place in all these and other areas of our lives, Hadfield argued that we need it for AI too, and for generative AI in particular.
To this end, she urged, we need to develop a new regulatory infrastructure that can accommodate the innovation and diversity of generative AI, while also protecting the public interest and safety. One step in this direction is creating a national registry for the largest and most influential models, where information about the size, scope, behavior, and impact of these models is made publicly available.
Picturing our future lives with increasingly powerful AI is easier said than done. As Hadfield pointed out, we have a tendency to hold the laws, regulations, and norms that often invisibly structure our world as constant when we imagine where powerful new technologies will lead us. This, she says, is a mistake. Instead, we need to start imagining how the structures of our norms and laws should change given the power and potential of AI.
Vector Faculty Member Frank Rudzicz’s presentation spotlighted the massive improvements seen in generative AI in recent months, including the notable improvements of GPT-4 over GPT-3.
Like other participants, Rudzicz anticipates that workers will increasingly incorporate AI models into the workplace as tools, and those who do will tend to replace those who do not. He sees generative AI in its current form as useful for brainstorming, as well as creating and evaluating prototypes. Its ability to perform basic, time-consuming tasks of writing utility code already allows humans to focus more on the design and architecture of a program.
While our focus is very much on the latest and most powerful generative AI models, Rudzicz says that not all problems we face require this technology. There are many less powerful models that will be better for certain use cases.
He also noted that ChatGPT is built on a “confederacy of patches.” Its success stems from solving problems as they arise in an ad hoc manner. Some of these patches involve extensive human labour, for example, the widely reported reliance on workers in Kenya carrying out the mentally scarring work of flagging toxic content so that the model doesn’t reproduce it. He believes that generative AI needs to get beyond this patchwork approach in order to achieve the robustness and control that we need for many different kinds of use cases.
Finally, Rudzicz echoed Hadfield’s emphasis on the importance of regulations. Any technology as powerful as generative AI needs strong guardrails to make it safe for us to use.
The panel and roundtable discussions addressed the growth potential, risks, and broader impact of generative AI, including on the future of work. Participants raised a wide range of issues covering the challenges companies face in adopting generative AI and the effects this technology is likely to have on the future of work.
Privacy was a key issue for participants. One panelist pointed out that the vast data sets used to train large language models often include personal information, yet it is difficult to determine whose privacy is at risk. Beyond training data, there are also issues around the privacy of information contained in prompts.
For many applications, we will need anonymized data that protects the privacy of individuals while allowing for useful data analysis. One tactic discussed is to leverage generative AI’s ability to use real data sets to create synthetic data that serves the same purpose without the risk of exposing sensitive information. Once generated, synthetic data can be tested to verify that it is safe to use, and further modified if necessary.
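The synthetic-data tactic described above can be sketched in a few lines. The example below is a deliberately simplified illustration, not a production approach: it stands in for a generative AI model with a multivariate-normal fit, and the “real” dataset and its columns are hypothetical. It shows the two steps the participants described: generating synthetic records that mirror the statistics of real data, then testing the result before use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "real" tabular data: an age column and an income column.
real = np.column_stack([
    rng.normal(40, 10, 500),          # age
    rng.normal(60_000, 15_000, 500),  # income
])

# Fit a simple generative model: a multivariate normal matching the
# real data's means and covariance. A real system would use a learned
# generative model here instead of this toy stand-in.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=500)

# Test the synthetic data before use: summary statistics should match
# closely enough to support the same analysis...
assert np.allclose(synthetic.mean(axis=0), mean, rtol=0.1)

# ...while no synthetic row exactly duplicates a real record,
# so no individual's data is exposed verbatim.
assert not any((synthetic == row).all(axis=1).any() for row in real)
```

If either check failed, the synthetic dataset would be regenerated or further modified, mirroring the test-and-revise loop the participants described.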
Another challenge discussed was the difficulty in getting from a demonstration of a new product to a final, stable product that works at scale. Generative AI has made it relatively quick and easy to create a new product demo, but every subsequent step gets harder. It remains difficult to estimate the time and cost of getting to a high-quality product, which makes it important for those working in this space to manage expectations.
Looking ahead, participants anticipated generative AI models that are more robust, private, and secure, making enterprise adoption easier. They also underscored the importance of metrics that allow companies to evaluate the privacy, performance, and other parameters of AI models. Another theme of the discussion was the importance of educating teams on the nature, uses, and risks of the AI tools at their disposal. And since generative AI technologies are often flexible, it is also important to be clear on which workplace tasks they are appropriate for, to ensure their use is safe and efficient.
A number of participants described ways that their organizations are already using generative AI tools, including accelerating code reviews, drafting blog posts, generating graphics, and accelerating brainstorming sessions.
Looking at the long term, participants said they expect to see a bifurcation of jobs into those that are easier to automate and those that require dexterity and care, such as painting a room, which are more challenging for machines. Many “low-skilled” jobs will prove to be hard to automate.
While some people have expressed hope that AI will reduce the amount of work humans are obliged to do, one participant noted that in 40 years of widespread adoption of computers in the workplace, there has been no reduction in our working hours, despite the increases in efficiency. We should not expect the advent of generative AI tools to change this trend.
The flip side of this observation, however, helps to stem worries that generative AI will lead to a massive loss of jobs, for example, in customer service. One participant anticipated that automating customer service functions could actually lead to a deeper engagement with customers and a stable outlook for employment in this area. By embracing generative AI, they said, companies will have a far greater capacity to create more meaningful forms of engagement with their customers, ensuring strong demand for customer service representatives to provide a human touch above and beyond what AI can deliver.
There was also a sense that society would become accustomed to and more accepting of generative AI tools. Some participants anticipated a marked decrease in the barriers to entry for tasks such as coding, and a reduction in the time it takes to produce work of value. Just as Microsoft Excel ushered in the widespread adoption of powerful computerized spreadsheets, generative AI will put powerful productivity tools in our hands.
As these and other issues surrounding generative AI continue to take shape, the intense interest in this technology shows no sign of abating. The next Vector Institute generative AI roundtable will be held in late summer. Check the Vector events page in the coming weeks for registration details.
Read about Vector’s first roundtable on Generative AI here.