Generative AI for Enterprise: Risks and Opportunities
September 28, 2023
At the Vector Institute’s third roundtable on Generative AI, participants discussed generative AI privacy and security issues, as well as the business opportunities and impact the technology could provide.
The Vector Institute’s first and second roundtables on generative AI prompted valuable conversations and intense interest in continuing the discussion as this technology rapidly develops. For the third roundtable in August 2023, Vector brought together AI leaders and practitioners from blue-chip Canadian companies with academic researchers to discuss the risks and opportunities generative AI poses for businesses.
A presentation on generative AI privacy and security issues by Vector Faculty Member Nicolas Papernot was a highlight of the session. It also featured two panel discussions. The first covered risks and privacy concerns, featuring leaders from EY, Private AI, Armilla AI, and MindBridge AI. The second covered enterprise-level opportunities and impact, and included leaders from Canadian Tire, Cohere, KPMG, and RBC, along with Vector Faculty Affiliate Marcus Brubaker.
In his presentation, Papernot proposed a strict and actionable definition of privacy: compare a system with a sensitive record to another system that is identical except the sensitive record is removed. If there is no difference between what an adversary can learn from the two systems, then the system is private. This concrete approach to assessing algorithmic privacy, known as “differential privacy,” has become the gold standard among researchers because it is intuitive, accurate, and useful.
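For readers who want the formal statement, this two-systems comparison is usually written as follows (a standard formulation of the definition, included here for reference rather than quoted from the presentation):

```latex
% A randomized mechanism M is (\varepsilon, \delta)-differentially private if,
% for every pair of datasets D and D' that differ in a single record, and for
% every set S of possible outputs,
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta .
% Small \varepsilon (with \delta near zero) means an adversary observing the
% output cannot reliably tell whether the sensitive record was included.
```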
Differential privacy forms the basis of techniques for preserving privacy without relying on unreliable methods of data anonymization. Instead, layers of indirection and distortion are introduced to prevent private data from leaking. For example, data can be divided into subsets so that separate instances of a model are trained on different subsets. The answer to a query aggregates the votes of those individual instances, masking the underlying data, and noise added to the vote tallies yields privacy-preserving labels that further prevent adversaries from recovering sensitive records. Papernot says that with the right context and combination of these techniques, it is possible to have a mathematical guarantee that privacy will be preserved and that adversarial attacks will be unable to cause a data leak.
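As a rough illustration of the aggregation idea described above, here is a minimal Python sketch in the spirit of Papernot’s PATE approach. The dataset, number of partitions, noise scale, and choice of classifier are all illustrative assumptions, not details from the presentation.

```python
# Minimal sketch of noisy vote aggregation: several "teacher" models are
# trained on disjoint subsets of sensitive data, and a query is answered by
# their noisy majority vote rather than by any model touching all the data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

n_teachers = 10    # illustrative choice
noise_scale = 2.0  # Laplace noise scale; tunes the privacy/accuracy tradeoff

# Train each teacher on its own disjoint partition of the sensitive data.
teachers = []
for X_part, y_part in zip(np.array_split(X, n_teachers),
                          np.array_split(y, n_teachers)):
    teachers.append(DecisionTreeClassifier(max_depth=5).fit(X_part, y_part))

def noisy_label(x):
    """Answer a query with the teachers' Laplace-noised majority vote."""
    votes = np.bincount(
        [int(t.predict(x.reshape(1, -1))[0]) for t in teachers], minlength=2
    ).astype(float)
    votes += rng.laplace(scale=noise_scale, size=votes.shape)
    return int(np.argmax(votes))

print(noisy_label(X[0]))  # a privacy-preserving label for one query
```

Because each record influences only one teacher, and the released label depends on noisy vote counts rather than the raw data, no single record can swing the answer by much; this is what makes a formal privacy analysis possible.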
Papernot highlighted financial fraud detection as an application of these techniques. Banks could use differential privacy to build a system that learns from historical client data to recognize fraudulent transactions, without the risk of leaking any individual client’s private information.
He also addressed recursion: what happens when machine learning systems are trained on their own outputs. The popularity of large language models has prompted numerous researchers and others to flag concerns about how LLMs will change as they begin to be trained on text generated by LLMs rather than text written by humans. Papernot showed that the ultimate result of recursion is the collapse of the model. Over time, it begins to overestimate the most probable outcomes and underestimate less probable ones, magnifying the biases in the algorithm until its predictions lose all value. In the case of LLMs, it takes only nine iterations of this process before their responses become gibberish.
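To make this dynamic concrete, the toy simulation below (our illustration, not code from the presentation) repeatedly re-fits a simple categorical “model” to samples drawn from its previous version; the starting distribution and sample size are arbitrary choices.

```python
# Toy simulation of recursive training: each generation is estimated only
# from samples produced by the previous generation. Rare outcomes are
# eventually sampled zero times and vanish, so probability mass piles up
# on the most likely outcomes -- a miniature version of model collapse.
import numpy as np

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.3, 0.15, 0.04, 0.01])  # generation 0: the "true" model

for generation in range(1, 10):
    samples = rng.choice(len(probs), size=200, p=probs)  # train on model output
    counts = np.bincount(samples, minlength=len(probs))
    probs = counts / counts.sum()                        # the next "model"
    print(generation, np.round(probs, 3))
```

Within a handful of generations, the rarest outcomes are typically sampled zero times and permanently disappear, while the dominant outcome’s share grows: the same overestimation of probable outcomes and loss of the tail that drives collapse in large models.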
Panelists for the discussion around the risks and privacy concerns in the enterprise use of generative AI brought years of experience navigating new technologies. They also have direct knowledge of the challenges involved in putting generative AI to good use.
Generally, panelists agreed that generative AI’s promise of huge productivity gains had created a gold rush. Yet a lack of sufficient guardrails for its adoption remains a concern. Some companies, panelists said, are not taking the issues seriously enough, while others are proceeding more cautiously. One panelist described a client who came to realize that their company’s reliance on voice recognition as a security measure was suddenly no longer viable. The broader lesson was that, in light of generative AI’s capabilities, particularly as a tool in the hands of bad actors, they needed to review and reassess all their trusted processes.
More cautious businesses have blocked their employees from accessing popular models on company-issued equipment. Others are taking a more proactive approach, launching large-scale training programs to ensure that employees understand the risks and appropriate uses of generative AI. Panelists observed that legal teams are racing to establish norms around vendor relationships and contracts for third-party AI products and services. Ensuring that generative AI systems protect privacy is critical for creating trustworthy AI, they emphasized. As long as companies are using widely available base models whose code they cannot access, building trustworthy systems will be challenging.
For this reason, a panelist said, generative AI vendors need to demonstrate that they have the necessary controls in place when building models. One way to do this is to have algorithms audited by a third party, with the results made available to clients. Similarly, establishing a standardized certification system could help assuage concerns about vendor security practices.
One panelist suggested that there are no easy answers for corporate leaders aiming to mitigate privacy and security risks when deploying AI models. The best advice, they said, is to establish a set of clear responsible AI policies. Using this as a foundation, companies can then build more specific policies governing generative AI usage. Another panelist emphasized that many of the challenges posed by generative AI are best addressed through academic collaboration and dialogue.
The second panel considered generative AI’s potential as a foundational tool for businesses to improve efficiency and enhance productivity.
One panelist suggested that the finance industry is particularly well prepared to embrace generative AI as a tool, since it already has well-established risk management practices and stringent regulatory requirements. The task is to build on those existing practices, carefully introducing new guardrails for any new tools and handling each use case differently depending on the degree of risk it introduces. A chatbot that gives financial advice to clients could be high risk, whereas a system that summarizes and recommends news stories for an internal analyst to follow up on is less risky.
There was broad consensus that generative AI will lead to significant changes in customer service. One panelist envisions generative AI becoming a valuable tool that empowers human customer service agents to better support customers by efficiently providing them with the information they need. Panelists also expected it to open up new digital channels for supporting customers, giving them a better user experience.
Asked about the scale of transition that generative AI will bring, one panelist predicted it will be comparable to the massive change of the 1980s and 1990s, when first personal computers and then the internet reprogrammed how we work and communicate. In their view, the new tools, apps, and changes that generative AI will bring are best understood as the latest turn in the decades-long digital revolution transforming the workplace.
The final session of the day featured roundtable discussions that included all of the participants. They gave an array of answers when asked about their biggest concerns around the implementation of generative AI. Some emphasized their belief that AI regulations will shape its use. Others were focused on how to create tools clients and customers will actually use and the challenge of keeping up with this rapidly changing technology. One participant noted the importance of forums like Vector’s roundtables, because they allow AI practitioners and leaders to talk to each other, hear from experts, and discuss challenges and opportunities.
In light of all the uncertainty surrounding this technology, participants were eager to discuss how to move forward. One stressed the importance of organizations ensuring that there is a human in the loop: someone responsible for vetting generative AI outputs before they go to clients and customers. As another participant pointed out, those people will need specialized training to detect and amend erroneous outputs, especially when those outputs conform to human biases or when machine hallucinations appear realistic and convincing.
Organizational change was another theme participants raised, echoing the earlier observation that the transformation to come will be similar in scope to those of the 1980s and 1990s. Adopting powerful technologies tends to require new organizational structures. One participant said that to take full advantage of generative AI, organizations will need to break down silos and facilitate important conversations about the tradeoffs involved in adopting it. What may be a boon for one part of an organization, they said, could create outsized problems for another.
Participants noted that government regulatory silos are creating challenges as well. It’s critical for different regulatory bodies to start talking to each other to assess how their different sets of regulations may conflict, said one participant. This is a particularly pressing issue for the financial industry, which must already conform to an array of different regulations.
Meanwhile, as AI regulations continue to be discussed and devised, organizations and industries that are not accustomed to stringent government oversight will have to play catch-up, implementing appropriate practices, such as documenting and reporting on their uses of AI, to meet their regulatory obligations.
Some participants mentioned concerns from their teams about implementing AI without sufficient clarity on how it will be regulated. One panelist believes that the best option is to forge ahead with AI while sticking to a sensible roadmap: start with basic principles governing the use of generative AI, articulate how those principles apply in more fine-grained detail for any proposed use cases, and then build generative AI tools designed to reflect those principles. Following this path, they suggested, will make it more likely that their AI tools will align with the coming regulations.
Following the roundtable, lively and spontaneous discussions continued as participants enjoyed lunch, shared their experiences, and exchanged ideas. As generative AI transforms how companies work, create, collaborate, share information, and engage clients and customers, these conversations will continue to be an invaluable way to keep up with the pace of change.
Read about Vector’s first and second roundtables on Generative AI.