How businesses can balance AI innovation and cybersecurity

April 1, 2024

By John Knechtel

The rise of generative artificial intelligence (genAI) is creating new cybersecurity challenges that call for a layered response built on the foundation of existing defences, concluded cybersecurity experts and executives at a recent Vector Institute Cybersecurity Risks and AI roundtable. The session was held to help companies balance the need for AI-driven innovation with robust cyber defence strategies.

Rapid innovation and growth in genAI technologies present a fundamental challenge, with attackers accessing new tools and manufacturing new threats daily. Yet, the same factors that are increasing the volume and scale of cybersecurity threats are also enabling companies to deploy stronger defences.

The technology is simultaneously broadening attack surfaces – the sum of the different points where a bad actor can try to penetrate a digital system – while allowing bad actors to develop more sophisticated attacks, raising the stakes for corporate security. “If AI is the next internet, cybersecurity will be paramount,” is how one workshop attendee framed it. 

The scale of potential harm is increasing. Cybersecurity Ventures, a research and market intelligence company, estimates that global cyber damage will exceed $10 trillion by 2025. “You’re going to see more attacks with higher stakes, greater penetration, and greater facility,” predicted an attendee. “Some of the large breaches have cost hundreds of millions of dollars.” Meanwhile, the costs for bad actors are falling. “It is very low cost to access, and when combined with ML and AI the type of attack that can be constructed scares me to death,” confessed a participant.

Given this dynamic, companies will have to assess where they are most vulnerable and set priorities. “Companies don’t have bandwidth to address everything. They will need to ask: what is the most important thing to protect? The risk is that they will stay away from looking because it is too expensive to fix,” one participant warned.

Literacy

Because of generative AI’s natural-language interface, the technology is no longer the province of technical experts: every employee can use it directly. This new reality of universal access will require employees across the organization to apply cybersecurity, privacy, and ethical considerations for the first time. That poses a challenge, given that many employees do not consistently follow the cybersecurity policies already in place.

“People don’t even follow the basics of cybersecurity rules,” said one participant. Looking at recent breaches, for example, “when you see how they got in, you see that we still have issues with fundamentals like passwords.” Just as worrying, some employees simply circumvent their company’s policies and systems. In one case, a firm found its code posted publicly – the developers had used personal computers and cloud services to bypass official channels.

Developing employee awareness and compliance requires a programme of testing and adaptation, as sketched below. As one participant described: “In our own experience at a bank, we focused heavily on the training side, with rigorous annual training and testing. The bank sends out test emails, etc., to see whether employees have the appropriate awareness to click the links or not. Then we aggregate results to gauge the awareness level. If the awareness level is low, we do additional training. For organizations working on behaviours, putting in training protocols for teams working on sensitive data, that’s very tangible.”
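As a minimal sketch of how that kind of aggregation might work (the departments, campaign records, and 25% retraining threshold below are hypothetical, not details of the bank’s actual programme), a team could compute click rates per group from simulated phishing results and flag groups needing additional training:

```python
from collections import defaultdict

# Hypothetical records from a simulated phishing campaign:
# (employee's department, whether the employee clicked the test link)
results = [
    ("finance", True), ("finance", False), ("finance", False),
    ("engineering", False), ("engineering", True),
    ("hr", True), ("hr", True), ("hr", False),
]

RETRAIN_THRESHOLD = 0.25  # assumed: above a 25% click rate, retrain

def awareness_report(records):
    clicks = defaultdict(int)
    totals = defaultdict(int)
    for dept, clicked in records:
        totals[dept] += 1
        clicks[dept] += int(clicked)
    # Per-department click rate, flagging groups over the threshold
    return {
        dept: {
            "click_rate": clicks[dept] / totals[dept],
            "needs_training": clicks[dept] / totals[dept] > RETRAIN_THRESHOLD,
        }
        for dept in totals
    }

for dept, stats in awareness_report(results).items():
    print(f"{dept}: {stats['click_rate']:.0%} clicked, "
          f"needs training: {stats['needs_training']}")
```

In a real programme, records like these would come from the phishing-test platform itself, and the aggregates would feed the awareness gauge the participant described.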

Governance

Many companies feel vulnerable. They see AI “being embedded into every business process but nobody knows if their AI is secure.” One workshop participant reported that, according to KPMG, only 56% of Canadian CEOs think they are prepared for cyberattacks, and 93% worry that generative AI will make them more vulnerable to breaches. “We are approaching AI very, very carefully,” said another participant. “Every use case is carefully evaluated, tested, [and] given safety scrutiny before it is accepted.”

Corporate control functions and audit teams will have to understand and mitigate AI risks, so AI expertise needs to be built into those teams. “When I look at the whole ecosystem for the business changing every leg on the table,” said one participant, “it’s clear management needs AI expertise.”

To be effective, companies need to “make sure the Board and senior management create the oxygen required for AI work,” said one participant. This leadership commitment will be required “to get the buy-in from the entire organization to move in a direction in a way you’re not used to behaving before.” 

Models

Extending cybersecurity infrastructure will take significant resources (cybersecurity compute requirements are increasing 4.2x per year, reported one participant) and sustained effort.

Large language models are hard to assess and complex to manage, so keeping tabs on them poses a tough challenge. As one participant put it: “All the providers are embedding AI in their products, so the invisibility is massive. There’s a lot of risk that will still come from that.”

The task of verifying and regulating models is more than any one company can manage on its own. Participants proposed an independent body to do this work. “We need more standardization, more enforcement of standards, more independent certification of quality. In our case, deploying AI without certification in live systems is out of the question,” commented one.

The potential for enhanced AI systems to support cybersecurity defences and mitigate risk is clear. “With a natural language connection to a live system, AI models can look across the global attack surface, which is pretty broad, learn correlations from successful attacks, isolate the more probable attacks, and fix the system to prevent them. There is a lot of value in that,” one participant observed.

But AI models need to work from rich data, such as examples of successful attacks. “There’s just a sparsity of good examples to train off,” said a participant. Simulated bot attacks are a cheaper way to test defences, and the data they generate can then be used to train the next generation of cybersecurity systems.
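As a hedged sketch of that pipeline (the traffic features, the synthetic distributions, and the choice of a random forest classifier are all illustrative assumptions, not anything specified at the roundtable), simulated attack records can be mixed with benign traffic to train a simple detector:

```python
# A minimal sketch: train a detector on simulated attack data.
# The features (request rate, payload size, failed logins) and the
# synthetic distributions below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated benign traffic: modest request rates, few failed logins
benign = rng.normal(loc=[10, 500, 0.2], scale=[3, 150, 0.4], size=(500, 3))
# Simulated bot attacks: high request rates, many failed logins
attacks = rng.normal(loc=[80, 300, 6.0], scale=[20, 100, 2.0], size=(500, 3))

X = np.vstack([benign, attacks])
y = np.array([0] * len(benign) + [1] * len(attacks))  # 1 = attack

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Fit a simple classifier and check it on held-out simulated traffic
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2%}")
```

The value of the simulation is on the data side: each simulated attack contributes a labelled example of the kind that real-world logs rarely provide.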

While the emergence of generative AI clearly poses significant challenges, workshop attendees suggested that, with the approaches outlined above, companies will be able to achieve an effective, layered defence.
