How SMBs can manage the opportunities and risks of deploying AI

July 19, 2023


Vector’s Collision Conference masterclass emphasized the economic opportunities for SMBs leveraging AI, while suggesting ways these firms can navigate potential risks.

By Ian Gormely

The past year has seen a massive shift in public perceptions of AI, along with reasonable concerns about the rapid development of powerful AI models. AI is no longer something used only by tech companies: with quickly evolving generative AI models, adoption is broadening across industries and sectors and among the general public.

Given this crucial inflection point, the Vector Institute chose to use its recent Masterclass panel at this year’s Collision conference not only to emphasize the economic opportunities for small and medium-sized businesses (SMBs) that leverage AI, but also to help these firms navigate the attendant benefits and risks.

Responsible AI for SMBs: Insights from AI leaders

Called “Responsible AI for SMBs: Insights from AI leaders,” the session was led by Deval Pandya, Vector’s Vice President of AI Engineering. Rounding out the panel were Vector Faculty Member Sheila McIlraith, a professor in the Department of Computer Science at the University of Toronto and Associate Director and Research Lead of the Schwartz Reisman Institute for Technology and Society; James Stewart, CEO and founder of TrojAI; and Mardi Witzel, Vice President, AI Governance Programs at, and Board Member of, PolyML.


“We are in a truly transformative moment. It’s a really exciting time… but also a time for some care and some risk…we’re in a little bit of the wild west.”

Sheila McIlraith

Professor in the Department of Computer Science at University of Toronto and Associate Director and Research Lead of the Schwartz Reisman Institute for Technology and Society

Until last November, SMEs often found it difficult to identify a use case for AI in their business, said Witzel. “This is what has changed with generative AI.” Now, she says, it’s easy for businesses to see the low-hanging fruit use cases, a development that will lead to “a bottom-up cultural change.”

For this to happen, though, businesses need to be in control of their data. “It’s your data that enables you to derive insights, and AI is the tool essentially that allows you to mine it.” She pointed to an auto parts manufacturer that used AI to improve welding efficiency and a utility building an AI-enabled customer service chatbot as examples of Ontario companies already leveraging the technology to improve their bottom line.

Indeed, as AI increasingly becomes a competitive differentiator, Pandya speculated that one of the biggest risks for SMEs could be not integrating the technology into workflows. But all agreed that a rush to adopt AI into businesses could perpetuate safety and fairness issues, leaving companies of all sizes vulnerable to reputational and financial risks. 

Stewart, whose company helps its clients protect their AI systems, says he’s already seeing the full spectrum of reactions to the mainstreaming of generative AI tools like ChatGPT. Some companies are banning it outright, while others are encouraging employees to experiment with AI tools to find areas of competitive advantage. 

But he sees neither as the best approach. “To me, risk comes down to mitigating possible harm,” he said, which can be done by quantifying the bias, fairness, interpretability, explainability, robustness, security, and privacy of a model. 

While McIlraith, Witzel, and Stewart all agreed that guardrails are necessary to ensure that the technology’s biggest risks don’t come to pass, they were of different minds as to who should erect them and how, with policymakers, government regulators, AI developers, and individual companies all cited as potential avenues for regulation.

Ultimately, all three panelists agreed that it is incumbent on businesses to educate themselves as they implement AI. Pandya pointed to Vector’s recently released AI Trust and Safety Principles, which SMEs can use as a guide as they develop their own codes of conduct and AI policies.


“You have a duty of care to ensure that you’re using these technologies responsibly. It’s up to the business to balance the risk against the innovation.”

James Stewart

CEO and founder of TrojAI

For its part, Vector will continue working with governments, policymakers, and stakeholders to establish guardrails and guide responsible adoption. While there will be bumps along the way, as there are with any new technology, McIlraith remains optimistic. “We will figure this out.” 
