Vector workshops give insights into responsible health AI deployment

August 21, 2024


Trustworthy and safe health AI deployment recognizes urgency, addresses a specific need, and delivers genuine value to users. This was the consensus at the Principles to Practice: Enabling Responsible AI in Healthcare event co-hosted by the Vector Institute and Vector Gold sponsor EY. A follow-up technical workshop was held shortly after. 

Held on May 13, 2024, the event brought together over 80 health care leaders from private, public, and research sectors. With a goal of providing insights into health AI deployment to our partners, it focused on sharing tangible deployment steps to promote safety, including strategies for monitoring AI models throughout their lifecycle, ensuring data integrity, and implementing rigorous validation and maintenance processes. Part of Vector’s commitment to responsible AI, the event gave leaders actionable steps and concrete tools to develop and integrate AI into clinical settings safely. 

Vector researchers get real about AI limitations and how to pivot

Developing and deploying AI models safely in health care isn’t always straightforward. To date, only a small proportion of health AI research initiatives have been successfully translated into clinical settings. This is often due to challenges with developing and implementing robust deployment strategies that incorporate the necessary steps and best practices. Clinical use case presentations highlighted how AI can address pressing healthcare issues when deployed responsibly.

Vector Faculty Affiliates Amol Verma and Fahad Razak — who also co-founded the health data-sharing network GEMINI and partnered with Vector to enable GEMINI for AI/ML discovery — delivered a presentation on an AI initiative for delirium, an acute confusional state and a significant problem faced by clinicians. They demonstrated the groundwork behind implementing AI to identify when a patient has delirium, a condition that is difficult for clinicians to recognize early enough to deliver preventive care. Up to 40% of cases are preventable with simple interventions, but delirium is not well captured in logged data. The GEMINI team identified this as an opportunity to use machine learning (ML) for quality improvement and developed a scalable AI tool for accurate delirium detection and risk prediction. They have been using CyclOps, a Vector Institute open-source tool, to assist in monitoring the model’s performance.
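
For readers curious what such a tool might look like under the hood, the sketch below shows a minimal delirium risk-prediction workflow on tabular hospital data. It is an illustration only: the feature names, synthetic labels, and model choice are assumptions for demonstration, not GEMINI’s actual pipeline or the CyclOps API.

```python
# Minimal sketch of a delirium risk-prediction workflow on tabular hospital data.
# Feature names, labels, and the model choice are illustrative assumptions,
# not GEMINI's actual pipeline.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical routinely collected features for admitted patients
X = pd.DataFrame({
    "age": rng.integers(18, 100, n),
    "num_medications": rng.integers(0, 20, n),
    "prior_cognitive_impairment": rng.integers(0, 2, n),
    "icu_admission": rng.integers(0, 2, n),
})

# Synthetic label standing in for chart-review-confirmed delirium
logits = 0.03 * (X["age"] - 70) + 0.8 * X["prior_cognitive_impairment"] + 0.5 * X["icu_admission"]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUROC: {roc_auc_score(y_test, risk):.3f}")
```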


“Only a small proportion of health AI research initiatives have been successfully translated into clinical settings. This is often due to challenges with developing and implementing robust deployment strategies that incorporate the necessary steps and best practices.”

Vector Faculty Affiliate Benjamin Fine, who is also a clinician scientist at Trillium Health Partners (THP), walked the audience through a process to evaluate the safety of a commercially procured AI tool for triaging acute stroke patients based on perfusion imaging. Every second counts in these situations, Fine said, so when a model is assessing and triaging a time-sensitive condition, proper performance is vital. Fine and the AI Deployment and Evaluation (AIDE) lab at THP are following best practices and conducting monitoring across the entire AI product life cycle using CyclOps to ensure the solutions they are using remain effective.
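
As a rough illustration of this kind of local validation, the sketch below compares a procured tool’s alerts against retrospective clinician labels and reports sensitivity and specificity. The column names, data, and framing are hypothetical, not THP’s or the vendor’s actual evaluation protocol.

```python
# Illustrative local validation of a procured triage tool against retrospective
# clinician labels; column names and data are hypothetical.
import pandas as pd
from sklearn.metrics import confusion_matrix

# Hypothetical retrospective cohort: vendor alert flag vs. confirmed diagnosis
cohort = pd.DataFrame({
    "vendor_alert":        [1, 0, 1, 1, 0, 0, 1, 0],
    "confirmed_diagnosis": [1, 0, 1, 0, 0, 0, 1, 1],
})

tn, fp, fn, tp = confusion_matrix(
    cohort["confirmed_diagnosis"], cohort["vendor_alert"]
).ravel()

sensitivity = tp / (tp + fn)   # missed time-sensitive cases are the costliest error
specificity = tn / (tn + fp)   # excess false alerts erode clinician trust
print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```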

Also onstage were Vector Faculty Member Michael Brudno and Vector Faculty Affiliate Chris McIntosh, who spoke about building a new AI model for detecting pneumothorax at the University Health Network (UHN). “Two AIs are better than one,” said Brudno when speaking about some initial hardships faced during development. UHN’s Data Aggregation, Translation, and Architecture (DATA) team was receiving a lot of false positives; they pivoted and created one model for detecting the presence of pneumothorax and another to predict the presence of chest tubes. Now, after successful deployment into an existing radiologist dashboard named Coral, they are using CyclOps to monitor the model’s performance over time.
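
A rough sketch of how two such models could be combined is shown below. The suppression rule (skip the alert when a chest tube is already visible, since that pneumothorax is presumably already being treated) and the thresholds are assumptions for illustration, not UHN’s published logic.

```python
# Sketch of combining two image classifiers' outputs. The rule and thresholds
# below are assumptions for illustration, not UHN's actual deployment logic.
from dataclasses import dataclass

@dataclass
class TriageDecision:
    pneumothorax_prob: float
    chest_tube_prob: float
    alert: bool

def triage(pneumothorax_prob: float, chest_tube_prob: float,
           ptx_threshold: float = 0.8, tube_threshold: float = 0.5) -> TriageDecision:
    """Raise an alert only for likely pneumothorax without an existing chest tube."""
    alert = pneumothorax_prob >= ptx_threshold and chest_tube_prob < tube_threshold
    return TriageDecision(pneumothorax_prob, chest_tube_prob, alert)

# High pneumothorax score but a chest tube already present -> no alert
print(triage(0.93, 0.88))
# High pneumothorax score and no chest tube -> alert the radiologist dashboard
print(triage(0.93, 0.10))
```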

The event also included remarks from Roxana Sultan, Vector’s Chief Data Officer and Vice President, Health, and presentations from EY partners Dai Mukherjee and Shannon MacDonald, and Vector’s Ryan MacDonald, Director, Health AI Implementation, and Carolyn Chong, Senior Product Manager. A keynote panel featuring Jennifer Gibson (University of Toronto), Vector Faculty Affiliate Devin Singh (Sickkids, Hero AI), and Cathy Cobey (EY) and moderated by Safia Rahemtulla (EY) closed the event with a discussion on the pace of safe AI implementation. 

“Modeling” best practices

Building on the success of the Principles to Practice event, Vector hosted a follow-up technical workshop on June 20, 2024. This session focused on CyclOps and delved into the technical aspects of monitoring ML performance, providing developers with tools that help make AI solutions safer.
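
To give a flavour of the kind of lifecycle monitoring discussed, the sketch below scores a deployed model’s logged predictions month by month and flags slices that fall below a reference AUROC. It is a generic illustration with synthetic data and an assumed reference value, not the CyclOps API itself.

```python
# Generic sketch of post-deployment performance monitoring: score each monthly
# slice of logged predictions and flag drops below a reference AUROC.
# Synthetic data; the reference value is an assumption, not a CyclOps default.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 3000

# Hypothetical prediction log: month, model score, and later-confirmed outcome
log = pd.DataFrame({
    "month": rng.choice(["2024-01", "2024-02", "2024-03"], n),
    "score": rng.random(n),
})
log["outcome"] = (rng.random(n) < 0.3 + 0.4 * log["score"]).astype(int)

REFERENCE_AUROC = 0.70  # hypothetical value from pre-deployment validation
for month, slice_df in log.groupby("month"):
    auroc = roc_auc_score(slice_df["outcome"], slice_df["score"])
    status = "OK" if auroc >= REFERENCE_AUROC - 0.05 else "DEGRADED - review model"
    print(f"{month}: AUROC={auroc:.3f} [{status}]")
```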

During a roundtable discussion, participants echoed the need for robust change management strategies to foster trust in AI within their organizations. They also stressed the importance of refining evaluation methods to accurately reflect real-world model performance and to provide meaningful insights into clinical impact.

Healthcare leaders and ML developers share a responsibility to ensure that AI-enabled tools are not only innovative but also trustworthy and safe. The discussions, insights, and best practices shared during the event and workshop stress the urgency of deploying AI in a manner that genuinely benefits patients and clinicians alike. By fostering collaboration and prioritizing ethical considerations, these gatherings set a strong foundation for the continued responsible integration of AI in healthcare.

A forthcoming white paper with key insights from the event and Vector’s recommendations to address the most pressing health AI implementation challenges is due in Fall 2024.
