Resources: Projects, tools, frameworks and best practices
Put AI trust and safety principles into practice
Looking to put AI Trust & Safety principles into practice? We’ve curated a robust collection of resources to guide your journey. From case studies showcasing real-world implementation in Health and Financial Services to open-source tools and frameworks, our resource hub has everything you need to build and deploy responsible AI systems. Check out our AI trust and safety resources:

AI systems: Evaluation, modeling & benchmarking
Vector’s AI engineering team has developed tools, including a framework that puts these principles into practice to guide the integration of AI into products. The framework, which will be launching soon, addresses each stage of the product development life cycle. Check out some of our other open-source solutions and resources below, for both AI researchers and practitioners across sectors.

UnBIAS is a cutting-edge text analysis and debiasing toolkit that assesses and corrects biases in textual data. It was built to tackle the inherent biases in AI systems and to promote the ethical use of LLMs, addressing the urgent need for accurate and unbiased information dissemination in today’s digital age.
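
UnBIAS’s own interface isn’t reproduced here, but the detect-then-rewrite pattern that text-debiasing toolkits typically follow can be sketched in a few lines of Python. Everything below – the stub classifier, the small lexicon, and the rewriter – is illustrative only, not UnBIAS code.

# Illustrative sketch of a detect-then-rewrite debiasing pipeline.
# This is NOT UnBIAS's API; the classifier and rewrite rules are stubs.

BIAS_LEXICON = {
    "hysterical": "upset",    # gendered connotation
    "manpower": "workforce",  # gendered term
    "crippled": "hindered",   # ableist term
}

def classify(sentence: str) -> str:
    """Stub bias classifier: flags sentences containing lexicon terms.
    A real toolkit would use a fine-tuned language model instead."""
    tokens = (t.strip(".,").lower() for t in sentence.split())
    return "BIASED" if any(t in BIAS_LEXICON for t in tokens) else "NEUTRAL"

def debias(sentence: str) -> str:
    """Stub rewriter: swaps flagged terms for neutral alternatives."""
    out = []
    for token in sentence.split():
        key = token.strip(".,").lower()
        out.append(BIAS_LEXICON.get(key, token))
    return " ".join(out)

for s in ["The manpower was crippled by delays.", "The committee met on Tuesday."]:
    label = classify(s)
    print(label, "->", debias(s) if label == "BIASED" else s)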

FlexModel is an interpretability framework for generative models, providing a user-friendly software interface for interacting with large-scale models distributed over multi-GPU and multi-node setups. It helps make advanced AI research more inclusive and universally approachable, lowering barriers to entry and increasing the safety and trustworthiness of these models.
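
FlexModel’s actual interface isn’t shown here; the Python sketch below illustrates the underlying idea on a toy single-device model, retrieving intermediate activations with plain PyTorch forward hooks. FlexModel generalizes this kind of access to models sharded across many GPUs and nodes.

# Minimal sketch of activation retrieval via forward hooks in plain PyTorch.
# FlexModel wraps this idea with a uniform interface for sharded models;
# this toy example runs on a single device.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 8),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Detach so saved tensors don't keep the autograd graph alive.
        activations[name] = output.detach()
    return hook

# Register a hook on the hidden layer we want to inspect.
model[1].register_forward_hook(save_activation("relu_out"))

with torch.no_grad():
    _ = model(torch.randn(4, 16))

print(activations["relu_out"].shape)  # torch.Size([4, 32])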

CyclOps is a set of evaluation and monitoring tools that health organizations can use to develop and evaluate sophisticated machine learning models in clinical settings across time, locations, and cohorts.
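
As a rough illustration of slice-based evaluation of the kind CyclOps enables (this is not CyclOps’s API), the Python sketch below computes a metric for each time period and site in a hypothetical predictions log:

# Sketch of slice-based model evaluation: compute a metric per cohort
# and time period to surface performance drift. The data is made up.
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical predictions log: one row per patient encounter.
df = pd.DataFrame({
    "year":  [2021, 2021, 2021, 2021, 2022, 2022, 2022, 2022],
    "site":  ["A", "A", "B", "B", "A", "A", "B", "B"],
    "label": [1, 0, 1, 0, 1, 0, 1, 0],
    "score": [0.9, 0.2, 0.7, 0.4, 0.8, 0.3, 0.55, 0.5],
})

# AUROC per (year, site) slice; real tooling adds many more metrics,
# confidence intervals, and monitoring over time.
for (year, site), g in df.groupby(["year", "site"]):
    print(year, site, round(roc_auc_score(g["label"], g["score"]), 3))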

The Principles in Action Playbook, developed by Vector’s AI Engineering team, contains guidance, examples, and tips to help those building products that leverage AI do so responsibly. It was created to help professionals on the front lines – entrepreneurs, product managers, designers, tech leads, and domain experts – navigate the complexities of AI product development.

Kaleidoscope, from Vector’s AI Engineering team, provides on-demand access to foundation models, empowering researchers and industry partners who may not have deep technical knowledge to dive right in and begin experimenting with them. A key enabler of research and adoption, Kaleidoscope was used over 700,000 times by almost 100 users from Vector’s community in the 2022-23 year.
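
The workflow Kaleidoscope enables can be pictured as follows: instead of provisioning GPUs themselves, users submit prompts to models hosted on shared infrastructure. The Python sketch below is purely hypothetical – the endpoint URL, payload fields, and response shape are placeholders, not Kaleidoscope’s API.

# Hypothetical sketch of a hosted foundation-model workflow. The URL,
# payload, and response fields are placeholders, not Kaleidoscope's API.
import requests

GATEWAY = "https://models.example.org/generate"  # placeholder URL

def generate(prompt: str, max_tokens: int = 64) -> str:
    resp = requests.post(
        GATEWAY,
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # placeholder response field

if __name__ == "__main__":
    print(generate("Summarize the goals of trustworthy AI in one sentence."))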

Responsible AI Startups (RAIS) Framework
Underwriting Responsible AI: Venture capital needs a framework for AI investing – Radical Ventures launched this framework with technical advisory support from the Vector Institute. RAIS is intended as a guide for early-stage companies creating and using AI as a meaningful aspect of their product.

FLorist is a platform to launch and monitor Federated Learning (FL) training jobs. Its goal is to bridge the gap between state-of-the-art FL algorithm implementations and their applications by providing a system to easily kick off, orchestrate, and manage FL4Health training jobs, and to collect and summarize their results.

FL4Health is a flexible, modular, and easy-to-use library that facilitates federated learning (FL) research and development in healthcare settings. It aims to make state-of-the-art FL methods accessible and practical, including personalized algorithms meant to tackle difficult settings such as the heterogeneous data distributions common in clinical environments.
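
The aggregation step at the heart of many FL methods, federated averaging (FedAvg), is easy to sketch. The Python below is a generic illustration rather than FL4Health’s API: the server averages client model weights in proportion to each client’s local dataset size.

# Generic FedAvg aggregation sketch (not FL4Health's API): the server
# averages client model weights, weighted by each client's dataset size.
import numpy as np

def fedavg(client_weights, client_sizes):
    """client_weights: one list of ndarrays per client (layer by layer);
    client_sizes: number of local training examples per client."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Two clients, one weight matrix each; client 0 has twice the data.
c0 = [np.ones((2, 2))]
c1 = [np.zeros((2, 2))]
print(fedavg([c0, c1], client_sizes=[200, 100])[0])  # ~0.667 everywhere

Libraries like FL4Health layer personalized algorithms, privacy mechanisms, and orchestration on top of this basic aggregation loop.
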
Applied safe and trustworthy AI projects, workshops, and courses
Vector convenes large enterprises, startups, AI researchers, and policymakers to test, experiment with, and solve AI trust and safety problems together – uncovering insights and driving innovation. In this format, projects can take two forms: 1) technical, where code is built for the participating organizations, or 2) thought leadership and insights driven, helping organizations create building blocks for their own safe and trustworthy deployment of AI solutions. Read more about our collaborative project work below.
Managing AI Risk

How to safely implement AI systems

Generative AI for Enterprise: Risks and Opportunities

Vector Institute hosts first-of-its-kind Generative AI Leadership Summit

Trustworthy AI Themes for Business from the Vector Community

Bias in AI Program: Showing Businesses How to Reduce Bias and Mitigate Risk

Safe AI implementation in health: Why the right approach matters
Vector’s Trustworthy AI series:
Past courses, webinars and events
AAAI Workshop on Responsible Language Models (ReLM) 2024: Organized by the Vector Institute, this workshop provided valuable multidisciplinary insights into the ethical creation and use of language models, addressing critical aspects such as bias mitigation and transparency. Attendees gained knowledge on a range of topics, including explainability, robustness, and ethical AI governance, advancing their education in creating safe and trustworthy AI technologies. To read the supplements, papers, and outcomes, visit here.
Defining AI’s guardrails: a PwC-Vector Fireside Chat on Responsible AI (2021)
Want to stay up-to-date on Vector’s work in AI trust and safety?
Join Vector’s mailing list here: