Standardized protocols are key to the responsible deployment of language models

May 3, 2024


By Ian Gormely

There is a pressing need for standardized protocols if language models (LMs) are to be deployed responsibly in real-world scenarios. That was the consensus of a panel of experts at the Responsible Language Models (ReLM) workshop, held during this year’s Association for the Advancement of Artificial Intelligence (AAAI) conference in Vancouver.

The day-long workshop, which Vector helped organize, focused on the responsible development, implementation, and application of LMs, including the large language models (LLMs) that power chatbots like ChatGPT. It offered insights into the ethical creation and use of LMs, addressed critical issues such as bias mitigation and transparency, and underscored the importance of establishing robust guidelines for deploying these technologies responsibly.

The panel, “Bridging the Gap: Responsible Language Model Deployment in Industry and Academia,” featured Antoaneta Vladimirova, Applied Medical AI Lead at Roche; Donny Cheung, Healthcare and Life Sciences AI Lead at Google Cloud; Emre Kiciman, Senior Principal Research Manager at Microsoft Research; Eric Jiawei He, Machine Learning Research Team Lead at Borealis AI; and Jiliang Tang, University Foundation Professor in the Department of Computer Science and Engineering at Michigan State University. It was moderated by Peter Lewis from Ontario Tech University.

The panelists emphasized that the growing reliance on LMs across applications makes standardized protocols essential. Without them, LM deployment risks unintended consequences that could undermine public trust in AI technologies.

How misinformation spreads online

Filippo Menczer, Luddy Distinguished Professor of Informatics and Computer Science at Indiana University, delivered the keynote, “AI and Social Media Manipulation: The Good, the Bad, and the Ugly.” It provided a deep dive into the dynamics of how information and misinformation proliferate across social networks.

Menczer discussed analytical and modeling techniques that help researchers understand how both true and false information spreads, and introduced AI-powered tools designed to combat misinformation. He emphasized that while AI offers innovative ways to detect and counter disinformation, the same capabilities can be misused to make false information more effective. Highlighting this dual-edged nature, he noted that the tools that can help safeguard our information ecosystem can also complicate it and challenge its integrity. His insights underscored the balance needed to develop AI tools that resist misuse, and the importance of anticipating unintended consequences when deploying AI technologies.

Lack of transparency hinders replication

Among the six invited speakers at the workshop was Vector Faculty Member Frank Rudzicz, who delved into the challenges of reproducibility in language model development during his presentation, “Quis custodiet ipsos custodes?” He argued that the prevailing methods of developing language models make it difficult to ensure their reliability and predictability, and that this lack of transparency and consistency can hinder scientific validation and replication. He stressed the importance of adopting more robust and open practices to mitigate these issues, advocating for greater accountability and standardization in the field so that language models are both effective and trustworthy. His insights contributed to a broader discussion on the need for ethical standards and rigorous methodologies in advancing AI.

Of the 40 papers submitted to the workshop, 21 were accepted, including six spotlight presentations and 15 posters. “Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs” won the workshop’s Best Paper award, while “Inverse Prompt Engineering for Safety in Large Language Models” was a runner-up.
