Introducing FlexModel: Breakthrough Framework for Unveiling the Secrets of Large Generative AI Models

December 7, 2023


AI and Interpretability: Vector’s AI Engineering team has released a new interpretability framework for generative models, providing researchers with rich tools to improve the safety and trustworthiness of these models.

By Mark Coastworth and Matthew Choi

The world of machine learning is witnessing the rise of mammoth neural networks with billions of parameters. These large language models (LLMs) have demonstrated incredible abilities, primarily due to their generalization and in-context learning capabilities. But this massive growth in model size brings a significant challenge: the increased hardware requirements for training and deployment often require distributed infrastructure, splitting a model across multiple graphics processing units (GPUs) or even multiple nodes.

Although many tools exist for model parallelization and distributed training, deeper interactions with these models, such as retrieving or editing intermediate information, necessitate a strong grasp of distributed computing. This has been a roadblock for many machine learning researchers with limited distributed computing experience. As a result, these large models typically function as black boxes, making it hard to understand the reasons behind a given output in a way that is easily interpretable for humans.

What is FlexModel?

To solve this problem, members of Vector’s AI Engineering team developed FlexModel, a software package designed to provide a user-friendly interface for interacting with large-scale models distributed across multi-GPU and multi-node setups.

Introduced in “FlexModel: A Framework for Interpretability of Distributed Large Language Models,” which was selected as a spotlight paper at NeurIPS 2023, FlexModel accomplishes this by providing a common interface that wraps large models regardless of how they have been distributed (Accelerate, FSDP, DeepSpeed, etc.). It also introduces the concept of HookFunctions, which let users interact with distributed model internals during both forward and backward passes. These mechanisms are exposed through a simple API, released as a Python library called FlexModel. By integrating the library into their projects, researchers can quickly and easily gain rich insights into why a model behaves a certain way.

How does it work?

The FlexModel library provides a new class as the main interface for user interactions. This FlexModel class inherits from the commonly used PyTorch nn.Module class, so developers can interact with the wrapped model through the familiar nn.Module API without any code changes.

A simple initialization example looks like this:

from typing import Dict
from torch import Tensor
from transformers import AutoModelForCausalLM
from flex_model.core import FlexModel  # import path assumes the published flex_model package layout

model = AutoModelForCausalLM.from_pretrained("model-name")
model = accelerator.prepare(model)  # accelerator: an accelerate.Accelerator configured beforehand
output_dict: Dict[str, Tensor] = {}  # retrieved activations are gathered here, keyed by module name
model = FlexModel(model, output_dict, data_parallel_size=accelerator.num_processes)

Once a FlexModel has been instantiated, users may define a collection of HookFunctions: user-defined functions that perform fine-grained operations at individual layers of a neural network. The most common use case is activation retrieval: grabbing intermediate information from a model in order to understand how it arrives at an output decision. Another use case is editing this intermediate information to see how different internal states lead to different outputs. Both patterns are sketched below.
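
As a concrete sketch of the retrieval case, the snippet below registers a HookFunction on one layer of the model wrapped above. The module name and hidden dimension are placeholders, and the constructor arguments and register_hook_function call follow the paper’s description of the API; consult the library for the exact signatures.

from flex_model.core import HookFunction  # assumes the published flex_model package layout

# Hook one transformer layer's output. The module name and hidden size are
# placeholders; use the module names exposed by the wrapped model.
hook = HookFunction(
    module_name="model.layers.7.mlp",
    expected_shape=(None, None, 4096),
    editing_function=None,  # retrieval only: activations are saved, not modified
)
model.register_hook_function(hook)

# A normal forward pass now also populates output_dict with the hooked activations.
model(input_ids)  # input_ids: a batch of token IDs prepared elsewhere
layer_activations = output_dict["model.layers.7.mlp"]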

FlexModel has two major design goals. It should be intuitive: applying the FlexModel wrapper to a PyTorch nn.Module should simply add model-inspection features to the target model, unwrapping should return the original model without side effects, and a HookFunction’s editing function should allow arbitrary code to be run on the activations. It should also be scalable: FlexModel is agnostic to the number of GPUs or GPU nodes, the model architecture (e.g., LLaMA, Falcon, GPT), the model size, and the distribution strategy (e.g., DP, FSDP, TP, PP) or any composition thereof.
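
To illustrate the editing path, here is a minimal sketch of an editing function that rescales a layer’s activations mid-forward-pass. The four-argument form (the current module, the activation tensor, a save context, and auxiliary modules) follows the paper’s description, but the parameter names are illustrative and the exact signature may differ in the released library.

from torch import Tensor

def scale_activation(current_module, activation: Tensor, save_ctx, modules) -> Tensor:
    # Keep a copy of the original activation for later inspection, then scale it.
    save_ctx.original = activation.detach()
    return activation * 2.0

edit_hook = HookFunction(
    module_name="model.layers.7.mlp",   # placeholder module name, as above
    expected_shape=(None, None, 4096),  # placeholder hidden size
    editing_function=scale_activation,
)
model.register_hook_function(edit_hook)

Because the editing function is ordinary Python operating on the layer’s activation tensor, the same pattern supports arbitrary interventions without any distributed-systems code.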

What does this mean for the machine learning community?

FlexModel promises to democratize model interactions and bridge the gap between distributed and single-device model paradigms, enabling researchers who are not experts in distributed computing to interact with and modify distributed models without diving into the complexities of distributed systems.

As concerns about biases and fairness in AI have gained prominence, interpretability can help in detecting, understanding, and mitigating hidden biases in model decisions. Unraveling how these models arrive at decisions, how they’ve learned specific behaviors, and understanding their internal mechanics can give us insights into building more robust, trustworthy, and efficient AI systems.

Many sectors, such as medicine, finance, and law, are already introducing regulations requiring that models making decisions that affect humans be interpretable, ensuring those decisions are made transparently. With tools like FlexModel, researchers can now engage in interpretability research without being burdened by the technical complexities of distributed computing.

Conclusion

Tools like FlexModel underscore the significance of making advanced AI research inclusive and universally approachable. By lowering the barriers to interpretability research in LLMs, FlexModel brings us a step closer to making state-of-the-art machine learning more accessible, interpretable, safe and trustworthy.
