FairSense: Integrating Responsible AI and Sustainability

January 21, 2025

AI Engineering

Authors: Shaina Raza, Mark Coatsworth, Tahniat Khan, and Marcelo Lotif

A new AI-driven platform extends bias detection to both text and visual content while leveraging energy-efficient AI frameworks. Developed by Shaina Raza, an Applied ML Scientist in Responsible AI, and Vector’s AI Engineering team, FairSense-AI balances energy efficiency with robust bias detection and safety.

With data centres accounting for up to 2% of global electricity usage, concerns about GenAI’s environmental sustainability are rising alongside existing challenges around bias and misinformation. FairSense-AI pairs energy-efficient AI frameworks with an AI-backed system for identifying bias in multi-modal content and an AI-driven risk management tool that gives users a structured approach to identifying, assessing, and mitigating AI-related risks. A Python package lets programmers easily integrate FairSense-AI into their software.


A screenshot of the Fairsense-AI web platform interface. The header displays the Fairsense-AI logo, with a description stating that the platform is an AI-driven tool for analyzing bias in textual and visual content, designed to promote transparency, fairness, and equity in AI systems. Four key features are listed: Text Analysis for detecting biases and highlighting problematic terms; Image Analysis for evaluating images for embedded text and captions; Batch Processing for analyzing large datasets; and AI Governance for insights into ethical AI practices. Seven navigation tabs are visible across the top: Text Analysis, Image Analysis, Batch Text CSV Analysis, Batch Image Analysis, AI Governance and Safety, AI Safety Risks Dashboard, and About Fairsense-AI. The Text Analysis tab is active, showing a text input field with a 'Use Summarizer?' checkbox and an Analyze button. An example input reads 'Some people say that women are not suitable for leadership roles.' Below the input, two further example statements are shown for comparison. At the bottom of the interface, a section titled 'Analysis of bias and the target group' displays output identifying the tone as critical and condescending and noting that the targeted group appears to be women.

Fairsense-AI analyzes text for bias, highlighting problematic terms and providing insights into stereotypes. The tool demonstrates how AI can promote fairness and equity in language analysis.

What Does It Do?

Building on UnBias, a previous bias neutralization tool developed by Vector, FairSense-AI identifies subtle patterns of prejudice, stereotyping, or favoritism to enhance fairness and inclusivity in digital content (text and images). Additionally, FairSense-AI leverages large language models (LLMs) and vision-language models (VLMs) that are optimized for energy efficiency, minimizing its environmental impact.

A bar chart titled 'Carbon Emissions Reduction in Llama 3.2 1B (kg)' comparing CO₂ emissions before and after optimization. The y-axis shows CO₂ emissions in kilograms on a logarithmic scale ranging from 10⁻² to 10⁵. The first bar, labelled 'Before Optimization', is red and reaches approximately 10⁵, annotated as 107,000 kg. The second bar, labelled 'After Optimization', is green and sits just above 10⁻², annotated as 0.012 kg. The logarithmic scale highlights the dramatic reduction, over nine orders of magnitude, between the two stages.

Optimization techniques reduced emissions to just 0.012 kg CO2, demonstrating that responsible AI practices can be both environmentally impactful and cost-effective in training LLMs

The tool’s reduced environmental impact can be seen by comparing the carbon emissions from Llama 3.2 1B (one of the foundational models integrated into it) before and after optimization and fine-tuning. Emissions were reduced from 107,000 kg to just 0.012 kg per hour of inference, highlighting how green AI goals can be achieved without compromising functionality or flexibility. The CodeCarbon software package was used to assess the environmental impact of code execution: it tracks electricity consumption during computation and converts it into carbon emissions (measured in kilograms) based on the geographical location of the processing.
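The conversion CodeCarbon performs can be sketched in a few lines: energy consumed (in kWh) multiplied by the carbon intensity of the local electricity grid. The function and numbers below are illustrative, not measurements from the FairSense-AI runs.

```python
# Minimal sketch of CodeCarbon-style accounting: energy used during a
# computation is converted to CO2 emissions via the local grid's carbon
# intensity. All values here are illustrative, not measured.

def emissions_kg(power_watts: float, runtime_hours: float,
                 grid_intensity_kg_per_kwh: float) -> float:
    """CO2 emissions (kg) = energy (kWh) x grid carbon intensity (kg/kWh)."""
    energy_kwh = power_watts * runtime_hours / 1000.0
    return energy_kwh * grid_intensity_kg_per_kwh

# Example: a 300 W GPU running for one hour on a ~0.13 kg CO2/kWh grid
print(round(emissions_kg(300, 1.0, 0.13), 3))  # ~0.039 kg
```

Because the grid intensity factor varies by region, the same workload can emit very different amounts of CO₂ depending on where it runs, which is why CodeCarbon accounts for geographical location.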

How Does It Work?

FairSense-AI collects text and image data from various sources and then uses LLMs and VLMs to detect subtle patterns of bias. It assigns a score based on the severity of the bias and offers recommendations for fairer, more inclusive content. Throughout the process, FairSense-AI incorporates energy-efficient optimization techniques to align responsible AI with sustainability goals, leveraging local resources and free tools such as Kiln.

A screenshot of the Fairsense-AI web platform interface with the Image Analysis tab active. The platform header and feature list are visible at the top, along with all seven navigation tabs. The Image Analysis workspace shows an uploaded image on the left depicting two human figures: one standing on a tall red platform with a ladder and another standing on a much lower block without support, visually representing unequal access to resources. On the right, an analysis output box highlighted in orange displays the text: 'The image highlights gender inequality, portraying a man on a higher platform with a ladder, symbolizing access to resources and opportunities, while the woman stands on a lower block without support, emphasizing the systemic disparities women often face.' The caption below the screenshot states that Fairsense-AI can analyze visual bias, highlighting systemic gender inequality in opportunities and resources.

Fairsense-AI can analyze visual bias, highlighting systemic gender inequality in opportunities and resources

Fairsense Framework

  • Data Preprocessing: collects and standardizes text and image data.
  • Model Analysis: uses LLMs/VLMs to detect content imbalances.
  • Bias Scoring: quantifies and highlights bias severity.
  • Recommendations: provides strategies for bias reduction.
  • Risk Identification: identifies AI risks for informed decisions.
  • Sustainability: optimizes processes for eco-conscious bias mitigation.
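The detect, score, recommend steps above can be sketched as a minimal pipeline. Note that all names here are illustrative, not the FairSense-AI API, and the keyword matcher is only a stand-in for the model-based analysis the real tool performs.

```python
from dataclasses import dataclass, field

# Illustrative stand-in for the LLM/VLM analysis stage: flag a few
# stereotype-laden phrases. The real tool uses model-based detection,
# not keyword matching.
BIAS_PATTERNS = {
    "not suitable for": "capability stereotype",
    "all women": "gender generalization",
    "all men": "gender generalization",
}

@dataclass
class BiasReport:
    text: str
    findings: list = field(default_factory=list)
    score: float = 0.0  # 0 = no bias detected, 1 = severe
    recommendations: list = field(default_factory=list)

def analyze(text: str) -> BiasReport:
    """Detect flagged phrases, score severity, and suggest fixes."""
    report = BiasReport(text=text)
    lowered = text.lower()
    for phrase, label in BIAS_PATTERNS.items():
        if phrase in lowered:
            report.findings.append((phrase, label))
    # Bias scoring: more findings -> higher severity, capped at 1.0
    report.score = min(1.0, 0.4 * len(report.findings))
    if report.findings:
        report.recommendations.append(
            "Rephrase to describe individuals, not groups.")
    return report

r = analyze("Some people say that women are not suitable for leadership roles.")
print(r.score, r.findings)
```

In the real system, the scoring and recommendation stages would also be produced by the underlying models rather than a fixed rule, but the flow of data through the framework is the same.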

The science behind Fairsense’s optimization lies in leveraging advanced techniques, including model pruning, mixed-precision training, and fine-tuning, to reduce model complexity while preserving performance. By selectively removing less critical parameters, switching to efficient numerical representations, and carefully refining pre-trained models, Fairsense significantly lowers computational demands and energy consumption. This streamlined approach not only maintains high accuracy in bias detection and risk identification, but also aligns with sustainability goals by minimizing the carbon footprint.
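The core idea behind magnitude pruning, zeroing out the least critical parameters, can be illustrated without any ML framework. In practice this is applied per layer with a library utility (for example, `torch.nn.utils.prune` in PyTorch); the sketch below only shows the principle, and the weight values are made up.

```python
# Minimal sketch of magnitude pruning: zero out the fraction of weights
# with the smallest absolute values. Real pruning operates per layer on
# tensors, often followed by fine-tuning to recover accuracy.

def prune_by_magnitude(weights: list[float], sparsity: float) -> list[float]:
    """Zero the `sparsity` fraction of weights with the smallest |value|."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Magnitude threshold below which weights are removed
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.01, -0.8, 0.03, 1.2, -0.02, 0.5]
print(prune_by_magnitude(w, 0.5))  # the 3 smallest-magnitude weights become 0.0
```

Zeroed weights can be skipped during inference (or stored sparsely), which is where the computational and energy savings come from; mixed-precision training complements this by storing the remaining weights in smaller numerical formats.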

Moving forward, Vector researchers hope to add an AI risk management component that can identify AI risks, such as disinformation, misinformation, or linguistic and visual bias, based on queries. This risk management framework, designed by Tahniat Khan, will draw on the MIT Risk Repository and the NIST Risk Management Framework, aligning with widely recognized best practices for effective AI risk management.

Conclusion

Technology can be both transformational and ethical: generative AI is a powerful tool, but it also introduces a new set of risks. FairSense-AI sets a new standard for responsible AI innovation by making bias detection and risk identification accessible to both technical and non-technical audiences while maintaining a focus on energy efficiency. It is possible to prioritize responsible AI practices that benefit society and the planet without sacrificing innovation. With solutions like this, we can harness AI’s potential while ensuring a more equitable and sustainable future for all.