FairSense: Integrating Responsible AI and Sustainability

January 21, 2025


Authors: Shaina Raza, Mark Coatsworth, Tahniat Khan, and Marcelo Lotif

A new AI-driven platform extends bias detection to text and visual content while leveraging energy-efficient AI frameworks. Developed by Shaina Raza, an Applied ML Scientist in Responsible AI, and Vector’s AI Engineering team, FairSense-AI balances energy efficiency with bias mitigation.

With data centers accounting for up to 2% of global electricity usage, concerns about GenAI’s environmental sustainability are rising alongside existing challenges around bias and misinformation. FairSense-AI pairs energy-efficient AI frameworks with an AI-backed system for identifying bias in multi-modal settings and an AI-driven risk management tool, giving users a structured approach to identifying, assessing, and mitigating AI-related risks. A Python package allows programmers to easily integrate FairSense-AI into their software.

FairSense-AI analyzes text for bias, highlighting problematic terms and providing insights into stereotypes. The tool demonstrates how AI can promote fairness and equity in language analysis.

What Does It Do?

Building on UnBias, a previous bias neutralization tool developed by Vector, FairSense-AI identifies subtle patterns of prejudice, stereotyping, or favoritism to enhance fairness and inclusivity in digital content (text and images). Additionally, FairSense-AI leverages large language models (LLMs) and vision-language models (VLMs) that are optimized for energy efficiency, minimizing its environmental impact.

Optimization techniques reduced emissions to just 0.012 kg CO2, demonstrating that responsible AI practices can be both environmentally sound and cost-effective in training LLMs.

The tool’s reduced environmental impact can be seen by comparing the carbon emissions of Llama 3.2 1B (one of the foundation models integrated into it) before and after optimization and fine-tuning. Emissions were reduced from 107,000 kg to just 0.012 kg per hour of inference, showing that green AI goals can be achieved without compromising functionality or flexibility. The CodeCarbon software package was used to assess the environmental impact of code execution: it tracks electricity consumption during computation and converts it into carbon emissions, measured in kilograms of CO2, based on the geographical location of the processing.
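The conversion CodeCarbon performs can be sketched as a simple calculation: energy consumed (in kWh) multiplied by the carbon intensity of the local electricity grid (kg CO2 per kWh). The sketch below is illustrative only; the intensity figures are rough placeholder assumptions, not CodeCarbon's actual regional dataset.

```python
# Illustrative sketch of converting measured electricity use into carbon
# emissions, as CodeCarbon does. The grid intensity values below are
# rough placeholder assumptions, not CodeCarbon's actual dataset.

# Approximate grid carbon intensity in kg CO2 per kWh (assumed values).
GRID_INTENSITY_KG_PER_KWH = {
    "ontario_ca": 0.03,   # hydro/nuclear-heavy grid
    "us_average": 0.38,
}

def emissions_kg(energy_kwh: float, region: str) -> float:
    """Convert electricity consumption to carbon emissions for a region."""
    return energy_kwh * GRID_INTENSITY_KG_PER_KWH[region]

# An hour of inference drawing 0.4 kWh on a low-carbon grid:
print(round(emissions_kg(0.4, "ontario_ca"), 3))  # 0.012
```

The key design point is that the same computation produces very different emissions depending on where it runs, which is why CodeCarbon factors in geographical location.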

How Does It Work?

FairSense-AI collects text and image data from various sources and then uses LLMs and VLMs to detect subtle patterns of bias. It assigns a score based on the severity of the bias and offers recommendations for fairer, more inclusive content. Throughout the process, FairSense-AI incorporates energy-efficient optimization techniques to align responsible AI with sustainability goals, leveraging local resources and free tools such as Kiln.

FairSense-AI can analyze visual bias, highlighting systemic gender inequality in opportunities and resources.

FairSense Framework

  • Data Preprocessing: collects and standardizes text and image data.
  • Model Analysis: uses LLMs/VLMs to detect content imbalances.
  • Bias Scoring: quantifies and highlights bias severity.
  • Recommendations: provides strategies for bias reduction.
  • Risk Identification: identifies AI risks for informed decisions.
  • Sustainability: optimizes processes for eco-conscious bias mitigation.
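The first four steps of the framework can be sketched end to end. This is a minimal illustrative sketch, not FairSense-AI's actual implementation: the keyword-based scorer stands in for the LLM/VLM analysis, and all names, lexicon entries, and structures are assumptions.

```python
# Minimal sketch of the FairSense framework pipeline. The keyword scorer
# is an illustrative stand-in for the LLM/VLM analysis; all names and
# values here are assumptions, not the real implementation.

BIASED_TERMS = {"bossy": "assertive", "hysterical": "upset"}  # toy lexicon

def preprocess(text: str) -> list[str]:
    """Data preprocessing: standardize and tokenize the input."""
    return text.lower().split()

def analyze(tokens: list[str]) -> list[str]:
    """Model analysis: flag tokens associated with bias (toy stand-in)."""
    return [t for t in tokens if t in BIASED_TERMS]

def score(flagged: list[str], tokens: list[str]) -> float:
    """Bias scoring: fraction of tokens flagged, in [0, 1]."""
    return len(flagged) / max(len(tokens), 1)

def recommend(flagged: list[str]) -> dict[str, str]:
    """Recommendations: suggest neutral replacements."""
    return {t: BIASED_TERMS[t] for t in flagged}

def run(text: str) -> dict:
    tokens = preprocess(text)
    flagged = analyze(tokens)
    return {"score": score(flagged, tokens), "suggestions": recommend(flagged)}

report = run("She was bossy in the meeting")
print(report["suggestions"])  # {'bossy': 'assertive'}
```

In the real system, the analysis step is performed by optimized LLMs and VLMs rather than a fixed lexicon, but the flow from preprocessing through scoring to recommendations follows the same shape.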

The science behind FairSense-AI’s optimization lies in advanced techniques, including model pruning, mixed-precision training, and fine-tuning, that reduce model complexity while preserving performance. By selectively removing less critical parameters, switching to efficient numerical representations, and carefully refining pre-trained models, FairSense-AI significantly lowers computational demands and energy consumption. This streamlined approach maintains high accuracy in bias detection and risk identification while aligning with sustainability goals by minimizing the carbon footprint.
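Magnitude pruning, one instance of the model pruning mentioned above, can be illustrated in a few lines: the smallest-magnitude weights are treated as the least critical and zeroed out, shrinking the model's effective size and compute cost. This is a generic sketch of the technique, not FairSense-AI's actual pruning code.

```python
# Generic sketch of magnitude pruning: zero out the smallest-magnitude
# fraction of a weight matrix. Illustrative only; not FairSense-AI's code.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest `sparsity` fraction
    of entries (by absolute value) set to zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                 # number of weights to drop
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, sparsity=0.5)
print(f"zeroed fraction: {np.mean(pruned == 0):.2f}")
```

Mixed-precision training is complementary: it stores and computes with smaller numerical formats (e.g. 16-bit floats) where full 32-bit precision is unnecessary, cutting memory traffic and energy per operation.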

Moving forward, Vector researchers hope to add an AI risk management component that can identify AI risks, such as disinformation, misinformation, or linguistic and visual bias, based on queries. This risk management framework, designed by Tahniat Khan, will draw on the MIT Risk Repository and the NIST Risk Management Framework, aligning with widely recognized best practices for effective AI risk management.

Conclusion

Technology can be both transformational and ethical. Generative AI is a powerful tool, but it also introduces a new set of risks. FairSense-AI sets a new standard for responsible AI innovation by making bias detection and risk identification accessible to both technical and non-technical audiences while maintaining a focus on energy efficiency. It is possible to prioritize responsible AI practices that benefit society and the planet without sacrificing innovation. With solutions like this, we can harness AI’s potential while ensuring a more equitable and sustainable future for all.
