FairSense-AI: Responsible AI Meets Sustainability

📅 2025-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the dual challenges of algorithmic bias and environmental unsustainability in AI systems. We propose FairSense-AI—the first multimodal AI framework unifying fairness governance and green computing. It synergistically integrates large language models (LLMs) and vision-language models (VLMs) to enable fine-grained, interpretable bias detection and scoring across text and image modalities. The framework incorporates an AI risk assessment module aligned with MIT/NIST standards and enhances energy efficiency via model pruning and mixed-precision computation. Its key innovation lies in unifying algorithmic fairness interventions—including automated bias mitigation recommendations—with carbon-aware computing within a single governance architecture. Empirical evaluation demonstrates significant improvements: cross-modal bias identification accuracy increases notably, inference energy consumption drops by up to 42%, and real-world deployments validate both deployability and intervention efficacy across diverse application scenarios.

📝 Abstract
In this paper, we introduce FairSense-AI: a multimodal framework designed to detect and mitigate bias in both text and images. By leveraging Large Language Models (LLMs) and Vision-Language Models (VLMs), FairSense-AI uncovers subtle forms of prejudice or stereotyping that can appear in content, providing users with bias scores, explanatory highlights, and automated recommendations for fairness enhancements. In addition, FairSense-AI integrates an AI risk assessment component that aligns with frameworks like the MIT AI Risk Repository and NIST AI Risk Management Framework, enabling structured identification of ethical and safety concerns. The platform is optimized for energy efficiency via techniques such as model pruning and mixed-precision computation, thereby reducing its environmental footprint. Through a series of case studies and applications, we demonstrate how FairSense-AI promotes responsible AI use by addressing both the social dimension of fairness and the pressing need for sustainability in large-scale AI deployments. https://vectorinstitute.github.io/FairSense-AI, https://pypi.org/project/fair-sense-ai/

Keywords: Sustainability, Responsible AI, Large Language Models, Vision Language Models, Ethical AI, Green AI
Problem

Research questions and friction points this paper is trying to address.

Detecting and mitigating bias in both text and image content
Aligning AI systems with risk assessment frameworks to surface ethical and safety concerns
Reducing the environmental footprint of AI through energy-efficient computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal framework detects and mitigates bias
Integrates AI risk assessment for ethical concerns
Optimizes energy efficiency via model pruning
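The two efficiency techniques named above, model pruning and mixed-precision computation, can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration of the general techniques, not the FairSense-AI implementation or its API: `magnitude_prune` zeroes the smallest-magnitude fraction of a weight matrix (unstructured magnitude pruning), and `mixed_precision_matmul` performs the multiply in float16 while returning float32.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def mixed_precision_matmul(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Multiply in half precision, cast the result back to float32."""
    y = x.astype(np.float16) @ w.astype(np.float16)
    return y.astype(np.float32)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.5)

x = rng.normal(size=(8, 64)).astype(np.float32)
y = mixed_precision_matmul(x, pruned)
print(np.mean(pruned == 0))  # fraction of weights zeroed, ≈ 0.5
```

In practice, frameworks such as PyTorch provide these operations directly (e.g. `torch.nn.utils.prune` and `torch.autocast`); the sketch only makes the arithmetic behind the energy savings concrete: fewer nonzero weights mean fewer effective multiply-accumulates, and half-precision arithmetic roughly halves memory traffic.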
Shaina Raza
Vector Institute
M. S. Chettiar
Vector Institute
Matin Yousefabadi
Vector Institute
Tahniat Khan
Vector Institute
Marcelo Lotif
Senior Software Developer, Vector Institute
Machine Learning · Artificial Intelligence