🤖 AI Summary
This study addresses the misalignment between existing AI fairness evaluation tools and the national standards of India's Telecommunication Engineering Centre (TEC), a gap that hinders compliance auditing in high-risk applications. To bridge it, the work presents the first operational web-based auditing framework that implements TEC's AI assessment criteria. The framework integrates vectorized computation, reactive state management, survey-informed risk quantification, and standardized reporting to enable reproducible and auditable fairness evaluations. Validated on the COMPAS dataset, it effectively identifies attribute-specific biases and generates TEC-compliant fairness scores alongside certification-ready reports. This approach significantly narrows the implementation gap between academic fairness methodologies and localized AI governance requirements in the Indian regulatory context.
📄 Abstract
The growing reliance on Artificial Intelligence (AI) models in high-stakes decision-making systems, particularly within emerging telecom and 6G applications, underscores the urgent need for transparent and standardized fairness assessment frameworks. While global toolkits such as IBM AI Fairness 360 and Microsoft Fairlearn have advanced bias detection, they often lack alignment with region-specific regulatory requirements and national priorities. To address this gap, we propose Nishpaksh, an indigenous fairness evaluation tool that operationalizes the Telecommunication Engineering Centre (TEC) Standard for the Evaluation and Rating of Artificial Intelligence Systems. Nishpaksh integrates survey-based risk quantification, contextual threshold determination, and quantitative fairness evaluation into a unified, web-based dashboard. The tool employs vectorized computation, reactive state management, and certification-ready reporting to enable reproducible, audit-grade assessments, thereby addressing a critical post-standardization implementation need. Experimental validation on the COMPAS dataset demonstrates Nishpaksh's effectiveness in identifying attribute-specific bias and generating standardized fairness scores compliant with the TEC framework. The system bridges the gap between research-oriented fairness methodologies and regulatory AI governance in India, marking a significant step toward responsible and auditable AI deployment within critical infrastructure like telecommunications.
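To make the notion of "attribute-specific bias" concrete, the following sketch shows one standard fairness metric that audits of COMPAS-style data commonly compute: the disparate-impact ratio between an unprivileged and a privileged group, implemented with vectorized NumPy operations. This is an illustrative assumption, not Nishpaksh's actual code; the function name, the synthetic data, and the 0.8 ("four-fifths rule") threshold are conventions from the fairness literature rather than details taken from the TEC standard.

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Disparate-impact ratio:
    P(favorable outcome | unprivileged) / P(favorable outcome | privileged).

    y_pred: binary predictions (1 = favorable outcome)
    group:  binary protected attribute (1 = privileged group)
    Auditing practice often flags ratios below 0.8 (the "four-fifths rule").
    """
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    rate_unprivileged = y_pred[group == 0].mean()  # favorable rate, group 0
    rate_privileged = y_pred[group == 1].mean()    # favorable rate, group 1
    return rate_unprivileged / rate_privileged

# Synthetic example: the privileged group receives favorable outcomes
# at rate ~0.6, the unprivileged group at ~0.4.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)
y_pred = (rng.random(10_000) < np.where(group == 1, 0.6, 0.4)).astype(int)
print(disparate_impact(y_pred, group))  # well below the 0.8 threshold
```

A dashboard like the one described would compute such metrics per protected attribute (e.g. race, sex, age in COMPAS) and map the results onto the standard's scoring scale.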