Nishpaksh: TEC Standard-Compliant Framework for Fairness Auditing and Certification of AI Models

πŸ“… 2026-01-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study addresses the misalignment between existing AI fairness evaluation tools and the national standards of India's Telecommunication Engineering Centre (TEC), a gap that hinders compliance auditing in high-risk applications. To bridge it, the work presents the first operational web-based auditing framework that implements TEC's AI assessment criteria. The framework integrates vectorized computation, reactive state management, survey-informed risk quantification, and standardized reporting to enable reproducible and auditable fairness evaluations. Validated on the COMPAS dataset, it identifies attribute-specific biases and generates TEC-compliant fairness scores alongside certification-ready reports, narrowing the implementation gap between academic fairness methodologies and localized AI governance requirements in the Indian regulatory context.

πŸ“ Abstract
The growing reliance on Artificial Intelligence (AI) models in high-stakes decision-making systems, particularly within emerging telecom and 6G applications, underscores the urgent need for transparent and standardized fairness assessment frameworks. While global toolkits such as IBM AI Fairness 360 and Microsoft Fairlearn have advanced bias detection, they often lack alignment with region-specific regulatory requirements and national priorities. To address this gap, we propose Nishpaksh, an indigenous fairness evaluation tool that operationalizes the Telecommunication Engineering Centre (TEC) Standard for the Evaluation and Rating of Artificial Intelligence Systems. Nishpaksh integrates survey-based risk quantification, contextual threshold determination, and quantitative fairness evaluation into a unified, web-based dashboard. The tool employs vectorized computation, reactive state management, and certification-ready reporting to enable reproducible, audit-grade assessments, thereby addressing a critical post-standardization implementation need. Experimental validation on the COMPAS dataset demonstrates Nishpaksh's effectiveness in identifying attribute-specific bias and generating standardized fairness scores compliant with the TEC framework. The system bridges the gap between research-oriented fairness methodologies and regulatory AI governance in India, marking a significant step toward responsible and auditable AI deployment within critical infrastructure like telecommunications.
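The paper does not publish its implementation, but the abstract's "vectorized computation" of attribute-specific bias can be illustrated with a minimal sketch. The example below computes a demographic-parity gap per protected attribute on synthetic COMPAS-like data using NumPy; all function names, the 0.1 threshold, and the data are illustrative assumptions, not taken from Nishpaksh or the TEC standard.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate across groups.

    y_pred : 1-D array of 0/1 predictions
    group  : 1-D array of group labels (same length)
    """
    rates = np.array([y_pred[group == g].mean() for g in np.unique(group)])
    return rates.max() - rates.min()

# Synthetic COMPAS-like data: 1 = predicted high recidivism risk.
# Group "A" is deliberately given a higher positive rate to create bias.
rng = np.random.default_rng(0)
race = rng.choice(["A", "B"], size=1000)
y_pred = (rng.random(1000) < np.where(race == "A", 0.45, 0.30)).astype(int)

gap = demographic_parity_gap(y_pred, race)
# A TEC-style audit would compare the gap to a context-specific threshold;
# 0.1 here is an arbitrary placeholder, not a value from the standard.
compliant = bool(gap <= 0.1)
```

In a full auditing pipeline this per-attribute gap would be computed for each protected attribute (race, sex, age band) and aggregated, together with the survey-based risk weighting the abstract describes, into a standardized fairness score.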
Problem

Research questions and friction points this paper is trying to address.

AI fairness
regulatory compliance
TEC standard
bias auditing
telecom AI

Innovation

Methods, ideas, or system contributions that make the work stand out.

TEC Standard
Fairness Auditing
Vectorized Computation
Regulatory AI Governance
Certification-Ready Reporting
πŸ”Ž Similar Papers
No similar papers found.
S
Shashank Prakash
IIIT Delhi, New Delhi
Ranjitha Prasad
IIIT Delhi
Federated Learning, Bayesian Inference, Statistical Estimation, Causal Inference
Avinash Agarwal
DoT, Govt. of India, New Delhi