đ¤ AI Summary
Existing debiasing methods often induce model capability degradationâmanifesting as reduced factual accuracy, knowledge loss, or diminished output readabilityâposing a fundamental trade-off between capability and fairness, especially in small- and medium-scale models. This paper proposes a contrastive learningâbased alignment framework for large language models, the first to simultaneously suppress toxicity and enhance faithfulness. Methodologically, we introduce a dynamic loss scaling mechanism and explicit positiveânegative sample contrast to construct controllable, interpretable alignment objectives. Comprehensive evaluation across multiple model scales and diverse benchmarks demonstrates an average 38% reduction in toxicity and a 22% improvement in faithfulness. Critically, both core metrics exhibit consistent improvement across all benchmarksâa first-of-its-kind resultâeffectively eliminating the âalignment tax.â
đ Abstract
Current debiasing approaches often result a degradation in model capabilities such as factual accuracy and knowledge retention. Through systematic evaluation across multiple benchmarks, we demonstrate that existing debiasing methods face fundamental trade-offs, particularly in smaller models, leading to reduced truthfulness, knowledge loss, or unintelligible outputs. To address these limitations, we propose a contrastive learning framework that learns through carefully constructed positive and negative examples. Our approach introduces contrast computation and dynamic loss scaling to balance bias mitigation with faithfulness preservation. Experimental results across multiple model scales demonstrate that our method achieves substantial improvements in both toxicity reduction and faithfulness preservation. Most importantly, we show that our framework is the first to consistently improve both metrics simultaneously, avoiding the capability degradation characteristic of existing approaches. These results suggest that explicit modeling of both positive and negative examples through contrastive learning could be a promising direction for reducing the alignment tax in language model debiasing.