🤖 AI Summary
To address the critical issue of severe class-wise accuracy imbalance in few-shot text classification with large language models (LLMs)—an imbalance that high overall accuracy can mask—this paper proposes a post-hoc debiasing method based on nonlinear integer programming. The method jointly optimizes class-level weights and sample-level membership assignments, marking the first application of integer programming to LLM post-processing. It establishes a unified class- and sample-level debiasing framework and provides theoretical justification that sample-level correction is indispensable for improving performance on underrepresented classes. Evaluated on seven general-purpose text classification benchmarks, the method significantly boosts the overall accuracy of Llama-2-13B while substantially enhancing class fairness. Moreover, it delivers substantial gains with Llama-2-70B on biomedical few-shot tasks.
📝 Abstract
Language models are strong few-shot learners and achieve good overall accuracy in text classification tasks, masking the fact that their results suffer from severe class-wise accuracy imbalance. We believe that gains in overall accuracy should come not from enriching the strong classes but from raising the weak ones. To address the imbalance, we propose a post-hoc debiasing method based on nonlinear integer programming that ensembles weight correction and membership correction, enabling flexible rectification of class probabilities at both the class and sample levels and enhancing the performance of LLMs directly from their outputs. Evaluations with Llama-2-13B on seven text classification benchmarks show that our approach achieves state-of-the-art overall accuracy gains with balanced class accuracies. The resulting probability correction scheme demonstrates that sample-level corrections are necessary to elevate weak classes. In addition, by effectively correcting weak classes, our method also brings significant performance gains to Llama-2-70B, especially on a biomedical domain task, demonstrating its effectiveness across both small and large model variants.
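To make the two correction levels concrete, the following is a minimal, hedged sketch of the general idea rather than the paper's actual formulation or solver: class-level weight correction rescales each class's probability by a searched weight, and sample-level membership correction allows a bounded number of individual predictions to be reassigned. Here a tiny brute-force search over integer-valued choices (a stand-in for the paper's nonlinear integer program) maximizes balanced per-class accuracy. All function names, the weight grid, and the toy data are illustrative assumptions.

```python
# Hedged sketch of post-hoc debiasing (not the paper's exact method):
# class-level weight correction + sample-level membership correction,
# scored by balanced (mean per-class) accuracy via brute-force search.
import itertools
import numpy as np

def balanced_accuracy(preds, labels, n_classes):
    """Mean of per-class accuracies: rewards elevating weak classes."""
    accs = []
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            accs.append((preds[mask] == c).mean())
    return float(np.mean(accs))

def debias(probs, labels, weight_grid=(0.5, 1.0, 2.0), max_flips=1):
    """Jointly search one weight per class (weight correction) and up to
    `max_flips` per-sample label overrides (membership correction)."""
    n, k = probs.shape
    best_score, best_preds = -1.0, None
    for w in itertools.product(weight_grid, repeat=k):
        # class-level correction: rescale probabilities, then re-decode
        base = np.argmax(probs * np.array(w), axis=1)
        # sample-level correction: (-1, -1) means "no flip"
        candidates = [(-1, -1)] + [(i, c) for i in range(n) for c in range(k)]
        for flips in itertools.combinations(candidates, max_flips):
            preds = base.copy()
            for i, c in flips:
                if i >= 0:
                    preds[i] = c
            score = balanced_accuracy(preds, labels, k)
            if score > best_score:
                best_score, best_preds = score, preds
    return best_score, best_preds

# Toy example: the raw model over-predicts class 0 (the "strong" class).
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4], [0.55, 0.45]])
labels = np.array([0, 0, 1, 1])
score, preds = debias(probs, labels)
```

On this toy input, plain argmax decoding predicts class 0 everywhere (balanced accuracy 0.5), while the searched corrections recover both weak-class samples. The real method replaces this exhaustive search with a nonlinear integer program that scales to realistic dataset sizes.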