From Bias to Balance: Fairness-Aware Paper Recommendation for Equitable Peer Review

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses persistent bias in double-blind peer review, where papers from underrepresented demographic groups (e.g., by race or nationality) are systematically undervalued despite anonymization. To mitigate this inequity, the authors propose Fair-PaperRec, a post-review re-ranking framework that trains a multilayer perceptron with a differentiable fairness-aware loss conditioned on intersectional attributes, reordering submissions to improve equity while preserving overall quality. Evaluated on both synthetic data and real-world conference datasets (SIGCHI, DIS, IUI), Fair-PaperRec improves acceptance rates for underrepresented groups by up to 42.03% at a utility cost of no more than a 3.16% reduction in aggregate ranking quality, suggesting the fairness term also acts as a mild quality regularizer.
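The summary describes a loss that combines a utility term with a differentiable fairness penalty over group attributes, weighted by a fairness coefficient. The paper's exact loss is not given here, so the following is a minimal illustrative sketch: a squared-error utility term plus a squared mean-score gap between two groups, with `fairness_weight` standing in for the paper's fairness coefficient. All names and the specific penalty form are assumptions, not the authors' implementation.

```python
def combined_loss(scores, quality, groups, fairness_weight):
    """Hypothetical fairness-regularized objective.

    scores:  learned re-ranking scores, one per paper
    quality: review-based utility targets, one per paper
    groups:  0/1 group labels (1 = underrepresented group)
    fairness_weight: trade-off coefficient (lambda)
    """
    # Utility term: mean squared deviation of scores from review quality.
    utility = sum((s - q) ** 2 for s, q in zip(scores, quality)) / len(scores)

    # Fairness term: squared gap between the groups' mean scores.
    # Both terms are smooth in the scores, so the sum is differentiable
    # and could be minimized by gradient descent, as in an MLP re-ranker.
    g0 = [s for s, g in zip(scores, groups) if g == 0]
    g1 = [s for s, g in zip(scores, groups) if g == 1]
    gap = sum(g0) / len(g0) - sum(g1) / len(g1)

    return utility + fairness_weight * gap ** 2
```

With `fairness_weight = 0` the objective reduces to pure utility; increasing it pushes the optimizer to close the between-group score gap, which is the equity/quality trade-off the summary quantifies.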

📝 Abstract
Despite frequent double-blind review, systemic biases related to author demographics still disadvantage underrepresented groups. We start from a simple hypothesis: if a post-review recommender is trained with an explicit fairness regularizer, it should increase inclusion without degrading quality. To test this, we introduce Fair-PaperRec, a Multi-Layer Perceptron (MLP) with a differentiable fairness loss over intersectional attributes (e.g., race, country) that re-ranks papers after double-blind review. We first probe the hypothesis on synthetic datasets spanning high, moderate, and near-fair biases. Across multiple randomized runs, these controlled studies map where increasing the fairness weight strengthens macro/micro diversity while keeping utility approximately stable, demonstrating robustness and adaptability under varying disparity levels. We then carry the hypothesis into the original setting, conference data from ACM Special Interest Group on Computer-Human Interaction (SIGCHI), Designing Interactive Systems (DIS), and Intelligent User Interfaces (IUI). In this real-world scenario, an appropriately tuned configuration of Fair-PaperRec achieves up to a 42.03% increase in underrepresented-group participation with at most a 3.16% change in overall utility relative to the historical selection. Taken together, the synthetic-to-original progression shows that fairness regularization can act as both an equity mechanism and a mild quality regularizer, especially in highly biased regimes. By first analyzing the behavior of the fairness parameters under controlled conditions and then validating them on real submissions, Fair-PaperRec offers a practical, equity-focused framework for post-review paper selection that preserves, and in some settings can even enhance, measured scholarly quality.
Problem

Research questions and friction points this paper is trying to address.

bias
fairness
peer review
underrepresented groups
equity
Innovation

Methods, ideas, or system contributions that make the work stand out.

fairness-aware recommendation
intersectional fairness
paper recommender system
double-blind peer review
fairness regularization
Uttamasha Anjally Oyshi
Department of Electrical Engineering & Computer Science, University of Arkansas, Fayetteville, USA
Susan Gauch
Professor, Computer Science and Engineering, University of Arkansas
Information Retrieval, Ontologies, Personalization