🤖 AI Summary
This study addresses systemic biases in peer review arising from authors’ demographic attributes—such as race and nationality—that disproportionately disadvantage scholars from underrepresented groups. To mitigate this, the authors propose Fair-PaperRec, a novel model that jointly optimizes academic quality and intersectional fairness through an end-to-end framework, eschewing heuristic approaches. The model employs a multilayer perceptron architecture coupled with a custom-designed fairness-aware loss function to enforce equitable treatment across intersecting identity dimensions. Experimental evaluations on submission data from SIGCHI, DIS, and IUI conferences demonstrate that Fair-PaperRec simultaneously improves overall utility by 3.16% and increases participation from disadvantaged groups by 42.03%, thereby substantiating that enhancing diversity need not compromise scholarly rigor.
📝 Abstract
Despite the widespread use of double-blind review, biases tied to authors' demographic attributes still disadvantage underrepresented groups. We present Fair-PaperRec, a MultiLayer Perceptron (MLP)-based model that addresses demographic disparities in post-review paper acceptance decisions while maintaining high quality standards. In contrast to heuristic approaches, our method penalizes demographic disparities across intersectional criteria (e.g., race, country) through a customized fairness loss while preserving quality. Evaluations on conference data from the ACM Special Interest Group on Computer-Human Interaction (SIGCHI), Designing Interactive Systems (DIS), and Intelligent User Interfaces (IUI) show a 42.03% increase in underrepresented-group participation and a 3.16% improvement in overall utility, demonstrating that promoting diversity need not compromise academic rigor and supporting equity-focused peer-review solutions.
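The joint objective described above — a quality loss combined with a penalty on disparities across intersectional demographic groups — can be sketched as follows. The paper's exact loss is not reproduced here, so the disparity measure (squared deviation of each group's mean score from the overall mean), the group encoding as (race, country) tuples, and the weight `lam` are all illustrative assumptions, not the authors' implementation.

```python
def fairness_penalty(scores, groups):
    """Demographic-parity-style disparity: sum of squared deviations of
    each intersectional group's mean predicted acceptance score from
    the overall mean. `groups[i]` is a hypothetical (race, country)
    tuple for the authors of paper i."""
    overall = sum(scores) / len(scores)
    by_group = {}
    for s, g in zip(scores, groups):
        by_group.setdefault(g, []).append(s)
    return sum((sum(v) / len(v) - overall) ** 2 for v in by_group.values())


def total_loss(quality_loss, scores, groups, lam=0.5):
    """Joint objective: review-quality loss plus a weighted disparity
    term, so quality and intersectional fairness are optimized
    end-to-end rather than via post-hoc heuristics. `lam` (assumed)
    trades off the two terms."""
    return quality_loss + lam * fairness_penalty(scores, groups)
```

For example, scores of [0.9, 0.8] for one (race, country) group and [0.2, 0.3] for another yield group means of 0.85 and 0.25 against an overall mean of 0.55, so the penalty is 0.09 + 0.09 = 0.18; in an MLP this term would be added to the task loss and minimized by gradient descent.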