Dissecting Submission Limit in Desk-Rejections: A Mathematical Analysis of Fairness in AI Conference Policies

📅 2025-02-02
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses the systemic inequity arising from top-tier AI conferences' hard cap on total submissions per author, a de facto rejection mechanism that disproportionately disadvantages early-career researchers and other underrepresented groups. The authors formally define the fairness dilemma under submission quotas and propose a dual-metric framework balancing individual fairness (shown to be NP-hard to optimize) and group fairness (efficiently solvable via linear programming). Through mathematical modeling, computational complexity analysis, and empirical validation on real-world data, including CVPR 2025 submission patterns, the proposed mechanism increases acceptance opportunities for junior scholars while preserving overall fairness and operational feasibility. To the authors' knowledge, this is the first socially just optimization framework for conference review policies that ensures both theoretical rigor and practical deployability.

📝 Abstract
As AI research surges in both impact and volume, conferences have imposed submission limits to maintain paper quality and alleviate organizational pressure. In this work, we examine the fairness of desk-rejection systems under submission limits and reveal that existing practices can result in substantial inequities. Specifically, we formally define the paper submission limit problem and identify a critical dilemma: when the number of authors exceeds three, it becomes impossible to reject papers solely based on excessive submissions without negatively impacting innocent authors. Thus, this issue may unfairly affect early-career researchers, as their submissions may be penalized due to co-authors with significantly higher submission counts, while senior researchers with numerous papers face minimal consequences. To address this, we propose an optimization-based fairness-aware desk-rejection mechanism and formally define two fairness metrics: individual fairness and group fairness. We prove that optimizing individual fairness is NP-hard, whereas group fairness can be efficiently optimized via linear programming. Through case studies, we demonstrate that our proposed system ensures greater equity than existing methods, including those used in CVPR 2025, offering a more socially just approach to managing excessive submissions in AI conferences.
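The abstract's key computational claim is that group fairness can be optimized efficiently via linear programming. Below is a minimal toy sketch of how such an LP could look, assuming a simplified objective (minimize the worst group's rejection rate, subject to per-author quota constraints); the paper's actual fairness metrics, variables, and data are more involved, and everything here (the paper set, the limit of 2, the group definitions) is an illustrative assumption, not the authors' formulation.

```python
# Toy LP sketch of a fairness-aware desk-rejection mechanism.
# Assumption: we minimize t, an upper bound on every group's rejection
# rate, while forcing over-quota authors to lose enough papers.
import numpy as np
from scipy.optimize import linprog

# Papers p0..p5 and their authors; "senior1" has 4 submissions against
# a hypothetical per-author limit of 2, so at least 2 must be rejected.
papers = {0: {"senior1"}, 1: {"senior1"}, 2: {"senior1", "junior1"},
          3: {"senior1"}, 4: {"junior1"}, 5: {"junior2"}}
limit = 2
senior_papers = [0, 1, 2, 3]   # papers with a senior author
junior_papers = [2, 4, 5]      # papers with a junior author

n = len(papers)                 # variables: rejection levels x_0..x_5, then t
c = np.zeros(n + 1)
c[-1] = 1.0                     # objective: minimize t

A_ub, b_ub = [], []
# Group-rate constraints: mean rejection level of each group <= t.
for group in (senior_papers, junior_papers):
    row = np.zeros(n + 1)
    for p in group:
        row[p] = 1.0 / len(group)
    row[-1] = -1.0
    A_ub.append(row)
    b_ub.append(0.0)
# Quota constraint: senior1 must lose at least 4 - limit = 2 papers,
# i.e. -(x0 + x1 + x2 + x3) <= -2.
row = np.zeros(n + 1)
for p in senior_papers:
    row[p] = -1.0
A_ub.append(row)
b_ub.append(-(4 - limit))

bounds = [(0, 1)] * (n + 1)     # relaxed 0/1 rejection decisions, t in [0, 1]
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
print(round(res.fun, 3))        # worst-group rejection rate -> 0.5
```

In this toy instance the LP concentrates rejections on the senior author's solo papers, so the junior co-author of p2 need not be penalized: the senior group's rate is driven to 2/4 = 0.5 by the quota, while the junior group's rate can stay at 0. This mirrors the abstract's point that a fairness-aware formulation can satisfy submission limits without punishing innocent co-authors.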
Problem

Research questions and friction points this paper is trying to address.

AI conference fairness
paper submission limits
impact on junior researchers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fair Decision-Making System
AI Conference Review
Fairness for Early-Career Researchers
Authors

Yuefan Cao
Zhejiang University

Xiaoyu Li
University of New South Wales

Yingyu Liang
The University of Hong Kong
machine learning

Zhizhou Sha
Tsinghua University
Generative Models

Zhenmei Shi
Senior Research Scientist at MongoDB + Voyage AI; PhD from University of Wisconsin–Madison
Deep Learning, Machine Learning, Artificial Intelligence

Zhao Song
The Simons Institute for the Theory of Computing at UC Berkeley

Jiahao Zhang
Independent Researcher