🤖 AI Summary
This paper addresses the systemic inequity arising from top-tier AI conferences' hard caps on total submissions per author, a policy that acts as a de facto rejection mechanism and disproportionately disadvantages early-career researchers and other underrepresented groups. We formally define the fairness dilemma under submission quotas and propose a dual-metric framework balancing individual fairness (NP-hard to optimize) and group fairness (efficiently solvable via linear programming). Through mathematical modeling, computational complexity analysis, and case studies on real-world data, including CVPR 2025 submission patterns, our mechanism increases acceptance opportunities for junior scholars while preserving overall fairness and operational feasibility. To our knowledge, this is the first socially just optimization framework for conference review policies that ensures both theoretical rigor and practical deployability.
📝 Abstract
As AI research surges in both impact and volume, conferences have imposed submission limits to maintain paper quality and alleviate organizational pressure. In this work, we examine the fairness of desk-rejection systems under submission limits and reveal that existing practices can result in substantial inequities. Specifically, we formally define the paper submission limit problem and identify a critical dilemma: when the number of authors exceeds three, it becomes impossible to reject papers based solely on excessive submissions without negatively impacting innocent authors. This issue can unfairly affect early-career researchers, whose submissions may be penalized because of co-authors with far higher submission counts, while senior researchers with numerous papers face minimal consequences. To address this, we propose an optimization-based, fairness-aware desk-rejection mechanism and formally define two fairness metrics: individual fairness and group fairness. We prove that optimizing individual fairness is NP-hard, whereas group fairness can be optimized efficiently via linear programming. Through case studies, we demonstrate that our proposed system ensures greater equity than existing methods, including the one used at CVPR 2025, offering a more socially just approach to managing excessive submissions at AI conferences.
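To make the linear-programming claim concrete, here is a minimal sketch of how a fairness-aware desk-rejection rule could be posed as an LP. This is a hypothetical illustration, not the paper's actual formulation: the submission limit `L`, the toy author/paper data, and the objective (prefer rejecting papers whose authors are all over quota, rather than papers with "innocent" junior co-authors) are all assumptions for the example.

```python
# Hypothetical sketch (NOT the paper's exact formulation): a tiny LP that
# desk-rejects just enough papers to enforce a per-author submission limit,
# while avoiding, when possible, papers co-authored by junior researchers.
from scipy.optimize import linprog

L = 2  # assumed per-author submission limit
# Papers as author lists: "S" is an over-quota senior; "J1"/"J2" are juniors.
papers = [["S"], ["S"], ["S", "J1"], ["S", "J2"]]
juniors = {"J1", "J2"}
n = len(papers)

# x_p in [0, 1] is the (relaxed) rejection decision for paper p.
# Objective: minimize rejections of papers with at least one junior co-author.
c = [1.0 if juniors & set(p) else 0.0 for p in papers]

# For each author a with d_a papers, keep at most L of them:
#   sum_{p containing a} (1 - x_p) <= L   <=>   -sum_{p containing a} x_p <= L - d_a
authors = sorted({a for p in papers for a in p})
A_ub, b_ub = [], []
for a in authors:
    A_ub.append([-1.0 if a in p else 0.0 for p in papers])
    b_ub.append(L - sum(a in p for p in papers))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
print([round(x, 2) for x in res.x])  # per-paper rejection decisions
```

In this toy instance the senior author exceeds the limit by two papers, and the LP rejects the two solo-authored papers while leaving both junior-coauthored papers untouched, which is the qualitative behavior the abstract attributes to the group-fairness objective.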