A Truth Serum for Eliciting Self-Evaluations in Scientific Reviews

📅 2023-06-19
📈 Citations: 3
Influential: 1
🤖 AI Summary
In large academic conferences, overlapping authorship (authors and papers form many-to-many affiliations) introduces bias into peer review. Method: this paper proposes an incentive-compatible author self-assessment mechanism whose core innovation is using authors' ordinal rankings of their own co-authored papers as a credible signal. It establishes theoretically that "block partitioning + ordinal feedback" is necessary to incentivize truthful reporting, and designs a nearly linear-time greedy algorithm for efficient block construction. The approach integrates collaborative affiliation modeling, discrete optimization, isotonic regression under ordinal constraints, and game-theoretic equilibrium analysis. Results: honest reporting is rigorously proven to constitute a Nash equilibrium, and empirical evaluation on synthetic and real conference review data demonstrates significant improvements in score calibration accuracy, validating both the theoretical soundness and the practical efficacy of the mechanism.
📝 Abstract
This paper designs a simple, efficient and truthful mechanism to elicit self-evaluations about items jointly owned by owners. A key application of this mechanism is to improve the peer review of large scientific conferences where a paper often has multiple authors and many authors have multiple papers. Our mechanism is designed to generate an entirely new source of review data truthfully elicited from paper owners, and can be used to augment the traditional approach of eliciting review data only from peer reviewers. Our approach starts by partitioning all submissions of a conference into disjoint blocks, each of which shares a common set of co-authors. We then elicit the ranking of the submissions from each author and employ isotonic regression to produce adjusted review scores that align with both the reported ranking and the raw review scores. Under certain conditions, truth-telling by all authors is a Nash equilibrium for any valid partition of the overlapping ownership sets. We prove that to ensure truthfulness for such isotonic regression based mechanisms, partitioning the authors into blocks and eliciting only ranking information independently from each block is necessary. This leaves the optimization of the block partition as the only room for maximizing the estimation efficiency of our mechanism, which is a computationally intractable optimization problem in general. Fortunately, we develop a nearly linear-time greedy algorithm that provably finds a performant partition with appealing robust approximation guarantees. Extensive experiments on both synthetic data and real-world conference review data demonstrate the effectiveness of this owner-assisted calibration mechanism.
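The score-adjustment step in the abstract — isotonic regression that makes raw review scores consistent with an author's reported ranking — can be sketched with the classic pool-adjacent-violators (PAV) algorithm. This is an illustrative sketch only: the function name and the plain least-squares objective are assumptions, not the paper's exact formulation.

```python
def adjust_scores(raw_scores):
    """Pool Adjacent Violators (PAV) sketch: given raw review scores listed
    in the author's reported order (best paper first), return the
    non-increasing sequence closest to them in least squares.
    Illustrative only; the paper's mechanism may differ in detail."""
    blocks = []  # each block is [sum_of_scores, count]
    for s in raw_scores:
        blocks.append([s, 1])
        # Merge adjacent blocks while the non-increasing constraint is
        # violated, i.e. a later block's mean exceeds an earlier one's.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]:
            s2, c2 = blocks.pop()
            blocks[-1][0] += s2
            blocks[-1][1] += c2
    adjusted = []
    for total, count in blocks:
        adjusted.extend([total / count] * count)
    return adjusted
```

For example, if an author ranks a paper scored 5.0 above one scored 7.0, PAV pools the conflicting scores into a common average, so the adjusted scores respect the reported ranking while staying as close as possible to the raw reviews.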
Problem

Research questions and friction points this paper is trying to address.

Elicit truthful self-evaluations to improve noisy peer review scores
Design efficient mechanism for overlapping owner-item relationships
Optimize block partitioning to ensure truthful ranking and calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Partition owner-item relations into disjoint blocks
Use isotonic regression for score adjustment
Develop greedy algorithm for block partition
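The block-partition idea above — each block must share a common author who can rank all of its papers — can be illustrated with a simple greedy heuristic. This is a hypothetical sketch, not the paper's actual nearly linear-time algorithm or its objective: it just repeatedly assigns the most prolific remaining author's unassigned papers to a new block.

```python
def greedy_partition(paper_authors):
    """Hypothetical greedy sketch of block partitioning.
    `paper_authors` maps paper id -> set of author ids. Repeatedly pick
    the author owning the most still-unassigned papers and group those
    papers into one block, so every block has a common author who can
    rank it. Illustrative only; the paper's algorithm differs in detail."""
    unassigned = set(paper_authors)
    blocks = []
    while unassigned:
        # Count remaining unassigned papers per author.
        counts = {}
        for p in unassigned:
            for a in paper_authors[p]:
                counts[a] = counts.get(a, 0) + 1
        # Deterministic tie-break on the author id for reproducibility.
        best = max(counts, key=lambda a: (counts[a], str(a)))
        block = sorted(p for p in unassigned if best in paper_authors[p])
        blocks.append((best, block))
        unassigned -= set(block)
    return blocks
```

Larger blocks expose more ordinal comparisons per author, which is why the choice of partition governs how much the elicited rankings can sharpen the calibrated scores.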