Proposed a modified maximum likelihood estimator for the Bradley-Terry model that improves fairness without sacrificing accuracy
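The modified estimator itself is not reproduced here; as context, a minimal sketch of the standard Bradley-Terry MLE that it builds on, fit via Zermelo's classic minorize-maximize updates (the function name and data layout are illustrative assumptions, not from the original work):

```python
def bradley_terry_mle(outcomes, n_items, iters=200):
    """Fit standard Bradley-Terry strengths w_i from pairwise outcomes.

    Each record (i, j) in `outcomes` means item i beat item j.
    Uses the MM update w_i <- W_i / sum_{j != i} n_ij / (w_i + w_j),
    where W_i is i's total wins and n_ij the number of i-vs-j comparisons.
    """
    w = [1.0] * n_items
    W = [0] * n_items                               # total wins per item
    n = [[0] * n_items for _ in range(n_items)]     # pairwise comparison counts
    for i, j in outcomes:
        W[i] += 1
        n[i][j] += 1
        n[j][i] += 1
    for _ in range(iters):
        new = []
        for i in range(n_items):
            denom = sum(n[i][j] / (w[i] + w[j])
                        for j in range(n_items) if j != i)
            new.append(W[i] / denom if denom > 0 else w[i])
        s = sum(new)
        w = [x * n_items / s for x in new]          # normalize (scores are scale-invariant)
    return w
```

For example, with three items where item 0 wins most of its games, item 1 some, and item 2 none, the fitted strengths come out in that order.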
Formulated a knapsack-style combinatorial optimization problem for hiring and admissions, with theoretical guarantees balancing candidate quality against uncertainty
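The guarantee-bearing formulation that trades off quality against uncertainty is specific to the original work; as a sketch of the underlying combinatorial structure only, here is the classic 0/1 knapsack dynamic program, with values read as candidate quality scores and weights as resource costs (an illustrative analogy, not the paper's objective):

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack DP.

    dp[c] holds the best total value achievable with capacity c.
    Iterating capacities in reverse ensures each item is used at most once.
    """
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        for c in range(capacity, wt - 1, -1):
            dp[c] = max(dp[c], dp[c - wt] + v)
    return dp[capacity]


# With quality scores [60, 100, 120], costs [1, 2, 3], and budget 5,
# the optimum selects the last two items for a total value of 220.
best = knapsack([60, 100, 120], [1, 2, 3], 5)
```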
Designed algorithms robust to evaluator miscalibration (e.g., leniency or strictness), enhancing fairness in evaluations
Developed a data-dependent estimator that reduces bias in teaching evaluations arising from students’ own grade outcomes
Built a sequential evaluation model for settings such as competitions and hiring, with a novel ranking estimator, supported by theory and crowdsourcing experiments