Currently a postdoctoral researcher at the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), Rutgers University, hosted by David Pennock and Lirong Xia.
Research focuses on designing theoretically robust evaluation metrics that incentivize high-effort human feedback, assess data quality, and help supervise AI systems.
Earlier work centered on peer prediction: designing scoring and reward mechanisms that elicit honest, high-effort information without access to ground truth.
Recent work examines how generative AI reshapes data collection, investigating how to detect and discourage LLM-generated or low-effort feedback, and how to safely use noisy or misaligned AI feedback in downstream decision-making.
One of the main organizers of the annual Workshop on Incentives in Academia (WINA).