News
- Paper on Pareto Testing accepted to ICLR.
- Paper on Calibrated Selective Classification accepted to TMLR.
- New paper on Confident Adaptive Language Modeling accepted as an oral at NeurIPS 2022.
- New preprint on a generalized approach to Conformal Risk Control.
- Paper on Conformal Prediction Sets with Limited False Positives accepted to ICML and presented at the DFUQ workshop.
Research Experience
Research Scientist at Google DeepMind; previously a Research Engineer at Meta AI Research.
Education
Ph.D. in Electrical Engineering and Computer Science from MIT, advised by Professor Regina Barzilay; also worked closely with Professor Tommi Jaakkola.
Background
Research interests include developing methods for efficient and reliable machine learning: deploying existing models in open-domain settings, mitigating the negative consequences of their errors, and making predictions more efficiently at test time. A central focus is building rigorous tools for uncertainty estimation, so that deployed models can be used safely in realistic situations where mistakes are inevitable. A related interest is leveraging uncertainty estimates to make predictions more efficient.