1. Anti-stereotypical Predictive Text Suggestions Do Not Reliably Yield Anti-stereotypical Writing, with Hal Daumé III, 2024
2. The Impact of Explanations on Fairness in Human-AI Decision Making: Protected vs Proxy Features, with Navita Goyal, Tin Nguyen, Hal Daumé III, IUI 2024
3. Which Examples Should be Multiply Annotated? Active Learning When Annotators May Disagree, with Anna Sotnikova, Hal Daumé III, Findings of ACL 2023
4. Recognition of They/Them as Singular Personal Pronouns in Coreference Resolution, with Rachel Rudinger, NAACL 2022
Research Experience
Current research focuses on the tangible effects and harms of model biases on people, and on how to encourage fairer outcomes when making decisions with AI models.
Education
1. PhD in Computer Science, University of Maryland, College Park, Advisor: Prof. Hal Daumé III
2. Combined BS/MS in Computer Science (with a secondary major in Japanese Studies and a concentration in AI), Case Western Reserve University, Advisor: Prof. Soumya Ray
Background
Fourth-year Ph.D. student in the Department of Computer Science at the University of Maryland, College Park, advised by Prof. Hal Daumé III. Current research focuses on fairness in NLP, human-AI interaction, and their intersection. More broadly, interested in fairness, trust and reliance, and interpretability in AI systems.