Three papers accepted at NeurIPS 2025 workshops: (1) Fine-Tuning Vision-Language Models for Multimodal Polymer Property Prediction; (2) CAGUL: Cross-Modal Attention Guided Unlearning in Vision-Language Models; (3) A Machine Learning Framework for Automated Computational Ethology Using Markerless Pose Estimation
Paper 'Soft Prompting for Unlearning in Large Language Models' accepted to the NAACL 2025 main conference (Jan 2025)
Co-authored paper accepted at IJCNN’24 (Mar 2024)
Paper on large vision-language models for medical imaging accepted at IEEE CHASE’24 (Feb 2024); awarded an NSF Student Travel Award
Paper 'Robust Influence-based Training Methods for Noisy Brain MRI' accepted as a full oral paper at PAKDD’24 (Jan 2024)
Paper 'HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks' accepted at ICDM 2023 (acceptance rate 9.37%)
Awarded the Reginald R. ‘Barney’ & Jameson A. Baxter Fellowship and the EECS Graduate Fellowship for 2024–2025
Serving as a reviewer for WWW’25, PAKDD’25, and IJCNN’25
Background
Currently a Postdoctoral Fellow in Computer Science at the University of Arkansas
Research interests include Machine Learning and Deep Learning
Focuses on improving the trustworthiness of AI/ML models, particularly the robustness, safety, privacy, and fairness of Large (Vision) Language Models
Explores resource-efficient approaches for adapting large models to new tasks
Develops AI foundation models for scientific domains such as biomedical engineering and materials science