Roman Lutz

Google Scholar ID: joujH5UAAAAJ
Responsible AI Engineer at Microsoft
Responsible AI · AI Red Teaming
Citations & Impact (all-time)
  • Citations: 870
  • H-index: 8
  • i10-index: 7
  • Publications: 11
  • Co-authors: 0
Academic Achievements
  • January 2025: Published a whitepaper titled 'Lessons Learned from Red Teaming 100 Generative AI Products'
  • July 2024: PyRIT was a key part of Phi-3 Safety Post-Training
  • May 2024: Presented PyRIT at the Microsoft //Build conference
  • February 2024: Released PyRIT and expanded its capabilities for multimodal generative AI systems
  • October 2023: Published a paper on ArXiv titled 'A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications'
  • August 2023: Fairlearn paper published in the Journal of Machine Learning Research (Open Source Software section)
  • December 2021: Released the Responsible AI dashboard
Background
  • Responsible AI Engineer at Microsoft and open source maintainer, focusing on identifying safety and security vulnerabilities in generative AI systems.
Miscellany
  • Interests include maintaining open source projects and sharing technical knowledge
Co-authors
  • 0 total (list not available)