Researcher in the field of responsible and trustworthy artificial intelligence (AI)
Main research interest lies in integrating symbolic (human-interpretable) knowledge with deep neural networks (DNNs)
Explores how to extract, verify, integrate, and correct such knowledge in DNNs
Favors explainable AI methods, particularly concept embedding analysis, to translate information encoded in DNN structures and latent spaces into human-understandable and controllable forms
Primary application domains include safe and trustworthy perception modules for automated driving and robotics, planning tasks, and generative-AI-based human-machine interfaces