Publications
- A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation, NeurIPS (2023)
- Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization, NeurIPS (2023)
- CRAFT: Concept Recursive Activation Factorization for Explainability, CVPR (2023)
- Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis, CVPR (2023)
- What I cannot predict, I do not understand: A human-centered evaluation framework for explainability methods, NeurIPS (2022)
Research Experience
Building next-generation robots at Hugging Face. Previously a research scientist at Tesla, working on Autopilot and Optimus.
Education
- PhD: Sorbonne
- Postdoctoral studies: Brown University
Background
- Scientific interest: Understanding the underlying mechanisms of intelligence
- Research focus: Learning human behaviors with neural networks
- Working on: Novel architectures, learning approaches, theoretical frameworks, and explainability methods
Miscellany
- Personal interests: Contributing to open-source projects and reading about neuroscience