Mirco Mutti
Google Scholar ID: GlLkJ9UAAAAJ
Technion
Machine Learning
Reinforcement Learning
Artificial Intelligence
Homepage
Google Scholar
Citations & Impact
All-time citations: 409
H-index: 10
i10-index: 10
Publications: 20
Co-authors: 29
Contact
Email: muttimirco@gmail.com
CV
Twitter
GitHub
Publications
Unsupervised Behavioral Compression: Learning Low-Dimensional Policy Manifolds through State-Occupancy Matching (2026) · Citations: 0
K-Myriad: Jump-starting reinforcement learning with unsupervised parallel agents (2026) · Citations: 0
Blindfolded Experts Generalize Better: Insights from Robotic Manipulation and Videogames (2025) · Citations: 0
From Parameters to Behavior: Unsupervised Compression of the Policy Space (2025) · Citations: 0
State Entropy Regularization for Robust Reinforcement Learning (2025) · Citations: 0
Enhancing Diversity in Parallel Agents: A Maximum State Entropy Exploration Story (2025) · Citations: 0
A Classification View on Meta Learning Bandits (2025) · Citations: 0
Towards Principled Multi-Agent Task Agnostic Exploration (2025) · Citations: 0
Resume
Academic Achievements
Paper 'Blindfolded experts generalize better' awarded Best Paper at EXAIT workshop, ICML 2025
Paper 'A theoretical framework for partially-observed reward states in RLHF' accepted at ICLR 2025
Preprint 'Reward compatibility: A framework for inverse RL' explores theoretical foundations of inverse RL
Paper 'How does inverse RL scale to large state spaces? A provably efficient approach' accepted at NeurIPS 2024
Paper 'The limits of pure exploration in POMDPs: When the observation entropy is enough' accepted at the RLC conference
Four papers accepted at ICML 2024 on meta RL, inverse RL, geometric active exploration, and pure exploration in POMDPs
Paper 'A tale of sampling and estimation in discounted reinforcement learning' accepted with oral presentation at AISTATS 2023
Paper 'The importance of non-Markovianity in maximum state entropy exploration' received Outstanding Paper Award at ICML 2022
Served as co-program chair for EWRL 2022 in Milan
Invited as a 'Rising star in AI' speaker at KAUST
Research Experience
Postdoctoral researcher at Technion - Israel Institute of Technology since September 2023
Working in the Robot Learning Lab with Aviv Tamar
Research theme: 'Reinforcement learning from theory to practice'
Gave a talk on 'Unsupervised reinforcement learning' at VANDAL lab in Turin
Presented on '(Non)convex reinforcement learning' at ETH Zurich's LAS group and AI Center
Toured northern Italy (MaLGa, IIT Genova, Bocconi, University of Verona) to present recent RL work
Education
PhD from Politecnico di Milano
Advised by Marcello Restelli
Member of the Artificial Intelligence and Robotics Lab
Successfully defended PhD thesis in March 2023
PhD thesis received an honorable mention for best AI thesis by AIxIA (Italian Association for Artificial Intelligence)
Background
Research interests center on reinforcement learning (RL)
Current research focuses on generalization and meta RL
Previous work emphasized unsupervised RL and learning without rewards
Aims to advance theoretical understanding to enable real-world applications of RL
Research topics include partial observability, RL with general utilities, RLHF, imitation learning, and inverse RL
Co-authors
29 total
Marcello Restelli (Full Professor, Politecnico di Milano)
Riccardo De Santi (ETH AI Center)
Alberto Maria Metelli (Assistant Professor, Politecnico di Milano)
Riccardo Zamboni (Politecnico di Milano)
Piersilvio De Bartolomeis (ETH Zürich)
Filippo Lazzati (Ph.D. Student, Politecnico di Milano)
Michael Bronstein (DeepMind Professor of AI, University of Oxford / Scientific Director, AITHYRA)