Recent projects include: understanding and steering the behavior of deep networks through their individual neurons, and understanding the principles of modular architectures that facilitate data efficiency. Has published multiple papers on topics such as the generalization capabilities of deep learning models, modular architectures, and invariant neurons.
Research Experience
Currently a research scientist at MIT, leading a group dedicated to improving the transparency, reproducibility, and integrity of research, with a conscious commitment to equity and justice.
Education
Ph.D. from ETH Zurich (2014) in computer vision; completed a postdoc at the National University of Singapore (2015); then received training in machine learning and neuroscience as a postdoc at MIT in Sinha's lab and Poggio's lab, as well as at the NSF Center for Brains, Minds, and Machines.
Background
Research interests include the (neuro)science of deep learning, particularly addressing deep learning's lack of interpretability, data inefficiency, poor robustness, and limited generalization outside the training distribution. Studies deep learning from a neuroscientist's perspective by formulating and testing hypotheses.
Miscellany
Core values include improving the transparency, reproducibility, and integrity of scientific research.