Published multiple papers, including 'Flat Channels to Infinity in Neural Loss Landscapes' (accepted at NeurIPS), and presented research at academic conferences, including posters on RNN solution degeneracy and on toy models of identifiability for neuroscience at the Bernstein Conference.
Research Experience
Conducts research at the EPFL Laboratory of Computational Neuroscience on reverse-engineering network parameters, the learning difficulty of different weight structures, and the manipulation and interpretation of network models.
Education
PhD: EPFL, supervised by Wulfram Gerstner and Johanni Brea.
Background
PhD student at the Laboratory of Computational Neuroscience at EPFL, focusing on understanding weight structures in neural networks, with research interests in identifiability, trainability, and interpretability.
Miscellany
Attended the MIT Brain, Minds and Machines summer school; personal interests include photography.