Research Highlights
Studied the interplay between deep generative models and the manifold hypothesis; showed that maximum-likelihood estimation lacks statistical consistency in this setting; elucidated why some generative models work better than others; explained why models sometimes assign higher likelihoods to out-of-distribution data than to their own training data; introduced a new exponential family of distributions with practical applications in deep learning; improved the FID metric for more reliable evaluation of image generative models
Research Experience
Senior Machine Learning Research Scientist at Layer 6 AI, conducting fundamental AI research and building and deploying machine learning models
Education
PhD in Statistics, Columbia University; BSc in Applied Mathematics, ITAM, Mexico City
Background
AI researcher focusing on deep learning, generative models, probabilistic methods, and manifold learning. Aims to bridge the gap between theory and practice by developing principled methods that are genuinely useful in the real world.