AI Safety Researcher | Applied Epistemologist
AI researcher with an MSc in computational neuroscience and a PhD on the interpretability and robustness of artificial neural networks. At Lightricks, I developed multiple generative models that have been used millions of times across different products. Since October 2024, I have dedicated myself to technical AI safety, applying my expertise in mechanistic interpretability and in the training and evaluation of transformer-based systems to mitigate catastrophic risks and help ensure beneficial AI development.
PhD | Hebrew University (Edmond & Lily Safra Center) | 2016 – 2021
Advisor: Prof. Yair Weiss
Thesis: Why do deep convolutional networks generalize so poorly to small image transformations? (JMLR, 2019; 700+ citations). My research focused on neural network robustness, generalization failures, and their connection to adversarial examples.
Relevant Coursework: Advanced theoretical and practical ML, Computational neuroscience, Reinforcement learning, Bayesian inference, Graph theory, and Philosophy of mind.
MSc, Computational Neuroscience | Hebrew University (Edmond & Lily Safra Center) | 2014 – 2016
BSc | Hebrew University | 2011 – 2014
Outside of my research, I play chess at a 2000 Elo rating and practice Jiu-Jitsu at purple-belt level, which I think of as physical chess. I enjoy baking sourdough bread and experimenting with fermentation in general. I love music, especially the classical violin repertoire and progressive rock and metal.
Email: aazuleye@gmail.com
Seeking full-time research roles in technical AI safety (interpretability, alignment, evaluations), based in Paris, the UK, or the US, or remote.