Blavatnik visiting fellow at the Computational and Biological Learning Lab, University of Cambridge
Publications
GitHub profile
Opinions
Curriculum Vitae
Contact email
I am an aspiring scientist in the field of computational neuroscience, interested in developing theories of the dynamics of learning and memory in the brain, with possible applications to machine learning. I have a strong mathematical background and hands-on experience with advanced methods from statistical physics and computer science, which I apply to analyze problems and experimental results in neuroscience and machine learning.
Neural networks that store multiple discrete attractors, such as the Hopfield model and its extensions, are canonical models of biological memory, but their dynamical stability could previously be guaranteed only under highly restrictive, non-biological conditions. In this project we derive a theory of the dynamical stability of memory patterns embedded in a recurrent neural network as fixed points. Surprisingly, we find a stability phase transition that differs from the classical critical capacity: all memory patterns are stable below a critical memory load, which depends on the statistics of the stored neural activity as well as on the single-neuron activation function. Our analysis highlights the computational benefits of threshold-linear activation and sparse activity patterns.
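To give a concrete feel for the classical setting (a minimal illustrative sketch, not this project's analysis or code), the following Python snippet stores random binary patterns in a standard Hopfield network via the Hebbian outer-product rule and measures how many pattern bits a single synchronous sign update flips. This single-step error rate is a simple proxy for fixed-point stability and grows with the memory load alpha = P/N; the network size and load values below are arbitrary choices for the demo.

    import numpy as np

    rng = np.random.default_rng(0)

    def hebbian_weights(patterns):
        # Outer-product (Hebbian) coupling matrix with zero self-coupling.
        num_neurons = patterns.shape[1]
        W = patterns.T @ patterns / num_neurons
        np.fill_diagonal(W, 0.0)
        return W

    def fraction_flipped(num_neurons, load, rng):
        # Fraction of pattern bits changed by one synchronous update.
        num_patterns = max(1, int(load * num_neurons))
        xi = rng.choice([-1.0, 1.0], size=(num_patterns, num_neurons))
        W = hebbian_weights(xi)
        # xi @ W gives the local field at every neuron for every pattern;
        # a stored pattern is a fixed point of the sign dynamics exactly
        # when the sign of its field matches the pattern everywhere.
        return float(np.mean(np.sign(xi @ W) != xi))

    for load in (0.05, 0.10, 0.14, 0.20):
        print(f"alpha = {load:.2f}: fraction of flipped bits = "
              f"{fraction_flipped(2000, load, rng):.4f}")

At low load almost no bits flip, while the flip rate rises steadily as alpha approaches and exceeds the classical capacity of roughly 0.138; the theory developed in this project concerns the sharper question of when all patterns remain exactly stable, in networks with graded (e.g. threshold-linear) rather than binary units.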