Understanding contextual modulation of visual responses through mechanistic and normative models

Date: 2023-12-27


Time: 14:00-15:30 on Wednesday, January 10, 2024

Venue: E109, Biomedicine Hall

Speaker: Dr. Mario Dipoppa

Host: Dr. Xiaoxuan Jia

Title: Understanding contextual modulation of visual responses through mechanistic and normative models


Abstract:

The saliency of a stimulus is modulated by the spatiotemporal context in which it is presented, a phenomenon known as contextual modulation. Two outstanding questions are how contextual modulation emerges from the underlying neural circuits and what computational benefits it provides to the visual system. We addressed these questions through mechanistic and normative models.

The first part of the talk focuses on the neural underpinnings of contextual modulation, studied through a computational circuit model. Neural recordings in mouse primary visual cortex showed that contextual modulation has varying effects across different cell types. However, the intricacy of recurrent interactions in the circuit renders these observations insufficient, on their own, to provide a mechanistic explanation. We therefore developed a data-driven recurrent model incorporating the different cell types. Our model predicted that suppressing specific cell types would reduce contextual modulation in excitatory neurons, a prediction we validated experimentally. Finally, our model supported the hypothesis that contextual modulation is regulated by a disinhibitory circuit, which acts paradoxically on a subset of cell types.

The second part of the talk focuses on the computational advantages of adaptation, a temporal form of contextual modulation. To understand the impact of adaptation on the neural code, we investigated how the geometry of neural representations adapts to environments with different sensory statistics. Our experimental recordings showed better decoding of the adapted stimulus, even though the responses of neurons tuned to the adaptor decreased. We adopted an efficient-coding approach, training a neural network subject to biological constraints to represent stimuli under different statistics. The model's results were consistent with the experimental findings and suggested that adaptation allows the brain to efficiently enhance the representation of frequently presented stimuli.


Biography:

Mario Dipoppa is an Assistant Professor of computational neuroscience in the Department of Neurobiology at UCLA. Dr. Dipoppa seeks to understand the neural mechanisms underlying cortical brain functions. He obtained his Ph.D. at Pierre and Marie Curie University, where he developed neural circuit models of working memory under the guidance of Boris Gutkin. He then joined the laboratory of Kenneth Harris and Matteo Carandini at University College London as a postdoctoral researcher and was the recipient of a Marie Curie Fellowship. At UCL, Dr. Dipoppa combined large-scale neural recordings and computational models to dissect the mechanisms underlying visual processing in the mouse. He then served as an Associate Research Scientist at the Center for Theoretical Neuroscience at Columbia University, advised by Ken Miller. At Columbia, he combined deep learning with dynamical-systems methods to study fundamental properties of visual computations such as contextual modulation. Dr. Dipoppa’s theoretical and computational neuroscience laboratory at UCLA continues to investigate how neural networks and dynamics in the cerebral cortex give rise to neural computation. Despite the complexity of their operations, cortical circuits are stereotyped, a property that likely underlies common functions. To discover the governing principles of these canonical circuits, Dr. Dipoppa’s laboratory combines state-of-the-art approaches, including biologically realistic neural networks, deep and recurrent artificial neural networks, and encoding and decoding models.