Information, Geometry, and Physics Seminar
How do brains represent the world? An emerging body of findings in neuroscience points to a new paradigm for understanding the neural code. Across sensory and motor regions of the brain, neural structures are found to mirror the geometric structure of the world states they represent—either in their explicit anatomical arrangement or in the low-dimensional manifolds traversed by their dynamics. This phenomenon can be observed in the ring of neurons representing heading direction in the fly, in place cells and grid cells representing spatial position in the rat, and in the human semicircular canals representing changes in 3D orientation, among other examples. Such findings suggest that brains across species and sensory modalities have evolved a general computational strategy that leverages geometry, Bayesian inference, and dynamical systems to represent the structure of the world.
Can these geometric ideas be extended beyond representations of low-dimensional spaces, such as self-position and orientation, to complex spaces such as visual scenes? Indeed, a basic understanding of visual representations in the primate visual cortical hierarchy remains elusive beyond V1. Likewise, a longstanding goal in unsupervised representation learning has been the discovery of the low-dimensional, independent factors of variation latent in visual data. The goal of this talk is to introduce the emerging geometric paradigm in neuroscience and to explore its implications for unsupervised learning of visual scene representations in brains and machines.