Date: June 2, 2017
Speaker: Alexander Kell, MIT
Topic: Using deep learning to model and understand human auditory cortex
Abstract: A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to real-world sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized a hierarchical neural network for speech and music recognition. The resulting network contained separate music and speech pathways following several shared processing stages, potentially replicating human cortical organization. The network, which performed both tasks as well as humans, exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy: primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that real-world tasks substantially constrain neural processing and behavior, and thus task optimization may provide a powerful approach for modeling sensory systems.
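The layer-to-voxel comparison described in the abstract is typically done by fitting a regularized linear map from a network layer's unit activations to each voxel's responses and scoring predictions on held-out sounds. Below is a minimal sketch of that analysis with synthetic data; all dimensions, variable names, and the choice of ridge regression are illustrative assumptions, not details from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative sizes: sounds presented, units in one network layer, fMRI voxels
n_sounds, n_units, n_voxels = 100, 256, 10

# synthetic layer activations and voxel responses linked by a hidden linear map
X = rng.standard_normal((n_sounds, n_units))
W_true = 0.1 * rng.standard_normal((n_units, n_voxels))
Y = X @ W_true + 0.5 * rng.standard_normal((n_sounds, n_voxels))

# split sounds into train and held-out test sets
train, test = np.arange(80), np.arange(80, 100)

# ridge regression: W = (X'X + lam*I)^-1 X'Y, fit on training sounds only
lam = 10.0
Xtr = X[train]
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_units), Xtr.T @ Y[train])

# score each voxel by the correlation between predicted and actual
# responses on held-out sounds
pred = X[test] @ W
r = np.array([np.corrcoef(pred[:, v], Y[test][:, v])[0, 1]
              for v in range(n_voxels)])
print(round(r.mean(), 3))
```

In the analysis the abstract alludes to, this fit would be repeated for each network layer, and the layer giving the best held-out prediction for a voxel indicates where that voxel sits relative to the network's hierarchy.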