DOI: 10.1055/s-2004-831977
Audition-Based Higher Cognitive Functions
The mission of the Max Planck Institute for Human Cognitive and Brain Sciences is to describe the neural basis of abilities central to human cognition, such as the ability to use language, to plan and execute complex actions, and to comprehend music. Here we present part of our research on human cognition, namely the neural basis of higher auditory functions such as language and music comprehension. Based on converging findings from functional brain imaging (fMRI) and from EEG and MEG recordings in healthy subjects and in patients with focal brain lesions, these processes can be described as follows. Spoken language comprehension requires the coordination of different subprocesses in time. After the initial acoustic analysis, the system has to extract segmental information, such as phonemes, syntactic elements and lexical-semantic elements, as well as suprasegmental information, such as accentuation and intonational phrases, i.e., prosody. According to the dynamic dual pathway model of auditory language comprehension, syntactic and semantic information is processed primarily in a left-hemispheric temporo-frontal pathway comprising separate circuits for syntactic and semantic information, whereas sentence-level prosody is processed in a right-hemispheric temporo-frontal pathway. The syntactic circuit involves the left superior temporal gyrus, the inferior frontal gyrus (pars opercularis) and the basal ganglia. The semantic circuit also recruits temporal and inferior frontal areas, which, however, are distinct from those subserving syntactic processes. The pathway for the processing of prosody appears to consist of partially homologous areas in the right hemisphere. The observed interaction between syntactic and prosodic information during auditory sentence comprehension is attributed to dynamic interactions between the two hemispheres, as can be demonstrated in patients with lesions of the corpus callosum. Interestingly, aspects of structure (syntax) and meaning (semantics) can also be identified in music. fMRI findings indicate that music processing recruits a neural network quite similar to that of language processing, with a slight dominance of the right hemisphere. While I will lay out the dual pathway model, focusing in particular on syntactic processes, Sonja Kotz will discuss the neural basis of semantic processes, Anja Ischebeck that of prosodic processes, and Stefan Kölsch the neural implementation of music comprehension.