DOI: 10.1055/s-2004-832061
Representation of Person Identity Information in Auditory and Visual Cortex
Recent studies have revealed face-responsive visual areas and voice-responsive auditory areas in the human brain. Although these voice- and face-processing modules are anatomically segregated, voice and face information has been shown to interact at the behavioral level. How is person-specific multimodal information combined? Using functional magnetic resonance imaging, we have shown that recognizing familiar speakers' voices activates the fusiform face area (FFA) in normal subjects as well as in a subject with developmental prosopagnosia. Person recognition models suggest that such a cross-modal effect from auditory to visual association cortices should be accomplished via a supra-modal region, i.e., a region that responds both to familiar faces and to familiar speakers' voices. Functional connectivity analyses (psychophysiological interaction, PPI), however, revealed that the FFA activation by familiar speakers' voices results from a direct interaction between the voice- and face-processing modules. These findings will be discussed in the context of current multisensory person recognition models.