
Institute of Neuroscience and Medicine


Talk by Prof. Bernd J. Kröger

Department of Phoniatrics, Pedaudiology and Communication Disorders, RWTH Aachen University, Germany

School of Computer Science and Technology, Tianjin University, China

27 Jun 2012, 14:30–15:30

Neurocomputational Model of Speech Production, Speech Perception, and Speech Acquisition

A modular computational model of speech processing has been developed that is capable of simulating the basic sensorimotor processes of speech processing, i.e. speech production and speech perception. The structure of the model can be divided into three unimodal neural state maps (motor plan, auditory, somatosensory), a hypermodal self-organizing neural map, and a bundle of synaptic connections between the self-organizing map and the state maps. The hypermodal self-organizing map associates sensory states (auditory and somatosensory) with motor plan states for the most frequent syllables of a trained target language. While this self-organizing map and its synaptic connections to the sensory and motor plan state maps can be interpreted as part of long-term memory, the neural state maps belong to working memory, because these syllabic sensory and motor states change during production or perception. Speech knowledge is fed into the model by simulating two basic phases of speech acquisition, i.e. babbling and imitation. This training adjusts the synaptic link weights between the self-organizing map and the state maps by means of Hebbian learning.
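The association scheme described in the abstract — unimodal state maps linked to a hypermodal self-organizing map via Hebbian-adjusted synaptic weights — can be sketched roughly as follows. This is a minimal illustration, not the actual model: all map sizes, the winner-take-all activation, and the learning rate are invented for the example, and a real self-organizing map would also update a neighborhood around the winner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes of the unimodal state maps and the hypermodal SOM
# (these numbers are assumptions, not details of the talk's model).
N_MOTOR, N_AUDITORY, N_SOM = 10, 8, 25  # SOM imagined as a 5x5 grid

# Synaptic link weights between the SOM and each state map ("long-term memory"),
# initialized to small random values.
W_motor = rng.random((N_SOM, N_MOTOR)) * 0.01
W_aud = rng.random((N_SOM, N_AUDITORY)) * 0.01

def winner_activity(W, state):
    """Activate the SOM neuron whose weight vector best matches the state
    (winner-take-all, a strong simplification of SOM neighborhood dynamics)."""
    dists = np.linalg.norm(W - state, axis=1)
    act = np.zeros(len(W))
    act[np.argmin(dists)] = 1.0
    return act

def hebbian_step(W, som_activity, state, eta=0.1):
    """One Hebbian update: strengthen links between co-active neurons,
    delta_w[i, j] = eta * som_activity[i] * state[j]."""
    return W + eta * np.outer(som_activity, state)

# One "training item": co-occurring motor-plan and auditory states for a
# syllable, as produced e.g. during babbling or imitation.
motor_state = rng.random(N_MOTOR)
aud_state = rng.random(N_AUDITORY)

# The SOM neuron responding to the auditory state gets its links to both
# the auditory and the motor-plan state strengthened, so sensory and motor
# representations of the syllable become associated.
som_act = winner_activity(W_aud, aud_state)
W_motor = hebbian_step(W_motor, som_act, motor_state)
W_aud = hebbian_step(W_aud, som_act, aud_state)
```

Repeating such updates over many syllable tokens would concentrate each frequent syllable's sensory and motor-plan representations on particular SOM neurons, which is the sense in which the self-organizing map and its link weights act as long-term memory.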

In its current version, the model is capable of storing sensory and motor plan representations for the most frequent syllables of a trained target language. Furthermore, the model offers insight into a basic speech perception phenomenon, i.e. categorical perception, and it underlines the importance of non-language-specific babbling training as a precursor to language-specific imitation training.