A biologically plausible model of speech perception
We describe a biologically inspired model of speech perception based on the principles of interactive processing and dynamical systems. Previous attempts to model the neurocognitive mechanisms underlying word processing have used connectionist approaches, but none has presented the spoken input to the model in real time. Such models therefore rely on the modeler's ingenuity to map the real-time stimulus onto the model's input, a mapping that may not preserve the processing that occurs at each time step. We present a neural field model that successfully replicates the effect of immediate auditory repetition of monosyllabic words and fits it to a component of the event-related potential (ERP), a well-studied measure of language processing. This represents a new modeling approach to studying neurocognitive processes, one based on the bottom-up interaction of real-time sensory information with higher-level categories of cognitive processing.
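For readers unfamiliar with the formalism, the sketch below illustrates the kind of dynamics a neural field model involves: a one-dimensional Amari-style field with Mexican-hat lateral interaction, driven by a localized input and integrated with forward Euler. This is a generic illustration of neural field dynamics, not the model described in the paper; all parameter values, the kernel shape, and the function names are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a 1-D Amari-style neural field (not the paper's model).
# tau * du/dt = -u + h + s(x, t) + sum_x' w(x - x') f(u(x'))
# All parameters below are assumed for demonstration only.

def mexican_hat(dx, a_exc=1.5, s_exc=2.0, a_inh=0.8, s_inh=6.0):
    """Difference-of-Gaussians lateral interaction kernel:
    short-range excitation, longer-range inhibition."""
    return (a_exc * np.exp(-dx**2 / (2 * s_exc**2))
            - a_inh * np.exp(-dx**2 / (2 * s_inh**2)))

def simulate(n=101, steps=400, dt=0.1, tau=10.0, h=-2.0):
    x = np.arange(n)
    u = np.full(n, h, dtype=float)            # field starts at resting level h
    w = mexican_hat(x[:, None] - x[None, :])  # pairwise interaction matrix
    # Localized "stimulus" centered on the field, on for the whole run.
    s = 4.0 * np.exp(-(x - n // 2)**2 / (2 * 3.0**2))
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))          # sigmoid output nonlinearity
        du = -u + h + s + w @ f
        u += (dt / tau) * du
    return u

u = simulate()
# An activation peak forms at the stimulus site while distant
# field sites remain below the resting threshold.
print(u.max(), u[0])
```

In a dynamic-field account of word processing, such a peak would stand in for a categorical decision (e.g., a recognized word) emerging continuously from time-varying sensory input, rather than being computed in a single discrete step.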