The Simplest Neural Models, and a Hypothesis for Language in the Brain

December 7, 2023
9:30am EST
JCC 170
Speaker: Daniel Mitropolsky
Host: Vladimir Podolskii

Abstract:

How do neurons, in their collective action, beget cognition, intelligence, and reasoning? As Richard Axel recently put it, we do not have a logic for the transformation of neural activity into thought and action; he identified discerning this logic as the most important future direction of neuroscience. I will present a mathematical model of brain computation called NEMO, whose key ingredients are spiking neurons, random synapses and weights, local inhibition, and Hebbian plasticity (no backpropagation). Concepts are represented by interconnected, co-firing assemblies of neurons that emerge organically from the dynamics of the model's equations. We show that it is possible to carry out complex operations on these concept representations, such as copying, merging, completion from small subsets, and sequence memorization. I will show how NEMO can be used to implement an efficient parser of a small but non-trivial subset of English (leading to a surprising new characterization of context-free languages), as well as a more recent model of the language organ in the baby brain that learns the meanings of words, and basic syntax, from whole sentences with grounded input. We will also touch on lower bounds in the model and the idea of a fine-grained complexity theory of the brain.
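For readers unfamiliar with the model, a rough illustration of what the abstract's ingredients amount to: in an assembly-calculus-style reading of NEMO, a "projection" step fires one set of neurons into a target area with random synapses, keeps only the top-k recipients (local inhibition), and strengthens the synapses that just fired together (Hebbian plasticity). The sketch below follows that reading; the parameter names (n, k, p, beta), the stopping rule, and the loop structure are illustrative assumptions, not the speaker's actual implementation.

    # Minimal sketch of a NEMO-style projection, under assumed parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 1000, 32          # neurons per area; k winners kept by local inhibition
    p, beta = 0.05, 0.1      # synapse probability; Hebbian learning rate (assumed values)

    # Random sparse synapses (weight 1 where present).
    W_st = (rng.random((n, n)) < p).astype(float)   # stimulus area -> target area
    W_tt = (rng.random((n, n)) < p).astype(float)   # recurrent target -> target

    stimulus = rng.choice(n, size=k, replace=False)  # a fixed co-firing stimulus assembly
    winners = np.zeros(n, dtype=bool)                # target neurons that fired last round

    for _ in range(20):
        # Each target neuron sums feedforward input from the stimulus
        # plus recurrent input from last round's winners.
        total_input = W_st[stimulus].sum(axis=0) + W_tt[winners].sum(axis=0)
        # Local inhibition as a k-cap: only the k highest-input neurons fire.
        new_winners = np.zeros(n, dtype=bool)
        new_winners[np.argsort(total_input)[-k:]] = True
        # Hebbian plasticity: strengthen synapses from firing pre- to firing post-neurons.
        W_st[np.ix_(stimulus, np.where(new_winners)[0])] *= (1.0 + beta)
        W_tt[np.ix_(np.where(winners)[0], np.where(new_winners)[0])] *= (1.0 + beta)
        if np.array_equal(new_winners, winners):
            break                                    # winner set has stabilized into an assembly
        winners = new_winners

In such a sketch the winner set typically stabilizes after a few rounds; the resulting set of co-firing neurons plays the role of the concept assembly described in the abstract, and operations like copying, merging, and completion are built from repeated projections of this kind.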

Bio:

Daniel Mitropolsky is a PhD student at Columbia University, advised by Christos Papadimitriou and Tal Malkin. His main interest is the computational and mathematical theory of the brain, especially understanding the brain's algorithms behind language, and possible applications to AI. In complexity theory, he also works on the theory of total functions and their connection to cryptography. Before the PhD, Dan worked for Google, and completed his B.S. in Mathematics and in Computer Science at Yale.

FOOD WILL BE PROVIDED! Bagels, scones, coffee, and tea!