Tensor Decompositions for Learning Latent Variable Models
In many applications, we face the challenge of modeling the interactions among multiple observations. A popular and successful approach in machine learning and AI is to hypothesize the existence of certain latent (or hidden) causes that help explain the correlations in the observed data. The (unsupervised) learning problem is to accurately estimate such a model given only samples of the observed variables. For example, in document modeling, we may wish to characterize the correlational structure of the "bag of words" in documents. Here, a standard model posits that each document is about a few topics (the hidden variables) and that each active topic determines the occurrence of words in the document. The learning problem is to estimate the topic probability vectors (i.e., to discover the strength with which words tend to appear under different topics) using only the observed words in the documents, not the hidden topics. In practice, a broad class of latent variable models is most often fit with either local search heuristics (such as the EM algorithm) or sampling-based approaches.
This talk will discuss a general and (computationally and statistically) efficient parameter estimation method for a wide class of latent variable models---including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation---by exploiting a certain tensor structure in their low-order observable moments. Specifically, parameter estimation is reduced to the problem of extracting a certain decomposition of a tensor derived from the (typically second- and third-order) moments; this particular tensor decomposition can be viewed as a natural generalization of the singular value decomposition for matrices.
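To give a concrete feel for the kind of decomposition involved, here is a minimal NumPy sketch (my own illustration, not the talk's actual algorithm) of a tensor power method with deflation. It assumes the tensor is symmetric and orthogonally decomposable, T = Σᵢ wᵢ vᵢ⊗vᵢ⊗vᵢ with orthonormal vᵢ, which is the idealized form the moment tensors take after a whitening step; the function and variable names are hypothetical.

```python
import numpy as np

def tensor_power_method(T, n_components, n_iter=100, seed=0):
    """Recover (weight, vector) pairs from a symmetric, orthogonally
    decomposable tensor T = sum_i w_i * v_i (x) v_i (x) v_i.
    A bare-bones sketch: no restarts or robustness analysis."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    pairs = []
    for _ in range(n_components):
        # Random unit initialization
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            # Power update: v <- T(I, v, v), then renormalize
            v = np.einsum('ijk,j,k->i', T, v, v)
            v /= np.linalg.norm(v)
        # Generalized eigenvalue: lam = T(v, v, v)
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)
        pairs.append((lam, v))
        # Deflate: subtract the recovered rank-1 component
        T = T - lam * np.einsum('i,j,k->ijk', v, v, v)
    return pairs

# Toy example with known components (two standard basis vectors)
w = np.array([0.6, 0.4])
V = np.eye(3)[:, :2]
T = sum(w[i] * np.einsum('i,j,k->ijk', V[:, i], V[:, i], V[:, i])
        for i in range(2))
pairs = tensor_power_method(T, n_components=2)
# The recovered weights approach 0.6 and 0.4, and the vectors
# approach the planted basis vectors.
```

Note the analogy to the matrix case: replacing the update with v ← Mv recovers the top singular vector of a symmetric matrix M, which is why this decomposition can be viewed as a generalization of the SVD.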
I am a senior research scientist at Microsoft Research, New England, a relatively new lab in Cambridge, MA. Previously, I was an associate professor in the Department of Statistics at the Wharton School, University of Pennsylvania (from 2010 to 2012), and before that an assistant professor at the Toyota Technological Institute at Chicago. Earlier, I was a postdoc in the Computer and Information Science department at the University of Pennsylvania under the supervision of Michael Kearns. I completed my PhD at the Gatsby Unit, where my advisor was Peter Dayan. Before Gatsby, I was an undergraduate at Caltech, where I did my BS in physics.