Non-convex Optimization in Machine Learning: Provable Guarantees using Spectral Methods

May 2, 2016
noon - 1pm
Halligan 111A
Speaker: Majid Janzamin, University of California, Irvine
Host: Roni Khardon

Abstract

Optimization lies at the core of machine learning. However, most machine learning problems entail non-convex optimization. In this talk, I will show how spectral and tensor methods can yield guaranteed convergence to globally optimal solutions under transparent conditions for a range of machine learning problems.

In the first part, I will explain how tensor methods are useful for learning latent variable models in an unsupervised manner. The focus of my work is on the overcomplete regime, where the hidden dimension is larger than the observed dimension. I will describe how tensor methods enable us to learn these models in the overcomplete regime with theoretical guarantees on recovering the model parameters, and I will provide efficient sample complexity results for training these models. Next, I will describe a new method for training neural networks for which we provide theoretical guarantees on the performance of the algorithm. We have developed a computationally efficient algorithm for training a two-layer neural network using method-of-moments and tensor decomposition techniques.
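(As a rough illustration of the basic primitive underlying these tensor methods, the sketch below runs the tensor power method with deflation on a toy orthogonally decomposable symmetric third-order tensor. All names and the toy data are illustrative assumptions added for intuition; this is not the speaker's algorithm, and the overcomplete regime discussed in the talk requires substantially more refined techniques.)

    import numpy as np

    def tensor_apply(T, v):
        """Contract a symmetric 3rd-order tensor T with v along two modes: T(I, v, v)."""
        return np.einsum('ijk,j,k->i', T, v, v)

    def tensor_power_method(T, n_components, n_iter=100, seed=0):
        """Recover rank-1 components of an (approximately) orthogonally
        decomposable symmetric tensor via power iteration with deflation."""
        rng = np.random.default_rng(seed)
        d = T.shape[0]
        weights, vectors = [], []
        for _ in range(n_components):
            v = rng.standard_normal(d)
            v /= np.linalg.norm(v)
            for _ in range(n_iter):
                u = tensor_apply(T, v)
                v = u / np.linalg.norm(u)
            lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # eigenvalue = T(v, v, v)
            weights.append(lam)
            vectors.append(v)
            # Deflate: subtract the recovered rank-1 term before the next component.
            T = T - lam * np.einsum('i,j,k->ijk', v, v, v)
        return np.array(weights), np.array(vectors)

    # Toy example: a rank-3 symmetric tensor with orthonormal components.
    d, k = 8, 3
    A = np.linalg.qr(np.random.default_rng(1).standard_normal((d, k)))[0]  # orthonormal columns
    w = np.array([3.0, 2.0, 1.0])
    T = sum(w[i] * np.einsum('i,j,k->ijk', A[:, i], A[:, i], A[:, i]) for i in range(k))
    w_hat, V_hat = tensor_power_method(T, n_components=k)
    print(np.round(w_hat, 3))  # recovered weights (up to ordering and sign)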

Bio: Majid Janzamin is a PhD candidate in the Electrical Engineering and Computer Science Department at UC Irvine. He received his B.Sc. and M.Sc. in Electrical Engineering from Sharif University of Technology, Tehran, Iran, in 2007 and 2010, respectively. He has held several research visits and internships at Microsoft Research labs in New England and Silicon Valley. His research interests are in the areas of machine learning, data science, and statistics. In particular, he has worked on graphical models, latent variable models, and neural networks, and has developed techniques in convex and non-convex optimization as well as spectral and tensor methods.