# PhD Defense: Nonparametric Bayesian Mixed-effects Models for Multi-task Learning

## Abstract

In many real-world problems we are interested in learning multiple tasks when the training set for each task is quite small. When the tasks are related, one can learn them simultaneously and aim for improved predictive performance by exploiting their common aspects. This general idea, known as multi-task learning, has been successfully investigated in several technical settings and applied in many areas.

In this thesis we explore a Bayesian realization of this idea using Gaussian processes (GPs), where sharing the prior and its parameters among the tasks can be seen to implement multi-task learning. Our focus is on the functional mixed-effects model. More specifically, we propose a family of novel nonparametric Bayesian models, Grouped mixed-effects GP models, where each individual task is given by a fixed effect, drawn from one of a set of unknown groups, plus a random individual effect function that captures variation among individuals. The proposed models provide a unified algorithmic framework for time series prediction, clustering, and classification.
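The generative structure of the model can be sketched as follows: each task's curve is a group-level fixed-effect function plus a task-specific random-effect function, both drawn from GPs. This is a minimal illustration only; the kernel choices and hyperparameters below are assumptions for the sketch, not the thesis's exact specification.

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x[:, None] - y[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_grouped_mixed_effects(t, n_groups=2, n_tasks=6, noise=0.05, seed=0):
    """Sample task curves as group fixed effect + individual random effect.

    Illustrative hyperparameters: the group-level GP is smoother and has
    larger variance than the individual-level GP.
    """
    rng = np.random.default_rng(seed)
    jitter = 1e-8 * np.eye(len(t))
    K_fixed = rbf_kernel(t, t, lengthscale=0.5)                # group-level GP
    K_rand = rbf_kernel(t, t, lengthscale=0.2, variance=0.1)   # individual-level GP
    # One fixed-effect curve per group, sampled from the group-level GP.
    fixed = rng.multivariate_normal(np.zeros(len(t)), K_fixed + jitter, size=n_groups)
    z = rng.integers(n_groups, size=n_tasks)                   # latent group assignment
    tasks = []
    for i in range(n_tasks):
        g = rng.multivariate_normal(np.zeros(len(t)), K_rand + jitter)
        tasks.append(fixed[z[i]] + g + noise * rng.standard_normal(len(t)))
    return np.array(tasks), z
```

Tasks assigned to the same group share the same fixed-effect curve, which is what couples the tasks and makes joint learning beneficial.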

We propose a shift-invariant version of the Grouped mixed-effects GP to cope with the periodic time series that arise in astrophysics when working with data from periodic variable stars. We develop an efficient EM algorithm to learn the parameters of the model; as a special case, we obtain a Gaussian mixture model and EM algorithm for phase-shifted periodic time series. Furthermore, we extend the proposed model with a Dirichlet process prior, leading to an infinite mixture model. A variational Bayesian approach is developed for inference in this model, yielding an efficient model-selection algorithm that automatically chooses an appropriate model order for the data.
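The key shift-invariance step can be illustrated in isolation: given a group template, find the circular phase shift that best aligns an observed series to it. This is a simplified stand-in for the corresponding EM update, assuming evenly sampled phases; the function name and least-squares criterion are choices made for this sketch.

```python
import numpy as np

def best_phase_shift(y, template):
    """Pick the circular shift of `template` that best explains `y`
    in the least-squares sense (simplified E-step-style alignment,
    assuming both series are sampled on the same even phase grid)."""
    errs = [np.sum((y - np.roll(template, s)) ** 2) for s in range(len(y))]
    return int(np.argmin(errs))
```

In the full algorithm such alignments are computed for every series against every group template, so that group membership and phase are inferred jointly.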

We present the first sparse solution for learning the Grouped mixed-effects GP. We show how, given a desired model order, the sparse approximation can be obtained by maximizing a variational lower bound on the marginal likelihood, generalizing ideas from single-task Gaussian processes to handle both the mixed-effects structure and the grouping.
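The single-task building block being generalized here is the well-known variational inducing-point lower bound (Titsias-style) on the GP marginal likelihood. The sketch below computes that single-task bound with a squared-exponential kernel; the grouped mixed-effects extension in the thesis is not reproduced here.

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x[:, None] - y[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def variational_lower_bound(y, K_nn_diag, K_nm, K_mm, sigma2):
    """Inducing-point variational lower bound on the GP log marginal
    likelihood: log N(y | 0, Q + sigma2*I) - tr(K - Q) / (2*sigma2),
    where Q = K_nm K_mm^{-1} K_mn is the Nystrom approximation."""
    n = len(y)
    L = np.linalg.cholesky(K_mm + 1e-8 * np.eye(K_mm.shape[0]))
    A = np.linalg.solve(L, K_nm.T)           # m x n
    Q_nn = A.T @ A                           # K_nm K_mm^{-1} K_mn
    cov = Q_nn + sigma2 * np.eye(n)
    _, logdet = np.linalg.slogdet(cov)
    quad = y @ np.linalg.solve(cov, y)
    log_marg = -0.5 * (n * np.log(2 * np.pi) + logdet + quad)
    trace_term = np.sum(K_nn_diag) - np.trace(Q_nn)
    return log_marg - trace_term / (2 * sigma2)
```

When the inducing inputs coincide with the training inputs, Q equals K and the bound recovers the exact log marginal likelihood; with fewer inducing points it trades accuracy for reduced cost.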

Finally, the thesis investigates the period estimation problem through the lens of machine learning. Using GPs, we propose a novel method for period finding that makes no assumptions about the shape of the periodic function. The algorithm combines gradient-based optimization with grid search and incorporates several mechanisms to overcome the high computational complexity of GP inference. We also propose a novel approach for exploiting domain knowledge, in the form of a probabilistic generative model, and incorporate this knowledge into the period estimation algorithm, yielding significant improvements in the accuracy of period identification.