Data Science Seminar

December 11, 2018, at Télécom Paris (Barrault campus). The seminar was held entirely in English.

The seminar took place from 2 PM to 3 PM (room C48) and featured one talk:

Talk: Shai Ben-David (University of Waterloo): Settling the sample complexity of learning mixtures of Gaussians via sample compression schemes

You can download the slides of this talk.

Abstract: Estimating distributions from observed data is a fundamental
task in statistics that has been studied for more than a century. This task
also arises in applied machine learning. In many scenarios it is common to
assume that the distribution can be modelled using a mixture of Gaussians.

We prove that Θ̃(kd^2/ε^2) samples are necessary and sufficient for
learning a mixture of k Gaussians in R^d, up to error ε in total variation
distance. This improves both the known upper and lower bounds for this
problem. For mixtures of axis-aligned Gaussians, we show that Õ(kd/ε^2)
samples suffice, matching a known lower bound. The upper bound is based on
a novel technique for distribution learning built around a notion of
sample compression. Any class of distributions that admits such a sample
compression scheme can also be learned with few samples. Moreover, if a
class of distributions has such a compression scheme, then so do the
classes of products and mixtures of those distributions. The core of our
main result is showing that the class of Gaussians in R^d admits an
efficient sample compression scheme.
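
As a rough illustration of the bounds above, here is a short Python
sketch that evaluates their dominant terms for concrete values of k, d
and ε. The function names are illustrative only, and since the Θ̃/Õ
notation hides constant and polylogarithmic factors, the numbers should
be read as orders of magnitude.

    def general_bound(k: int, d: int, eps: float) -> float:
        """Dominant term k * d^2 / eps^2 of the sample complexity for
        learning a general mixture of k Gaussians in R^d."""
        return k * d**2 / eps**2

    def axis_aligned_bound(k: int, d: int, eps: float) -> float:
        """Dominant term k * d / eps^2 for axis-aligned mixtures."""
        return k * d / eps**2

    # A mixture of 5 Gaussians in R^100, learned to total variation 0.1:
    print(general_bound(5, 100, 0.1))       # ~5e6 samples
    print(axis_aligned_bound(5, 100, 0.1))  # ~5e4 samples

The sample compression idea can also be sketched on a toy case. The
scheme below is a hypothetical illustration, not the one constructed in
the paper: for a single univariate Gaussian, the encoder retains only two
of the observed points, and a fixed decoder rebuilds a Gaussian from
those two points alone.

    import random

    # Toy illustration of the sample-compression idea, not the paper's
    # actual scheme.

    def encode(sample: list[float]) -> tuple[float, float]:
        """Retain just two observed points: the empirical first and
        third quartiles of the sample."""
        s = sorted(sample)
        return s[len(s) // 4], s[(3 * len(s)) // 4]

    def decode(q1: float, q3: float) -> tuple[float, float]:
        """Rebuild a Gaussian from the retained points, using the fact
        that the inter-quartile range of N(mu, sigma^2) is about
        1.349 * sigma."""
        mu = (q1 + q3) / 2.0
        sigma = (q3 - q1) / 1.349
        return mu, sigma

    random.seed(0)
    data = [random.gauss(2.0, 1.5) for _ in range(10_000)]
    print(decode(*encode(data)))  # roughly (2.0, 1.5)

What makes such a scheme useful is that the output hypothesis is
determined by a constant number of retained points, which is the property
driving the sample-complexity upper bounds above.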

The talk is based on joint work with Hassan Ashtiani, Nicholas J. A.
Harvey, Christopher Liaw, Abbas Mehrabian and Yaniv Plan.

This work received a best paper award at NeurIPS 2018.