## Neural Expectation Maximization @ ICLR’17 Workshop

Our most recent research on unsupervised representation learning, titled “Neural Expectation Maximization”, will be featured at the 5th International Conference on Learning Representations (ICLR) as a workshop paper.

In our paper we introduce *Neural Expectation Maximization (N-EM)*: a novel framework for representation learning that combines *generalized EM* with *neural networks* and can be implemented as an end-to-end differentiable recurrent neural network. N-EM exploits statistical regularities in the data to produce multiple representations that each correspond to a particular conceptual entity. It simultaneously identifies subsets of the input that correspond to conceptual entities and learns a corresponding distributed representation that efficiently captures this information.
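To give a flavor of the idea, here is a minimal NumPy sketch of one such EM-style grouping iteration, under simplifying assumptions: a toy sigmoid "decoder" stands in for the neural network, and the M-step is a plain gradient step rather than the learned RNN update used in N-EM. All names (`decode`, `theta`, `gamma`) are illustrative, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 64, 3                 # number of pixels, number of groups
x = rng.random(D)            # observed (flattened) image, values in [0, 1]
theta = rng.random((K, 8))   # per-group latent representations

W = rng.standard_normal((8, D)) * 0.1  # toy "decoder" weights (assumed)

def decode(theta):
    """Map each group's representation to a per-pixel prediction psi_k."""
    return 1.0 / (1.0 + np.exp(-theta @ W))  # (K, D), sigmoid outputs

sigma2 = 0.25
for _ in range(5):
    psi = decode(theta)                       # (K, D) per-group predictions
    # E-step: soft-assign each pixel to the group that best explains it,
    # via a Gaussian likelihood of x under each group's prediction.
    log_lik = -0.5 * (x - psi) ** 2 / sigma2  # (K, D)
    gamma = np.exp(log_lik - log_lik.max(0))
    gamma /= gamma.sum(0, keepdims=True)      # each pixel's column sums to 1
    # Generalized M-step: nudge each theta_k along the gradient of the
    # gamma-weighted likelihood (in N-EM this update is itself a neural net).
    grad_psi = gamma * (x - psi) / sigma2     # (K, D)
    grad_theta = (grad_psi * psi * (1 - psi)) @ W.T  # chain rule through sigmoid
    theta += 0.5 * grad_theta
```

After a few iterations, `gamma` is a soft segmentation of the pixels into K groups, and each `theta[k]` is a distributed representation of the entity its group covers.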

We apply RNN-EM (a more powerful version of N-EM) to a perceptual grouping task in which the dataset consists of videos containing several objects that fly around in a fixed space. Since each object's structure is shared across the dataset of videos, we expect RNN-EM to learn to group the pixels belonging to each object, separately and independently for each frame.

In the process of grouping we learn a representation that efficiently captures each of the objects. RNN-EM shares weights across groups, so the learned representations all carry the same semantics. These learned representations are symbol-like and useful for many downstream (supervised) learning tasks.

The poster is available online.

UPDATE:

An extended version of this work has been submitted to NIPS 2017 and can be found here.