Our most recent research on unsupervised representation learning, titled “Neural Expectation Maximization”, will be featured at the 5th International Conference on Learning Representations (ICLR) as a workshop paper.
In our paper we introduce Neural Expectation Maximization (N-EM): a novel framework for representation learning that combines generalized EM with neural networks and can be implemented as an end-to-end differentiable recurrent neural network. N-EM exploits statistical regularities in the data to produce multiple representations, each corresponding to a particular conceptual entity. It simultaneously identifies the subsets of the input that correspond to these entities and learns, for each, a distributed representation that efficiently captures this information.
We apply RNN-EM (a more powerful version of N-EM) to a perceptual grouping task in which the dataset consists of videos containing several objects that fly around in a fixed space. Since each object shares its structure across the dataset of videos, we expect RNN-EM to learn to group the pixels belonging to each object, separately and independently for each frame.
In the process of grouping, RNN-EM learns a representation that efficiently captures each of the objects. Because RNN-EM shares weights across groups, every learned representation carries the same semantics. These learned representations are symbol-like and useful for many downstream (supervised) learning tasks.
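To make the grouping idea concrete, here is a minimal NumPy sketch of one EM-style iteration over K groups for a single flattened image. This is an illustrative assumption on our part, not the model from the paper: in N-EM the M-step update is computed by a neural network, and here a plain responsibility-weighted step stands in for it.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 3, 64               # number of groups, number of pixels
x = rng.random(D)          # observed pixels in [0, 1]
mu = rng.random((K, D))    # each group's predicted pixel means

def e_step(x, mu):
    # Soft-assign each pixel to the group that explains it best,
    # using a pixel-wise Bernoulli-style likelihood.
    eps = 1e-6
    mu_c = np.clip(mu, eps, 1 - eps)
    log_p = x * np.log(mu_c) + (1 - x) * np.log(1 - mu_c)  # (K, D)
    gamma = np.exp(log_p - log_p.max(axis=0))
    return gamma / gamma.sum(axis=0)    # responsibilities sum to 1 per pixel

def m_step(x, gamma, mu, lr=0.5):
    # Placeholder update: move each group's prediction toward the pixels
    # it is responsible for. In N-EM a neural network computes this step.
    return mu + lr * gamma * (x - mu)

gamma = e_step(x, mu)
mu = m_step(x, gamma, mu)
```

Iterating these two steps lets the groups specialize: pixels belonging to one entity end up explained by one group, whose parameters then form that entity's representation.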
The poster is available online.
An extended version of this work has been submitted to NIPS 2017 and can be found here.
We recently published our new paper titled “A Wavelet-based Encoding for Neuroevolution” in the Proceedings of the 2016 Genetic and Evolutionary Computation Conference (GECCO).
In our paper we introduce a novel indirect encoding scheme that encodes neural network connection weights as low-frequency wavelet-domain coefficients. A lossy inverse Discrete Wavelet Transform (IDWT) maps these coefficients back to the neural network phenotype. This Wavelet-based Encoding (WBE) builds on a Discrete Cosine Transform (DCT) encoding and, like it, preserves continuity of the genotype-phenotype mapping. Unlike the DCT encoding, however, the WBE additionally provides a variable degree of gene-locality.
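A hedged sketch of such a genotype-to-phenotype mapping, using the Haar wavelet for concreteness: the genotype holds only a few low-frequency coefficients, all high-frequency (detail) coefficients are fixed to zero, and repeated inverse-transform steps expand the genotype to the full weight vector. Function names and sizes here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def inverse_haar_level(approx, detail):
    # One synthesis step of the Haar wavelet transform: interleave the
    # sums and differences of approximation and detail coefficients.
    out = np.empty(2 * approx.size)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

def decode_weights(genotype, n_weights):
    # Only low-frequency coefficients are evolved; the missing detail
    # coefficients are zero, which makes the mapping lossy, as in the WBE.
    signal = np.asarray(genotype, dtype=float)
    while signal.size < n_weights:
        signal = inverse_haar_level(signal, np.zeros_like(signal))
    return signal[:n_weights]

genotype = [0.8, -0.3, 0.5, 0.1]        # 4 evolved coefficients
weights = decode_weights(genotype, 16)  # phenotype: 16 connection weights
```

Because nearby weights in the phenotype are reconstructed from shared low-frequency coefficients, a small change to one gene perturbs a localized region of the network, which is the gene-locality property discussed above.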
In our experiments we observe that the WBE yields superior performance on the Octopus-arm Control task (a benchmark previously used in a Reinforcement Learning competition) compared to the DCT encoding. We argue that this is due to the added gene-locality of the WBE, which improves the efficiency of training neural networks by means of evolutionary search. A more general, theoretically motivated intuition underlying this reasoning is presented in the paper.
Other novelties in our approach arise from the flexibility of the WBE. Because the wavelet basis function used to compute the IDWT can be chosen freely, we were able to augment the WBE with a dynamic basis function that is optimised alongside the low-frequency wavelet coefficients by the same evolutionary algorithm. The resulting approach optimises the mapping and the coefficients simultaneously.
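One way to picture a dynamic basis, sketched here under our own assumptions rather than the paper's parameterisation: let the first gene define an orthonormal two-tap synthesis filter through an angle theta (theta = pi/4 recovers the Haar basis), and let the remaining genes be the low-frequency coefficients. Both then sit in one flat genotype, so a single evolutionary algorithm can optimise basis and coefficients jointly.

```python
import numpy as np

def decode(genotype, n_weights):
    # First gene parameterises the synthesis filter; the rest are the
    # low-frequency coefficients. This parameterisation is illustrative.
    theta, coeffs = genotype[0], np.asarray(genotype[1:], dtype=float)
    c, s = np.cos(theta), np.sin(theta)
    signal = coeffs
    while signal.size < n_weights:
        out = np.empty(2 * signal.size)
        out[0::2] = c * signal   # detail coefficients are zero, so only
        out[1::2] = s * signal   # the approximation branch contributes
        signal = out
    return signal[:n_weights]

# Basis angle and coefficients evolve together in one genotype.
genotype = [np.pi / 4, 0.8, -0.3, 0.5, 0.1]
weights = decode(genotype, 16)
```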
The conference presentation slides and code are available online.
van Steenkiste, S., Koutník, J., Driessens, K. and Schmidhuber, J., 2016, July. A Wavelet-based Encoding for Neuroevolution. In Proceedings of the 2016 Genetic and Evolutionary Computation Conference (pp. 517-524). ACM.
Koutník, J., Gomez, F. and Schmidhuber, J., 2010, July. Evolving Neural Networks in Compressed Weight Space. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation (pp. 619-626). ACM.