Unsupervised Representation Learning with Autoencoders
Abstract
Despite recent progress in machine learning and deep learning, unsupervised learning remains a largely unsolved problem. It is widely recognized that unsupervised learning algorithms capable of learning useful representations are needed to solve problems with limited label information. In this thesis, we study the problem of learning unsupervised representations with autoencoders, and propose regularization techniques that enable autoencoders to learn useful representations of data in unsupervised and semi-supervised settings. First, we exploit sparsity as a generic prior on autoencoder representations and propose sparse autoencoders that learn sparse representations with very fast inference, making them well suited to large problem sizes where conventional sparse coding algorithms cannot be applied. Next, we study autoencoders from a probabilistic perspective and propose generative autoencoders that use a generative adversarial network (GAN) to match the distribution of the autoencoder's latent code to a pre-defined prior. We show that these generative autoencoders can learn posterior approximations that are more expressive than the tractable densities often used in variational inference. We demonstrate the performance of the methods developed in this thesis on real-world image datasets and show their applications in generative modeling, clustering, semi-supervised classification, and dimensionality reduction.
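To make the sparsity prior concrete, the following is a minimal NumPy sketch of a sparse autoencoder: a single-hidden-layer autoencoder trained by plain gradient descent on reconstruction error plus an L1 penalty on the hidden codes. All hyperparameters, variable names, and the toy data are illustrative assumptions, not details from the thesis, whose sparse autoencoders may use a different sparsity mechanism.

```python
import numpy as np

# Sketch of a sparse autoencoder: one ReLU hidden layer, trained with
# gradient descent on MSE reconstruction error + lam * L1(hidden codes).
# Everything here (sizes, lam, lr, data) is an illustrative assumption.

rng = np.random.default_rng(0)

n_in, n_hid = 8, 16                       # over-complete hidden layer
X = rng.standard_normal((200, n_in))      # toy "dataset"

W_enc = 0.1 * rng.standard_normal((n_in, n_hid))
b_enc = np.zeros(n_hid)
W_dec = 0.1 * rng.standard_normal((n_hid, n_in))
b_dec = np.zeros(n_in)

lam, lr = 0.05, 0.01                      # sparsity weight, learning rate
losses = []

for step in range(500):
    H = np.maximum(X @ W_enc + b_enc, 0.0)   # codes: one feed-forward pass
    X_hat = H @ W_dec + b_dec                # reconstruction
    err = X_hat - X
    losses.append((err ** 2).sum() / len(X) + lam * np.abs(H).sum() / len(X))

    # Backpropagation of the loss above.
    g_Xhat = 2.0 * err / len(X)
    g_W_dec = H.T @ g_Xhat
    g_b_dec = g_Xhat.sum(axis=0)
    g_H = g_Xhat @ W_dec.T + lam * np.sign(H) / len(X)
    g_pre = g_H * (H > 0)                    # ReLU derivative
    g_W_enc = X.T @ g_pre
    g_b_enc = g_pre.sum(axis=0)
    for p, g in ((W_enc, g_W_enc), (b_enc, g_b_enc),
                 (W_dec, g_W_dec), (b_dec, g_b_dec)):
        p -= lr * g

# After training, computing a code is one matrix multiply and a ReLU.
codes = np.maximum(X @ W_enc + b_enc, 0.0)
```

The point of the sketch is the "very fast inference" property mentioned in the abstract: once trained, the encoder produces a sparse code in a single forward pass, whereas conventional sparse coding must solve an iterative optimization problem for every input. The L1 weight `lam` trades reconstruction fidelity against sparsity; the ReLU nonlinearity lets the penalty drive many activations exactly to zero.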