Browsing by Author "Makhzani, Alireza"
Now showing 1 - 2 of 2
Item: Compressed Sensing for Jointly Sparse Signals (2012-11-22)
Makhzani, Alireza; Valaee, Shahrokh; Electrical and Computer Engineering

Compressed sensing is an emerging field which proposes that a small collection of linear projections of a sparse signal contains enough information for perfect reconstruction of the signal. In this thesis, we study the general problem of modeling and reconstructing spatially or temporally correlated sparse signals in a distributed scenario. The correlation among signals provides additional information, which can be captured by joint sparsity models. After modeling the correlation, we propose two different reconstruction algorithms that successfully exploit this additional information. The first is a very fast greedy algorithm, suitable for large-scale problems, that can exploit spatial correlation. The second is based on a thresholding algorithm and can exploit both temporal and spatial correlation. We also generalize the standard joint sparsity model and propose a new model for capturing correlation in sensor networks.

Item: Unsupervised Representation Learning with Autoencoders (2018-06)
Makhzani, Alireza; Frey, Brendan; Electrical and Computer Engineering

Despite recent progress in machine learning and deep learning, unsupervised learning remains a largely unsolved problem. It is widely recognized that unsupervised learning algorithms that can learn useful representations are needed for solving problems with limited label information. In this thesis, we study the problem of learning unsupervised representations using autoencoders, and propose regularization techniques that enable autoencoders to learn useful representations of data in unsupervised and semi-supervised settings. First, we exploit sparsity as a generic prior on the representations of autoencoders and propose sparse autoencoders that can learn sparse representations with very fast inference, making them well suited to large problem sizes where conventional sparse coding algorithms cannot be applied. Next, we study autoencoders from a probabilistic perspective and propose generative autoencoders that use a generative adversarial network (GAN) to match the distribution of the autoencoder's latent code with a pre-defined prior. We show that these generative autoencoders can learn posterior approximations that are more expressive than the tractable densities often used in variational inference. We demonstrate the performance of the methods developed in this thesis on real-world image datasets and show their applications in generative modeling, clustering, semi-supervised classification, and dimensionality reduction.
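The first item above describes a fast greedy algorithm for recovering jointly sparse (common-support) signals, but the abstract does not name the specific algorithm. As an illustration only, the sketch below follows simultaneous orthogonal matching pursuit (SOMP), a standard greedy method for this setting; the `somp` function, its dimensions, and the synthetic example are assumptions for illustration, not the thesis's method.

```python
import numpy as np

def somp(Phi, Y, k):
    """Illustrative greedy recovery of jointly sparse signals (SOMP-style sketch).

    Finds a row-sparse X with at most k nonzero rows such that Y ~= Phi @ X,
    where the columns of Y are measurements of signals sharing a common support.
    """
    m, n = Phi.shape
    residual = Y.copy()
    support = []
    for _ in range(k):
        # Score each atom by its total correlation with the residuals of all signals.
        scores = np.linalg.norm(Phi.T @ residual, axis=1)
        scores[support] = -np.inf                    # do not reselect chosen atoms
        support.append(int(np.argmax(scores)))
        # Jointly re-estimate coefficients of all signals on the current support.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        residual = Y - Phi[:, support] @ coeffs
    X = np.zeros((n, Y.shape[1]))
    X[support, :] = coeffs
    return X, sorted(support)

# Example (synthetic, illustrative): two signals sharing the same 5-atom support.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100))
X_true = np.zeros((100, 2))
X_true[rng.choice(100, 5, replace=False), :] = rng.standard_normal((5, 2))
X_hat, support = somp(Phi, Phi @ X_true, k=5)
```

The key point the sketch captures is that atom selection pools evidence across all signals (the row norm of the correlation matrix), which is how joint sparsity supplies the extra information the abstract refers to.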
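The second item above describes generative autoencoders that use a GAN to match the distribution of the latent code with a pre-defined prior. Below is a minimal PyTorch-style sketch of that idea, assuming a standard Gaussian prior, a mean-squared-error reconstruction loss, and illustrative layer sizes; none of these specifics come from the thesis itself.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not values from the thesis).
x_dim, z_dim, h_dim = 784, 8, 256

encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 1))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(encoder.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x):
    # 1) Reconstruction phase: ordinary autoencoder loss.
    recon_loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    opt_ae.zero_grad()
    recon_loss.backward()
    opt_ae.step()

    # 2) Regularization phase: the discriminator separates prior samples
    #    from encoder codes (assumed prior here: N(0, I)).
    z_fake = encoder(x).detach()
    z_real = torch.randn_like(z_fake)
    d_loss = bce(discriminator(z_real), torch.ones(x.size(0), 1)) + \
             bce(discriminator(z_fake), torch.zeros(x.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 3) The encoder acts as the GAN generator and tries to fool the
    #    discriminator, pushing the code distribution toward the prior.
    g_loss = bce(discriminator(encoder(x)), torch.ones(x.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return recon_loss.item(), d_loss.item(), g_loss.item()
```

The sketch shows why such a model is generative: once training matches the code distribution to the prior, sampling z from the prior and decoding it yields new data, while the encoder provides a posterior approximation that is not restricted to a tractable density.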