Sparse coding provides a class of algorithms for finding succinct representations
of stimuli; given only unlabeled input data, it discovers basis functions that capture
higher-level features in the data. However, finding sparse codes remains a
very difficult computational problem. In this paper, we present efficient sparse
coding algorithms that are based on iteratively solving two convex optimization
problems: an L1-regularized least squares problem and an L2-constrained least
squares problem. We propose novel algorithms to solve both of these optimization
problems. Our algorithms result in a significant speedup for sparse coding,
allowing us to learn larger sparse codes than was possible with previously described
algorithms. We apply these algorithms to natural images and demonstrate that the
inferred sparse codes exhibit end-stopping and non-classical receptive field surround
suppression and, therefore, may provide a partial explanation for these two
phenomena in V1 neurons.
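The alternation described above can be sketched in a few lines of NumPy. This is an illustrative toy implementation, not the paper's specialized solvers: it uses generic ISTA (iterative soft-thresholding) for the L1-regularized coefficient step and a projected gradient step for the L2-constrained basis step; the function name, step sizes, and iteration counts are all assumptions made for the example.

```python
import numpy as np

def sparse_coding(X, n_basis=8, lam=0.1, n_iter=20, seed=0):
    """Toy sketch of sparse coding by alternating convex minimization,
    so that X ~= B @ S with S sparse and each basis column bounded.

    L1 step: ISTA iterations on S (illustrative stand-in solver).
    L2 step: one projected gradient step on B, projecting each
    column onto the unit L2 ball (illustrative stand-in solver).
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    B = rng.standard_normal((d, n_basis))
    B /= np.linalg.norm(B, axis=0)            # enforce ||b_j||_2 <= 1
    S = np.zeros((n_basis, n))
    for _ in range(n_iter):
        # --- L1-regularized least squares in S (ISTA) ---
        L = np.linalg.norm(B, 2) ** 2         # Lipschitz constant of the gradient
        for _ in range(50):
            G = B.T @ (B @ S - X)             # gradient of 0.5||X - BS||^2
            S = S - G / L
            # soft-threshold: proximal operator of (lam/L)*||.||_1
            S = np.sign(S) * np.maximum(np.abs(S) - lam / L, 0.0)
        # --- L2-constrained least squares in B (projected gradient) ---
        Gb = (B @ S - X) @ S.T                # gradient w.r.t. B
        step = 1.0 / (np.linalg.norm(S, 2) ** 2 + 1e-8)
        B = B - step * Gb
        # project each column back onto the L2 ball ||b_j||_2 <= 1
        norms = np.maximum(np.linalg.norm(B, axis=0), 1.0)
        B /= norms
    return B, S
```

Each subproblem is convex on its own (the joint problem is not), which is what makes this alternating scheme tractable; the paper's contribution is solving each subproblem much faster than generic methods like the ones sketched here.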