Abstract
Despite growing interest in using sparse-coding-based methods for image classification and retrieval, progress in this direction has been limited by the high computational cost of generating each image's sparse representation. To overcome this problem, we leverage sparsity-based dictionary learning and hash-based feature selection to build a novel unsupervised method that efficiently picks out a query image's most important high-level features; the selected features effectively pinpoint the group of images to which the query would be visually perceived as similar. Moreover, the method adapts to the retrieval database at hand. Preliminary results based on the L1 feature map demonstrate the method's efficiency and accuracy from a visual-cognition perspective.
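To make the pipeline sketched in the abstract concrete, the following is a minimal illustrative sketch, not the paper's implementation: it learns a sparse dictionary without labels, codes a query image, keeps only the query's largest-magnitude (L1) code entries as its "most important" features, and ranks database images on those entries alone. The use of scikit-learn's MiniBatchDictionaryLearning, cosine ranking, and a simple top-k magnitude rule standing in for the hash-based feature selection are all assumptions for illustration.

```python
# Minimal sketch (assumptions noted above): sparse coding with a learned
# dictionary, then query-adaptive feature selection and retrieval.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Toy data: each row is an image descriptor (placeholder for real features).
database = rng.standard_normal((200, 64))
query = rng.standard_normal(64)

# 1. Unsupervised dictionary learning with a sparsity (L1) penalty.
dico = MiniBatchDictionaryLearning(
    n_components=32, alpha=1.0, transform_algorithm="lasso_lars",
    random_state=0,
)
dico.fit(database)

# 2. Sparse codes for the database images and the query.
db_codes = dico.transform(database)            # shape (200, 32)
q_code = dico.transform(query[None, :])[0]     # shape (32,)

# 3. Keep the query's k most important atoms (largest |coefficient|);
#    this is a simplified stand-in for the hash-based selection step.
k = 5
selected = np.argsort(np.abs(q_code))[-k:]

# 4. Rank database images using only the selected features.
q_sel = q_code[selected]
db_sel = db_codes[:, selected]
sims = db_sel @ q_sel / (
    np.linalg.norm(db_sel, axis=1) * np.linalg.norm(q_sel) + 1e-12
)
top_matches = np.argsort(-sims)[:10]
print("Top retrieved indices:", top_matches)
```

Because the dictionary and the selected atoms are recomputed from whatever database is currently loaded, the ranking naturally adapts to the retrieval collection, which is the adaptivity property the abstract refers to.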