Abstract
Rotation in closed contour recognition is a puzzling nuisance for most algorithms. In this paper we address three fundamental issues raised by rotation in shapes: 1) is alignment among shapes necessary? 2) if not, how can the information in different rotations be exploited? and 3) how can rotation-unaware local features be used for rotation-aware shape recognition?
We argue that these issues originate from the use of hand-crafted, rotation-unfriendly features and measurements.
Our goal is therefore to learn a set of hierarchical features that describe all rotated versions of a shape as one class, while retaining the ability to distinguish different such classes. We propose to use as many rotated versions of each shape as possible as training samples, and to learn the hierarchical feature representation with a convolutional neural network. We further show that our method is very efficient, because the network responses of all possible shifted versions of the same shape can be computed efficiently by reusing information in the overlapping areas. We tested the
algorithm on three real datasets: the Swedish Leaves dataset, the ETH-80 shape dataset, and a subset of the recently collected Leafsnap dataset. Our approach, built on the curvature scale space, outperformed the state of the art.
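As a rough illustration of the augmentation idea summarized above (a rotation of a closed contour corresponds, up to resampling, to a cyclic shift of its 1-D descriptor), the following Python sketch enumerates the shifted versions of a toy curvature descriptor as training samples. The function names and the curvature approximation are our own illustrative assumptions, not the paper's implementation.

    # Minimal sketch (not the authors' code): rotated versions of a closed
    # contour are generated as cyclic shifts of its 1-D curvature descriptor.
    import numpy as np

    def contour_curvature(points):
        """Approximate curvature at each sample of a closed contour (N x 2 array)."""
        d1 = np.gradient(points, axis=0)   # first derivatives (x', y')
        d2 = np.gradient(d1, axis=0)       # second derivatives (x'', y'')
        num = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
        den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5 + 1e-12
        return num / den

    def rotated_training_samples(points):
        """Enumerate all cyclic shifts of the descriptor as rotated training samples."""
        desc = contour_curvature(points)
        return np.stack([np.roll(desc, k) for k in range(len(desc))])

    if __name__ == "__main__":
        t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
        ellipse = np.stack([2.0 * np.cos(t), np.sin(t)], axis=1)  # toy closed contour
        samples = rotated_training_samples(ellipse)
        print(samples.shape)  # (128, 128): one descriptor per starting point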