Preliminary annotations were collected from local tourist
guides. In order to extend the utility of our mobile Heritage
App, we want to reach out to the scholarly community in
the domains of history and architecture, so that we can
present more precise and detailed information to the end
users.
We have demonstrated a mobile vision app for scalable annotation
retrieval at tourist heritage sites. The annotation
retrieval pipeline is based on robust BoW-based image retrieval
and matching over an underlying database
of annotated images. We see this as an initial step toward
large-scale image and annotation retrieval applications.
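The core of such a BoW retrieval step can be sketched as follows. This is a minimal numpy illustration, not our actual pipeline: the vocabulary size, descriptor dimensionality, and function names are all hypothetical, and real systems would use an inverted index rather than dense similarity.

```python
import numpy as np

def quantize(descriptors, vocabulary):
    # Assign each local descriptor to its nearest visual word
    # (hard assignment against a pre-trained vocabulary).
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def bow_histogram(words, vocab_size):
    # Build an L2-normalised bag-of-words histogram for one image.
    h = np.bincount(words, minlength=vocab_size).astype(float)
    n = np.linalg.norm(h)
    return h / n if n > 0 else h

def rank_database(query_hist, db_hists):
    # Cosine similarity between the query histogram and every
    # database image; return indices sorted from best to worst.
    sims = db_hists @ query_hist
    return np.argsort(-sims)
```

Once images are ranked, the annotations attached to the top-ranked database image can be returned to the mobile client.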
we have experimented with databases of 5K-10K images
and adopted a scalable approach designed to work
with databases of 100K or 1M images. We want to extend
our mobile app to retrieve successfully from a larger image
database with similar efficiency. Although SIFT is a
robust descriptor, extracting SIFT descriptors is computationally
expensive on a mobile phone and consumes 35-40%
of the total annotation retrieval time. Using faster
and more compact descriptors could be an interesting alternative.
These problems define the future scope of our work.
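One reason compact binary descriptors (e.g. ORB or BRIEF, as provided by OpenCV) are attractive on mobile devices is that they are matched with the Hamming distance, an XOR followed by a bit count, rather than the floating-point Euclidean distance SIFT requires. The sketch below illustrates this matching step only; the packed 256-bit descriptor layout and the function name are illustrative assumptions, not our implementation.

```python
import numpy as np

def match_binary(query, database):
    # query: one packed binary descriptor, shape (32,) uint8 (256 bits).
    # database: N packed descriptors, shape (N, 32) uint8.
    # Hamming distance = popcount(XOR), computed per database row.
    xor = np.bitwise_xor(database, query)
    dists = np.unpackbits(xor, axis=1).sum(axis=1)
    # Return the index of the nearest database descriptor.
    return int(dists.argmin())
```

Because the inner loop reduces to bitwise operations, this style of matching is typically far cheaper than SIFT's 128-dimensional float comparisons, which is why we consider such descriptors a promising direction.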