Computer scientists have created a new system for mobile users to quickly determine their location indoors without communicating with the cloud, networks, or other devices.
The battery-saving scheme uses image recognition and “hashing,” a method that reduces key details in a photo to short strings of numbers called hashes.
To determine a location, the indoor mobile positioning system hashes a photo from the user’s camera and compares it against a pre-downloaded, highly compressed location database called a hash table.
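The hashing step can be sketched with a SimHash-style locality-sensitive hash, in which similar photos tend to produce the same short hash. This is an illustrative toy, not CaPSuLe's published algorithm: the feature vectors here are placeholders for real image descriptors, and the dimensions are kept small for readability.

```python
import random

random.seed(0)  # fixed seed so the toy hash is reproducible

DIM = 8     # toy feature-vector length (real image descriptors are much longer)
BITS = 16   # bits per hash; "key details reduced to short strings of numbers"

# SimHash-style locality-sensitive hashing: one random hyperplane per bit.
PLANES = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def lsh(features):
    """Map a feature vector to a BITS-bit integer. Vectors pointing in
    similar directions agree on most bits, so near-duplicate photos tend
    to land on the same (or nearby) hashes."""
    h = 0
    for plane in PLANES:
        dot = sum(f * p for f, p in zip(features, plane))
        h = (h << 1) | (1 if dot >= 0 else 0)
    return h
```

Because each hash is just a small integer, comparing a camera frame against thousands of reference fingerprints is far cheaper than comparing raw images pixel by pixel.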
The system is called CaPSuLe, short for “Camera-based Positioning System Using Learning.”
In tests on a commercially available smartphone, CaPSuLe calculated locations in less than two seconds with greater than 92 percent accuracy using less than 4 joules of energy, says system co-inventor Anshumali Shrivastava, assistant professor of computer science at Rice University. “The core of our system is a hashing-based image-matching algorithm that is more than 500 times cheaper—both in terms of energy and computational overhead—than state-of-the-art image-matching techniques.”
3 problems for mobile app designers
Shrivastava says CaPSuLe is a proof-of-concept application that uses a combination of machine learning and inexact computing to address three of the primary problems facing mobile application designers.
“Privacy, computations, and energy are the big challenges,” he says. “Inexact computing helps with all three. In short, it allows us to determine answers with something less than 100 percent confidence. There are many situations where a minuscule loss of confidence, say 1 percent or less, works just as well as the golden solution. Yet that tiny difference in accuracy can give us exponential gains in computations and energy.
“Certainty, or confidence, is a resource that can be traded, and as always, the sweet spot is not the extreme.”
For example, a traditional brute-force image-matching technique that Shrivastava and colleagues used for comparison consumed more than 500 times the energy of CaPSuLe and took almost 17 minutes to complete a single location query when computations were performed on the mobile device. For all that extra energy and time, the accuracy improved to 93.4 percent, less than 2 percentage points better than CaPSuLe's.
A ‘cloudless’ option
In describing how CaPSuLe might be used, Shrivastava cites the example of a shopping mall. The mall owner would need a gallery of images of the interior of the mall; CaPSuLe would scan those images for key features like store marquees, escalators, benches, kiosks, etc. Rather than storing the images themselves, the system stores a table of hashes, which serve as image fingerprints. These fingerprints are lightweight and can be computed very quickly, Shrivastava says.
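The shopping-mall workflow splits into an offline indexing phase and an online lookup phase, which can be sketched as below. The `fingerprint` function and the sample feature vectors are placeholders standing in for a real image-hashing pipeline, so treat this as an illustration of the table-of-fingerprints idea rather than CaPSuLe's actual implementation.

```python
from collections import defaultdict

def fingerprint(features, bits=16):
    """Placeholder hash: quantize the features and fold them into a
    short integer. Stands in for a real image-hashing function."""
    h = 0
    for f in features:
        h = (h * 31 + int(f * 100)) % (1 << bits)
    return h

# --- Offline: the mall owner indexes reference photos once. ---
# The table maps hash -> locations; no raw images are stored.
hash_table = defaultdict(list)

reference_photos = [
    ([0.12, 0.80, 0.33], "food court, level 1"),
    ([0.90, 0.05, 0.41], "north escalator, level 2"),
]
for features, location in reference_photos:
    hash_table[fingerprint(features)].append(location)

# --- Online: the phone hashes its camera frame and looks it up
# locally, with no network round-trip to the cloud. ---
def locate(query_features):
    return hash_table.get(fingerprint(query_features), ["unknown"])
```

The exact-match lookup is a simplification: a real locality-sensitive scheme typically uses several hash tables so that small differences between the query photo and the reference photo still produce a match.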
To test CaPSuLe, study coauthors from Korea’s Seoul National University made a CaPSuLe app that ran on a smartphone. Tests were conducted in a Seoul shopping mall, and the hash table was prepared using 871 reference photos.
“Cloud-based machine-learning applications are getting a great deal of attention, but cloud-based solutions have inherent privacy drawbacks, and they are typically computationally and energy-intensive,” Shrivastava says. “CaPSuLe shows that a ‘cloudless,’ probabilistic approach can be a viable and more sustainable alternative.”
This effort is a part of Rice University’s Center for Computing at the Margins (RUCCAM), which study coauthor Krishna Palem leads. CaPSuLe was presented in September at the Institute of Electrical and Electronics Engineers System-on-Chip Conference (IEEE SOCC) in Seattle.
Source: Rice University