CARNEGIE MELLON (US) — New software can automatically detect the sometimes-subtle features—like street signs and balcony railings—that give Paris and other cities their distinctive looks.
The visual data mining software analyzed more than 250 million visual elements gleaned from 40,000 Google Street View images of Paris, London, New York, Barcelona, and eight other cities to find those that were both frequent and distinctive enough to discriminate one city from the others.
This yielded sets of geo-informative visual elements unique to each city, such as cast-iron balconies in Paris, fire escapes in New York City, and bay windows in San Francisco.
The discovered visual elements can be useful for a variety of computational geography tasks. Examples include mapping architectural correspondences and influences within and across cities, or finding representative elements at different geo-spatial scales such as a continent, a city, or a specific neighborhood.
Researchers from Carnegie Mellon University and INRIA/École Normale Supérieure in Paris presented their findings at SIGGRAPH 2012, the International Conference on Computer Graphics and Interactive Techniques, at the Los Angeles Convention Center.
Alexei Efros, associate professor of robotics and computer science at Carnegie Mellon, notes that although finding patterns in very large databases—so-called Big Data mining—is widely used, it has so far been limited to text or numerical data.
“Visual data is much more difficult, so the field of visual data mining is still in its infancy, but I believe it holds a lot of promise. Our data mining technique was able to go through millions of image patches automatically—something that no human would be patient enough to do,” says Efros, who collaborated with colleagues including Abhinav Gupta, assistant research professor of robotics, and Carl Doersch, a Ph.D. student in the Machine Learning Department.
“In the long run, we wish to automatically build a digital visual atlas of not only architectural but also natural geo-informative features for the entire planet.”
For this study, the researchers started with 25,000 randomly selected visual elements from city images gathered from Google Street View. A machine learning program then analyzed these visual elements to determine which details made them different from similar visual elements in other cities.
After several iterations, the software identified the top-scoring patches for identifying a city. For Paris, those patches corresponded to doors, balconies, windows with railings, street signs (the shape and color of the signs, not the street names on the signs), and special Parisian lampposts.
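The idea of ranking patches by both frequency and discriminativeness can be illustrated with a toy sketch. This is not the researchers' actual implementation (which iteratively trains discriminative classifiers over millions of patches); it is a minimal nearest-neighbour version, assuming patches have already been reduced to feature vectors, with hypothetical names like `mine_geo_informative` chosen for illustration.

```python
import numpy as np

def mine_geo_informative(features, labels, target, k=5):
    """Score each candidate patch from the target city.

    A patch is geo-informative if most of its k nearest neighbours
    (by feature distance) also come from the target city
    (discriminative), and those neighbours are close by, meaning the
    pattern recurs often (a rough proxy for frequency).
    """
    scores = []
    for i in np.flatnonzero(labels == target):
        d = np.linalg.norm(features - features[i], axis=1)
        d[i] = np.inf                              # exclude the patch itself
        nn = np.argsort(d)[:k]                     # k nearest neighbours
        purity = np.mean(labels[nn] == target)     # discriminativeness
        closeness = 1.0 / (1.0 + d[nn].mean())     # frequency proxy
        scores.append((i, purity * closeness))
    scores.sort(key=lambda t: -t[1])               # best patches first
    return scores

# Synthetic demo: "paris" patches cluster tightly, "other" patches elsewhere.
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0.0, 0.1, (20, 4)),    # paris-like patches
                      rng.normal(3.0, 0.1, (20, 4))])   # patches from elsewhere
labels = np.array(["paris"] * 20 + ["other"] * 20)
ranked = mine_geo_informative(features, labels, "paris")
```

In the real system this scoring is interleaved with retraining: the top-ranked patches seed discriminative detectors, which re-score the corpus, and the loop repeats until the element set stabilizes.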
It had more trouble identifying geo-informative elements for US cities, which the researchers attribute to the relative lack of stylistic coherence in American cities, with their melting pot of styles and influences.
“We let the data speak for itself,” says Gupta, noting the entire process is automated, yet produces a set of images that convey a better stylistic feel for a city than a set of random images.
Doersch says this process requires a significant amount of computing time, keeping 150 processors working overnight. By comparison, art directors for the 2007 Pixar movie Ratatouille spent a week running around Paris taking photos so they could capture the look and feel of Paris in their computer model of the city.
In addition to Efros, Gupta, and Doersch, the research team included Saurabh Singh, a former research assistant in the Robotics Institute, and Josef Sivic, a researcher at INRIA/Ecole Normale Supérieure.
The Department of Defense, Google, National Science Foundation, EIT-ICT, the Office of Naval Research, and MSR-INRIA funded the study.
More news from Carnegie Mellon University: www.cmu.edu/news/