A new system can accurately locate shooters based on video recordings from as few as three smartphones, researchers report.
When researchers demonstrated the system using three video recordings from the 2017 mass shooting in Las Vegas that left 58 people dead and hundreds wounded, the system correctly estimated the shooter’s actual location—the north wing of the Mandalay Bay hotel. The estimate was based on three gunshots fired within the first minute of what would be a prolonged massacre.
Alexander Hauptmann, a research professor in Carnegie Mellon University’s Language Technologies Institute, says the system, called Video Event Reconstruction and Analysis (VERA), won’t necessarily replace the commercial microphone arrays that public safety officials already use to locate shooters. It could, however, serve as a useful supplement when such arrays aren’t available.
One key motivation for assembling VERA was to create a tool that human rights workers and journalists who investigate war crimes, terrorist acts, and human rights violations could use, Hauptmann says.
“Military and intelligence agencies are already developing these types of technologies,” says Jay D. Aronson, a professor of history and director of the Center for Human Rights Science. “We think it’s crucial for the human rights community to have the same types of tools. It provides a necessary check on state power.”
Combining technologies to find shooters
Hauptmann says he used his expertise in video analysis to help investigators analyze events such as the 2014 Maidan massacre in Ukraine, which left at least 50 antigovernment protesters dead. Inspired by that work—and the insight of ballistics experts and architecture colleagues from the firm SITU Research—Hauptmann, Aronson, and Junwei Liang, a PhD student in language and information technology, have pulled together several technologies for processing video, while automating their use as much as possible.
VERA uses machine learning techniques to synchronize the video feeds and calculate the position of each camera based on what that camera is seeing. But it’s the audio from the video feeds that’s pivotal in localizing the source of the gunshots, Hauptmann says.
Specifically, the system measures the time delay between the arrival of the crack produced by a supersonic bullet’s shock wave and the arrival of the muzzle blast, which travels at the speed of sound. It also uses the audio to identify the type of gun fired, which indicates the bullet’s speed. From these two pieces of information, VERA can calculate the shooter’s distance from the smartphone.
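The idea behind that distance calculation can be sketched with a simplified model (this is an illustration, not VERA’s actual code): if the bullet travels roughly toward the microphone, the crack arrives after about d/v_bullet and the muzzle blast after d/c, so the delay between them grows linearly with distance. The function name and example figures below are assumptions for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at ~20 degrees C (assumed constant)

def shooter_distance(crack_to_blast_delay_s: float, bullet_speed: float) -> float:
    """Estimate shooter distance (meters) from the delay between the
    supersonic crack and the muzzle blast.

    Simplified model: crack arrives after ~d / v_bullet, muzzle blast
    after d / c, so delay = d * (1/c - 1/v_bullet).
    """
    if bullet_speed <= SPEED_OF_SOUND:
        raise ValueError("model only applies to supersonic bullets")
    return crack_to_blast_delay_s / (1.0 / SPEED_OF_SOUND - 1.0 / bullet_speed)

# Illustrative example: a rifle round at ~980 m/s with a 0.5 s delay
d = shooter_distance(0.5, 980.0)  # roughly 264 m
```

A real system must also account for the geometry of the bullet’s path relative to the microphone (the shock wave spreads as a cone behind the bullet), which this sketch ignores.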
“When we began, we didn’t think you could detect the crack with a smartphone because it’s really short,” Hauptmann says. “But it turns out today’s cell phone microphones are pretty good.”
By using video from three or more smartphones, VERA can calculate the direction from which the shots were fired—and the shooter’s location—based on the differences in how long it takes the muzzle blast to reach each camera.
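The multi-camera step can be illustrated with a toy time-difference-of-arrival search (again, an assumption-laden sketch, not VERA’s implementation): given camera positions and synchronized muzzle-blast arrival times, find the candidate location whose predicted pairwise arrival-time differences best match the observed ones.

```python
import itertools
import math

C = 343.0  # speed of sound, m/s (assumed constant)

def locate(cameras, arrival_times, grid=200, extent=500.0):
    """Brute-force grid search: return the 2D point whose predicted
    pairwise arrival-time differences best match the observed ones
    (least squares). Only differences are used, since the actual
    firing time is unknown."""
    pairs = list(itertools.combinations(range(len(cameras)), 2))
    observed = {(i, j): arrival_times[i] - arrival_times[j] for i, j in pairs}
    best, best_err = None, float("inf")
    step = 2 * extent / grid
    for gx in range(grid + 1):
        for gy in range(grid + 1):
            x, y = -extent + gx * step, -extent + gy * step
            t = [math.hypot(x - cx, y - cy) / C for cx, cy in cameras]
            err = sum((t[i] - t[j] - observed[(i, j)]) ** 2 for i, j in pairs)
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Synthetic check with a known shooter position and three cameras
cams = [(0.0, 0.0), (300.0, 50.0), (-100.0, 200.0)]
true_pos = (120.0, -80.0)
times = [math.hypot(true_pos[0] - cx, true_pos[1] - cy) / C for cx, cy in cams]
est = locate(cams, times)  # lands within one grid cell of true_pos
```

Three cameras is the minimum that makes this 2D problem well posed; more recordings over-determine the solution and reduce the effect of timing noise.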
Open source software
With mass protests proliferating in places such as Hong Kong, Egypt, and Iraq, identifying where a shot originated can be critical to determining whether protesters, police, or other groups are responsible when a shooting takes place, Aronson says.
But VERA is not limited to detecting gunshots or the location of shooters. It is an event analysis system that can be used to locate a variety of other sounds relevant to human rights and war crimes investigations, he says. He and Hauptmann hope that other groups will add functionalities to the open-source software.
“Once it’s open source, the journalism and human rights communities can build on it in ways we don’t have the imagination for or time to do,” Aronson adds.
The researchers presented VERA and released it as open-source code last month at the Association for Computing Machinery’s International Conference on Multimedia in Nice, France.
Support for the work came from the National Institute of Standards and Technology, the MacArthur Foundation, and the Oak Foundation.
Source: Carnegie Mellon University