Researchers have created an algorithm that works with an extremely sensitive laser system, which bounces light off nearby surfaces, to help self-driving cars see around corners.
Imagine that a driverless car is making its way through a winding neighborhood street, about to take a sharp turn onto a road where a child’s ball is rolling across the pavement. Although no person in the car can see that ball, the car stops to avoid it.
This scenario is one of many that researchers can envision for a system that can produce images of objects hidden from view. They’re focused on applications for autonomous vehicles, some of which already have similar laser-based systems for detecting objects around the car, but other uses could include seeing through foliage from aerial vehicles or giving rescue teams the ability to find people blocked from view by walls and rubble.
“It sounds like magic but the idea of non-line-of-sight imaging is actually feasible,” says Gordon Wetzstein, assistant professor of electrical engineering at Stanford University and senior author of a paper outlining the work, which appears in Nature.
Wetzstein’s team isn’t alone in developing methods for bouncing lasers around corners to capture images of objects, but the exceptionally efficient algorithm the researchers developed to reconstruct the final image may advance the field.
“A substantial challenge in non-line-of-sight imaging is figuring out an efficient way to recover the 3D structure of the hidden object from the noisy measurements,” says David Lindell, graduate student in the Stanford Computational Imaging Lab and coauthor of the paper. “I think the big impact of this method is how computationally efficient it is.”
The researchers set a laser next to a highly sensitive photon detector, which can record even a single particle of light. They shoot pulses of laser light at a wall; invisible to the human eye, those pulses bounce off objects around the corner and return to the wall and then to the detector. Currently, this scan can take from two minutes to an hour, depending on conditions such as lighting and the reflectivity of the hidden object.
Once the scan is finished, the algorithm untangles the paths of the captured photons and, like the mythical image enhancement technology of television crime shows, the blurry blob takes much sharper form. It does all this in less than a second and is so efficient it can run on a regular laptop. Based on how well the algorithm currently works, the researchers think they could speed it up so that it is nearly instantaneous once the scan is complete.
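The paper itself does not spell out the reconstruction math in this article, but the basic idea of recovering hidden geometry from photon travel times can be illustrated with a toy back-projection sketch. The setup, grid sizes, and tolerance below are illustrative assumptions, not the authors' published algorithm: each round-trip time constrains the hidden object to lie on a circle around the corresponding wall point, and voting on a grid finds where those circles intersect.

```python
import numpy as np

# Toy sketch of non-line-of-sight reconstruction by back-projection
# (an illustrative simplification, not the authors' published method).
# Assumes a confocal setup: laser and detector share each wall point.

C = 1.0  # speed of light, arbitrary units

# Laser/detector sample positions along a wall at y = 0
wall_x = np.linspace(-1.0, 1.0, 21)
wall_pts = np.stack([wall_x, np.zeros_like(wall_x)], axis=1)

# Hidden object: a single point around the corner at (x, y)
hidden = np.array([0.3, 0.7])

# Simulated measurement: round-trip photon travel time from each
# wall point to the hidden point and back.
tof = 2.0 * np.linalg.norm(wall_pts - hidden, axis=1) / C

# Back-projection: each travel time puts the hidden point on a circle
# of radius c*t/2 around its wall point; vote on a grid of candidate
# locations and take the cell where the circles all intersect.
xs = np.linspace(-1.0, 1.0, 101)
ys = np.linspace(0.1, 1.5, 71)
votes = np.zeros((len(ys), len(xs)))
for p, t in zip(wall_pts, tof):
    radius = C * t / 2.0
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            d = np.hypot(x - p[0], y - p[1])
            if abs(d - radius) < 0.005:  # thin band around the circle
                votes[i, j] += 1

i, j = np.unravel_index(np.argmax(votes), votes.shape)
print(f"estimated hidden point: ({xs[j]:.2f}, {ys[i]:.2f})")
# expected to recover approximately (0.30, 0.70)
```

Real systems face noisy single-photon histograms and full 3D volumes, which is why an efficient solver matters: naive back-projection like this scales poorly, whereas the team's algorithm reconstructs the scene in under a second on a laptop.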
The team is continuing to work on this system so that it can better handle the variability of the real world and complete the scan more quickly. For example, the distance to the object and the amount of ambient light can make it difficult for the technology to capture the light particles it needs to resolve out-of-sight objects. The technique also depends on analyzing scattered light particles that are intentionally ignored by the laser-based guidance systems, known as LIDAR, currently used in cars.
“We believe the computation algorithm is already ready for LIDAR systems,” says Matthew O’Toole, a postdoctoral scholar in the Stanford Computational Imaging Lab and co-lead author of the paper. “The key question is if the current hardware of LIDAR systems supports this type of imaging.”
Before the system is road ready, it will also have to work better in daylight and with objects in motion, like a bouncing ball or a running child. The researchers did successfully test their technique outside, but it worked only with indirect light.
The technology did perform particularly well at picking out retroreflective objects, such as safety apparel or traffic signs. If the technology were placed on a car today, that car could easily detect things like road signs, safety vests, or road markers, although it might struggle with a person wearing non-reflective clothing.
“This is a big step forward for our field that will hopefully benefit all of us,” Wetzstein says. “In the future, we want to make it even more practical in the ‘wild.’”
The government of Canada, Stanford University’s Office of the Vice Provost for Graduate Education, the National Science Foundation, a Stanford University Terman Faculty Fellowship, and the King Abdullah University of Science and Technology funded the work.
Source: Stanford University