A.I. camera could help self-driving cars ‘see’ better

Researchers have devised a new type of artificially intelligent camera system that can classify images faster and more energy-efficiently.

The image recognition technology that underlies today's autonomous cars and aerial drones depends on artificial intelligence: the computers essentially teach themselves to recognize objects like a dog, a pedestrian crossing the street, or a stopped car. The new camera could one day be small enough to fit in future electronic devices, something that is not possible today because the computers that run artificial intelligence algorithms are too large and slow.

“That autonomous car you just passed has a relatively huge, relatively slow, energy intensive computer in its trunk,” says Gordon Wetzstein, an assistant professor of electrical engineering at Stanford University who led the research. Future applications will need something much faster and smaller to process the stream of images, he says.

Outsourcing the heavy lifting

Wetzstein and Julie Chang, a graduate student and first author of the paper, took a step toward that technology by marrying two types of computers into one, creating a hybrid optical-electrical computer designed specifically for image analysis.

The first layer of the prototype camera is a type of optical computer, which does not require the power-intensive mathematics of digital computing. The second layer is a traditional digital electronic computer.

The optical computer layer operates by physically preprocessing image data, filtering it in multiple ways that an electronic computer would otherwise have to do mathematically. Since the filtering happens naturally as light passes through the custom optics, this layer operates with zero input power. This saves the hybrid system a lot of time and energy that would otherwise be consumed by computation.

“We’ve outsourced some of the math of artificial intelligence into the optics,” Chang says.
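The division of labor can be illustrated in code. Below is a minimal NumPy sketch, not the paper's actual design: the "optical" stage is modeled as a set of fixed convolution filters (which the real system applies passively, at the speed of light and with zero input power), and the "digital" stage is a small linear classifier that acts on the pre-filtered output. The kernels, sizes, and function names are all illustrative assumptions.

```python
import numpy as np

def optical_layer(image, kernels):
    """Apply fixed convolution kernels to an image.

    Stands in for the custom optics: in the hybrid camera this
    filtering happens physically as light passes through the
    optical element, costing no electronic computation.
    """
    h, w = image.shape
    kh, kw = kernels[0].shape
    feature_maps = []
    for k in kernels:
        fm = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(fm.shape[0]):
            for j in range(fm.shape[1]):
                fm[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        feature_maps.append(fm)
    return np.stack(feature_maps)

def digital_layer(features, weights, bias):
    """Small electronic classifier acting on the pre-filtered features.

    Because the expensive filtering is already done, this stage
    needs only a fraction of the computation a full network would.
    """
    scores = weights @ features.ravel() + bias
    return int(np.argmax(scores))

# Illustrative usage: an 8x8 "image", two hypothetical edge-detecting
# kernels for the optics, and random classifier weights for 3 classes.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
kernels = [np.array([[1.0, -1.0], [1.0, -1.0]]),
           np.array([[1.0, 1.0], [-1.0, -1.0]])]
features = optical_layer(image, kernels)       # shape (2, 7, 7)
weights = rng.random((3, features.size))
label = digital_layer(features, weights, np.zeros(3))
```

In the real device the `optical_layer` work is free, so only the `digital_layer` arithmetic draws power; that is the "outsourcing" Chang describes.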

The result is far fewer calculations, fewer calls to memory, and far less time to complete the process. Having leapfrogged these preprocessing steps, the remaining analysis proceeds to the digital computer layer with a considerable head start.

“Millions of calculations are circumvented and it all happens at the speed of light,” Wetzstein says.

Quick thinking

In speed and accuracy, the prototype rivals existing electronic-only processors programmed to perform the same calculations, and it does so at a fraction of the computational cost.

While their current prototype, arranged on a lab bench, isn’t exactly small, the researchers say their system can one day shrink to fit in a handheld video camera or an aerial drone.

In both simulations and real-world experiments, the team used the system to successfully identify airplanes, automobiles, cats, dogs, and more within natural image settings.

“Some future version of our system would be especially useful in rapid decision-making applications, like autonomous vehicles,” Wetzstein says.

In addition to shrinking the prototype, Wetzstein, Chang, and colleagues are now looking at ways to make the optical component do even more of the preprocessing. Eventually, their smaller, faster technology could replace the trunk-size computers that now help cars, drones, and other technologies learn to recognize the world around them.

The research appears in Scientific Reports. Additional coauthors are from Stanford and King Abdullah University of Science and Technology in Saudi Arabia.

The National Science Foundation, a Stanford Graduate Fellowship, a Sloan Research Fellowship and the KAUST Office of Sponsored Research funded the work.

Source: Stanford University