Researchers have decoded visual images from a dog’s brain.
The work offers a first look at how the canine mind reconstructs what it sees.
The results suggest that dogs are more attuned to actions in their environment than to who or what is performing them.
The researchers recorded fMRI neural data from two awake, unrestrained dogs as they watched videos in three 30-minute sessions, for a total of 90 minutes. They then used a machine-learning algorithm to analyze patterns in the neural data.
“We showed that we can monitor the activity in a dog’s brain while it is watching a video and, to at least a limited degree, reconstruct what it is looking at,” says Gregory Berns, professor of psychology at Emory University and corresponding author of the paper. “The fact that we are able to do that is remarkable.”
The project was inspired by recent advances in using machine learning and fMRI to decode visual stimuli from the human brain, work that has provided new insights into the nature of perception. Beyond humans, the technique has been applied to only a handful of other species, including some primates.
“While our work is based on just two dogs, it offers proof of concept that these methods work on canines,” says Erin Phillips, first author of the paper, who did the work as a research specialist in Berns’ Canine Cognitive Neuroscience Lab. “I hope this paper helps pave the way for other researchers to apply these methods on dogs, as well as on other species, so we can get more data and bigger insights into how the minds of different animals work.”
Berns and colleagues pioneered training techniques for getting dogs to walk into an fMRI scanner and hold completely still and unrestrained while their neural activity is measured. A decade ago, his team published the first fMRI brain images of a fully awake, unrestrained dog. That opened the door to what Berns calls The Dog Project—a series of experiments exploring the mind of the oldest domesticated species.
Over the years, his lab has published research into how the canine brain processes vision, words, smells, and rewards such as receiving praise or food.
Meanwhile, machine-learning algorithms kept improving, allowing scientists to decode some human brain-activity patterns. These algorithms “read minds” by detecting, within the brain data, patterns corresponding to the different objects or actions an individual sees while watching a video.
“I began to wonder, ‘Can we apply similar techniques to dogs?’” Berns recalls.
The first challenge was to come up with video content that a dog might find interesting enough to watch for an extended period. The research team affixed a video recorder to a gimbal and selfie stick, allowing them to shoot steady footage from a dog’s perspective, at about waist height to a human or a little lower.
They used the device to create a half-hour video of scenes relating to the lives of most dogs. Activities included dogs being petted by people and receiving treats from people. Scenes with dogs also showed them sniffing, playing, eating, or walking on a leash. Activity scenes showed cars, bikes, or a scooter going by on a road; a cat walking in a house; a deer crossing a path; people sitting; people hugging or kissing; people offering a rubber bone or a ball to the camera; and people eating.
The video data was segmented by time stamps into various classifiers, including object-based classifiers (such as dog, car, human, cat) and action-based classifiers (such as sniffing, playing, or eating).
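The segmentation step described above can be sketched in code. The sketch below is purely illustrative: the segment boundaries and labels are hypothetical examples, not the study’s actual annotations.

```python
# Hypothetical sketch of time-stamped video segments, each tagged with
# object-based and action-based classifier labels as described above.
# Boundaries and labels are illustrative, not the study's real data.
segments = [
    # (start_sec, end_sec, object_labels, action_labels)
    (0.0, 12.5, {"dog", "human"}, {"petting"}),
    (12.5, 20.0, {"dog"}, {"sniffing"}),
    (20.0, 31.0, {"car"}, set()),
]

def labels_at(t):
    """Return the (objects, actions) labels active at time t seconds."""
    for start, end, objects, actions in segments:
        if start <= t < end:
            return objects, actions
    return set(), set()  # no segment covers this time stamp
```

Because each brain scan also carries a time stamp, the same lookup can later attach labels to the neural data.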
Only two of the dogs that had been trained for fMRI experiments had the focus and temperament to lie perfectly still and watch each 30-minute video without a break, completing three sessions for a total of 90 minutes. These two “superstar” canines were Daisy, a mixed breed who may be part Boston terrier, and Bhubo, a mixed breed who may be part boxer.
“They didn’t even need treats,” says Phillips, who monitored the animals during the fMRI sessions and watched their eyes tracking on the video. “It was amusing because it’s serious science, and a lot of time and effort went into it, but it came down to these dogs watching videos of other dogs and humans acting kind of silly.”
Two humans also underwent the same experiment, watching the same 30-minute video in three separate sessions while lying in an fMRI scanner.
The brain data could be mapped onto the video classifiers using time stamps.
A machine-learning algorithm, a neural net known as Ivis, was applied to the data. A neural net learns by having a computer analyze training examples; in this case, it was trained to classify the content of the brain data.
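The decoding idea can be illustrated without the actual neural net. The study used Ivis, but the core step, mapping voxel activity patterns to video labels, can be sketched with a simple nearest-centroid classifier on synthetic data. Everything here (shapes, labels, noise level) is an assumption for illustration, not the study’s pipeline.

```python
# Illustrative stand-in for the decoding step: classify synthetic
# "brain activity" patterns into action labels. The real study used
# the Ivis neural net; this nearest-centroid sketch only demonstrates
# the pattern-to-label mapping idea. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50
classes = ["sniffing", "playing", "eating"]

# Each action class gets a distinct mean voxel pattern, plus noise.
centers = {c: rng.normal(size=n_voxels) for c in classes}
X_train = np.vstack([centers[c] + 0.1 * rng.normal(size=(20, n_voxels))
                     for c in classes])
y_train = np.repeat(classes, 20)

# "Training": compute a mean activity pattern (centroid) per class.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in classes}

def decode(pattern):
    """Predict the label whose centroid is closest to the pattern."""
    return min(classes, key=lambda c: np.linalg.norm(pattern - centroids[c]))

# Evaluate on held-out synthetic scans.
X_test = np.vstack([centers[c] + 0.1 * rng.normal(size=(5, n_voxels))
                    for c in classes])
y_test = np.repeat(classes, 5)
accuracy = float(np.mean([decode(x) == y for x, y in zip(X_test, y_test)]))
```

On well-separated synthetic patterns like these, the stand-in decoder scores near 100%; real fMRI data is far noisier, which is why the dogs’ action decoding landed in the 75%–88% range.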
The results for the two human subjects found that the model developed using the neural net showed 99% accuracy in mapping the brain data onto both the object- and action-based classifiers.
In decoding the video content from the dogs, the model did not work for the object classifiers. It was 75% to 88% accurate, however, at decoding the action classifiers.
The results suggest major differences in how the brains of humans and dogs work.
“We humans are very object oriented,” Berns says. “There are 10 times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects. Dogs appear to be less concerned with who or what they are seeing and more concerned with the action itself.”
Dogs and humans also have major differences in their visual systems, Berns notes. Dogs see only in shades of blue and yellow, but have a slightly higher density of visual receptors designed to detect motion.
“It makes perfect sense that dogs’ brains are going to be highly attuned to actions first and foremost,” he says. “Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount.”
For Phillips, understanding how different animals perceive the world is important to her current field research into how predator reintroduction in Mozambique may impact ecosystems. “Historically, there hasn’t been much overlap in computer science and ecology,” she says. “But machine learning is a growing field that is starting to find broader applications, including in ecology.”
Additional authors of the paper include Daniel Dilks, Emory associate professor of psychology, and Kirsten Gillette, who worked on the project as an Emory undergraduate neuroscience and behavioral biology major. Gillette has since graduated and is now in a postbaccalaureate program at the University of North Carolina.
Daisy is owned by Rebecca Beasley and Bhubo is owned by Ashwin Sakhardande. The human experiments in the study were supported by a grant from the National Eye Institute.
The research appears in the Journal of Visualized Experiments.
Source: Emory University