Robot learns to feed folks dinner

Researchers have developed a robotic system that can feed people who need someone to help them eat. Here, a volunteer demonstrates how the system works. (Credit: Eric Johnson/U. Washington)

A new robotic system can help make eating easier for people who need assistance, according to new research.

After identifying different foods on a plate, the robot can strategize how to use a fork to pick up and deliver the desired bite to a person’s mouth.

About 1 million adults in the United States need someone to help them eat, a time-consuming and often awkward task, one largely done out of necessity rather than choice.

“Being dependent on a caregiver to feed every bite every day takes away a person’s sense of independence,” says corresponding author Siddhartha Srinivasa, professor in the University of Washington’s Paul G. Allen School of Computer Science & Engineering. “Our goal with this project is to give people a bit more control over their lives.”

The robot adjusts how much force it uses to skewer a piece of food based on what kind of food it is. (Credit: Eric Johnson/U. Washington)

The idea was to develop an autonomous feeding system that would attach to people’s wheelchairs and feed them whatever they wanted to eat.

“When we started the project we realized: There are so many ways that people can eat a piece of food depending on its size, shape, or consistency. How do we start?” says coauthor Tapomayukh Bhattacharjee, a postdoctoral research associate in the Allen School. “So we set up an experiment to see how humans eat common foods like grapes and carrots.”

Carrots and bananas

The researchers arranged plates with about a dozen different kinds of food, ranging in consistency from hard carrots to soft bananas. The plates also included foods like tomatoes and grapes, which have a tough skin and soft insides. Then they gave volunteers a fork and asked them to pick up different pieces of food and feed them to a mannequin. The fork contained a sensor to measure how much force people used when they picked up food.

The volunteers used various strategies to pick up food with different consistencies. For example, people skewered soft items like bananas at an angle to keep them from slipping off the fork. For items like carrots and grapes, the volunteers tended to use wiggling motions to increase the force and spear each bite.

While these experiments used a fork that contained a force sensor, the robot now holds a 3D-printed fork with a gel-based tactile force sensor: it measures force by how much the gel is deformed. (Credit: U. Washington)
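The gel-based sensing idea can be sketched as a simple deformation-to-force mapping. This is a hypothetical illustration, not the team's actual calibration: the linear (Hooke-like) model and the stiffness constant below are assumptions made for clarity.

```python
# Hypothetical sketch: estimating skewering force from gel deformation.
# Assumes a roughly linear (Hooke-like) relationship; the stiffness
# constant is made up for illustration, not a real calibration value.

GEL_STIFFNESS_N_PER_MM = 2.5  # hypothetical calibration constant

def estimate_force(deformation_mm: float) -> float:
    """Map measured gel deformation (mm) to an estimated force (N)."""
    if deformation_mm < 0:
        raise ValueError("deformation cannot be negative")
    return GEL_STIFFNESS_N_PER_MM * deformation_mm

print(estimate_force(1.2))  # roughly 3.0 N for 1.2 mm of deformation
```

In practice such a mapping would be fit from calibration data rather than assumed linear, but the principle is the same: the sensor reports deformation, and the controller converts it to force.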

“People seemed to use different strategies not just based on the size and shape of the food but also how hard or soft it is. But do we actually need to do that?” Bhattacharjee says. “We decided to do an experiment with the robot where we had it skewer food until the fork reached a certain depth inside, regardless of the type of food.”

The robot used the same force-and-skewering strategy to try to pick up all the pieces of food, regardless of their consistency. It was able to pick up hard foods, but it struggled with soft foods and those with tough skins and soft insides. So robots, like humans, need to adjust the force and angle they use to pick up different kinds of food.
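The finding above, that pickup parameters should vary by food class rather than follow one fixed motion, can be sketched as a simple lookup from food type to strategy. All class names and numbers here are illustrative assumptions, not the study's actual parameters.

```python
# Hypothetical sketch: choose a skewer angle, force, and wiggle motion
# per food class instead of one fixed strategy. The specific numbers
# are illustrative, not measured values from the study.

from dataclasses import dataclass

@dataclass
class SkewerStrategy:
    angle_deg: float   # fork tilt relative to vertical
    force_n: float     # target skewering force
    wiggle: bool       # small oscillation to help the tines bite

STRATEGIES = {
    # Soft items: angled skewer so the piece doesn't slip off.
    "banana": SkewerStrategy(angle_deg=45.0, force_n=1.0, wiggle=False),
    # Hard or tough-skinned items: vertical skewer plus wiggling.
    "carrot": SkewerStrategy(angle_deg=0.0, force_n=4.0, wiggle=True),
    "grape":  SkewerStrategy(angle_deg=0.0, force_n=3.0, wiggle=True),
}

def choose_strategy(food: str) -> SkewerStrategy:
    # Fall back to a neutral vertical skewer for unknown foods.
    return STRATEGIES.get(food, SkewerStrategy(0.0, 2.0, False))

print(choose_strategy("banana"))
```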

Empowering caregivers

The team also notes that the acts of picking up a piece of food and feeding it to someone are not independent of each other. Volunteers often would specifically orient a piece of food on the fork to make it easy to eat.

“You can pick up a carrot stick by skewering it in the center of the stick, but it will be difficult for a person to eat,” Bhattacharjee says. “On the other hand, if you pick it up on one of the ends and then tilt the carrot toward someone’s mouth, it’s easier to take a bite.”

To design a skewering and feeding strategy that changes based on the food item, the researchers combined two different algorithms. First they used an object-detection algorithm called RetinaNet, which scans the plate, identifies the types of food on it, and places a frame around each item.

The object-detection algorithm, RetinaNet, scans the plate, identifies the types of food on it, and places a frame around each item. (Credit: Eric Johnson/U. Washington)

Then they developed SPNet, an algorithm that examines the type of food in a specific frame and tells the robot the best way to pick up the food. For example, SPNet tells the robot to skewer a strawberry or a slice of banana in the middle, and spear carrots at one of the two ends.
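The two-stage pipeline described above, detection followed by a per-item skewering decision, can be sketched with stubs standing in for the trained networks. The boxes, labels, and skewer-point rules below are illustrative assumptions; in the real system, RetinaNet and SPNet are learned models, not hand-written rules.

```python
# Hypothetical sketch of the two-stage pipeline: a detector (RetinaNet
# in the paper) yields labeled bounding boxes, then a per-item policy
# (SPNet in the paper) picks a skewer point. Both stages are stubbed
# with hand-written rules here purely for illustration.

from typing import List, Tuple

Box = Tuple[str, float, float, float, float]  # (label, x0, y0, x1, y1)

def detect_food(plate_image) -> List[Box]:
    """Stand-in for RetinaNet: return a labeled box for each item."""
    # A real system would run a trained detector on the image.
    return [("banana", 10, 10, 60, 30), ("carrot", 70, 40, 150, 55)]

def skewer_point(box: Box) -> Tuple[float, float]:
    """Stand-in for SPNet: map a detected item to a skewer location."""
    label, x0, y0, x1, y1 = box
    cy = (y0 + y1) / 2
    if label == "carrot":
        # Spear long items near one end so they are easier to bite.
        return (x0 + 0.1 * (x1 - x0), cy)
    # Skewer round or soft items (strawberry, banana slice) in the middle.
    return ((x0 + x1) / 2, cy)

for box in detect_food(None):
    print(box[0], skewer_point(box))
```

The design point is the separation of concerns: the detector only answers "what and where," while the skewering policy answers "how," so either stage can be retrained or replaced independently.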

The team had the robot pick up pieces of food and feed them to volunteers using either SPNet or a uniform strategy that skewered the center of each food item regardless of type. SPNet's food-specific strategies matched or outperformed the uniform approach for every food tested.

Eating independently

“Many engineering challenges are not picky about their solutions, but this research is very intimately connected with people,” Srinivasa says. “If we don’t take into account how easy it is for a person to take a bite, then people might not be able to use our system. There’s a universe of types of food out there, so our biggest challenge is to develop strategies that can deal with all of them.”

The team is currently working with the Taskar Center for Accessible Technology to get feedback from caregivers and patients in assisted living facilities on how to improve the system to match people’s needs.

“Ultimately our goal is for our robot to help people have their lunch or dinner on their own,” Srinivasa says. “But the point is not to replace caregivers: We want to empower them. With a robot to help, the caregiver can set up the plate, and then do something else while the person eats.”

The team published its results in a series of papers. One of the papers appears in IEEE Robotics and Automation Letters. The researchers will present the other paper at the ACM/IEEE International Conference on Human-Robot Interaction in South Korea. Additional coauthors are from the University of Washington and Technische Universität München in Germany.

The National Institutes of Health, the National Science Foundation, the Office of Naval Research, the Robotics Collaborative Technology Alliance, Amazon, and Honda funded the work.

Source: University of Washington