Like toddlers, robots can use a little help as they learn to function in the physical world. A new program uses gentle physical feedback to guide machines toward the most helpful, human-like ways to work side by side with people.
“Historically, the role of robots was to take over the mundane tasks we don’t want to do: manufacturing, assembly lines, welding, painting,” says Marcia O’Malley, a professor of mechanical engineering, electrical and computer engineering, and computer science at Rice University.
“As we become more willing to share personal information with technology, like the way my watch records how many steps I take, that technology moves into embodied hardware as well.
“Robots are already in our homes vacuuming or controlling our thermostats or mowing the lawn,” she says. “There are all sorts of ways technology permeates our lives. I already talk to Alexa in the kitchen, so why not also have machines we can physically collaborate with? A lot of our work is about making human-robot interactions safe.”
Robots adapted to respond to physical human-robot interaction (pHRI) traditionally treat such interactions as disturbances and resume their original behaviors when the interactions end. For the new study, researchers enhanced pHRI with a method that allows humans to physically adjust a robot’s trajectory in real time.
At the heart of the program is the concept of impedance control—literally, a way to manage what happens when push comes to shove. A robot under impedance control yields to physical input, deviating from its programmed trajectory while the push lasts, but returns to that original trajectory as soon as the input ends.
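The "snap back" behavior of plain impedance control can be sketched in a few lines. This is a minimal one-dimensional illustration, not the controller from the study: the robot is modeled as a point mass pulled toward its desired position by spring-damper forces, and the gains, mass, and human-force values are arbitrary choices for demonstration.

```python
# Minimal 1-DoF sketch of impedance control (illustrative gains, not
# from the study): the robot behaves like a spring-damper pulling
# toward its desired position, so a human push deflects it only while
# the push is applied.

def impedance_step(x, v, x_des, f_human, k=50.0, b=10.0, m=1.0, dt=0.01):
    """One Euler integration step of a spring-damper 'robot'."""
    f = k * (x_des - x) - b * v + f_human  # controller force + human force
    a = f / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# Push the robot off its desired position for 0.5 s, then release.
x, v, peak = 0.0, 0.0, 0.0
for t in range(200):
    f = 5.0 if t < 50 else 0.0   # human pushes only at the start
    x, v = impedance_step(x, v, x_des=0.0, f_human=f)
    peak = max(peak, x)

print(peak, x)  # large deflection during the push, near zero after
```

Once the push ends, the robot settles back to its desired position, which is exactly why a plain impedance controller "forgets" the interaction.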
As reported in IEEE Xplore, the new algorithm builds on that concept by allowing the robot to adjust its path beyond the moment of input and calculate a new route to its goal, something like a GPS system that recalculates the route to its destination when a driver misses a turn.
Dylan Losey, a graduate student who works with O’Malley, spent much of last summer in the lab of Anca Dragan, an assistant professor of electrical engineering and computer sciences at the University of California, Berkeley, testing the theory.
He and other students trained a robot arm and hand to deliver a coffee cup across a desktop, and then used enhanced pHRI to keep it away from a computer keyboard and low enough so that the cup wouldn’t break if dropped. (A separate paper on the experiments appears in the Proceedings of Machine Learning Research.)
The goal was to deform the robot’s programmed trajectory through physical interaction. “Here the robot has a plan, or desired trajectory, which describes how the robot thinks it should perform the task,” Losey writes in an essay about the Berkeley experiments. “We introduced a real-time algorithm that modified, or deformed, the robot’s future desired trajectory.”
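The deformation idea can be sketched as propagating a human correction into the robot's future waypoints rather than discarding it. The linearly decaying blend below is an illustrative stand-in, not the smoothing method used in the paper, and the waypoint values are made up for demonstration.

```python
# Sketch of trajectory deformation (illustrative weighting, not the
# paper's method): a correction applied at one waypoint is blended
# into the following waypoints with decaying weight, so the remaining
# path shifts smoothly instead of snapping back.

def deform_trajectory(waypoints, t_idx, correction, horizon=5):
    """Shift waypoint t_idx by `correction` and fade the shift over
    the next `horizon` waypoints."""
    new = list(waypoints)
    for i in range(horizon + 1):
        j = t_idx + i
        if j >= len(new):
            break
        weight = 1.0 - i / (horizon + 1)  # 1.0 at the contact point, fading out
        new[j] = new[j] + weight * correction
    return new

path = [0.0] * 10                       # nominal path (e.g., cup height)
pushed = deform_trajectory(path, t_idx=3, correction=0.2)
print(pushed)                           # shift is largest at index 3, then fades
```

The key contrast with the impedance-only behavior is that the deformed waypoints persist after the human lets go.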
Program makes robots better listeners
In impedance mode, the robot consistently returned to its original trajectory after an interaction. In learning mode, the feedback altered not only the robot’s state at the time of interaction but also how it proceeded to the goal, Losey says.
If the user directed it to keep the cup from passing over the keyboard, for instance, it would continue to do so in the future. “By our replanning the robot’s desired trajectory after each new observation, the robot was able to generate behavior that matches the human’s preference,” he says.
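The learning-mode behavior can be sketched as treating each push as evidence about a preference parameter and replanning around the updated estimate. Both functions below are hypothetical simplifications: the flat-path "planner" and the running-average update merely stand in for the study's inference and replanning machinery.

```python
# Sketch of "learning mode" (hypothetical planner and update rule,
# not the paper's algorithm): each human correction updates an
# estimate of the preferred cup height, and the robot replans its
# whole trajectory around that estimate.

def replan(theta, n_steps=10):
    """Toy planner: a flat path carried at the preferred height."""
    return [theta] * n_steps

def update_preference(theta, observed_height, lr=0.5):
    """Nudge the height estimate toward where the human pushed the cup."""
    return theta + lr * (observed_height - theta)

theta = 0.5                        # initial guess: carry the cup at 0.5 m
for push in [0.2, 0.25, 0.2]:      # human repeatedly pushes the cup lower
    theta = update_preference(theta, push)
    path = replan(theta)

print(theta)  # after a few corrections, the robot plans low passes on its own
```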
Further tests employed 10 students who used the O’Malley lab’s rehabilitative force-feedback robot, the OpenWrist, to manipulate a cursor around obstacles on a computer screen and land on a blue dot. The tests first used standard impedance control and then impedance control with physically interactive trajectory deformation, an analog of pHRI that allowed the students to train the device to learn new trajectories.
Trials with trajectory deformation were physically easier and required significantly less interaction to achieve the goal. The experiments demonstrated that interactions can program otherwise-autonomous robots that have several degrees of freedom, in this case flexing an arm and rotating a wrist.
One current limitation is that pHRI cannot yet modify the amount of time it takes a robot to perform a task, but that’s on the agenda.
“The paradigm shift in this work is that instead of treating a human as a random disturbance, the robot should treat the human as a rational being who has a reason to interact and is trying to convey something important,” Losey says. “The robot shouldn’t just try to get out of the way. It should learn what’s going on and do its job better.”
The National Science Foundation supported the work.
Source: Rice University