    Watch: Robot leaps and lands like a squirrel

    (Credit: Getty Images)

    Based on studies of leaping squirrels, researchers have designed a robot that can stick a landing on a branch.

    Engineers have designed robots that crawl, swim, fly, and even slither like a snake, but no robot can hold a candle to a squirrel, which can parkour through a thicket of branches, leap across perilous gaps, and execute pinpoint landings on the flimsiest of branches.

    University of California, Berkeley, biologists and engineers are trying to remedy that situation.

    Their new work, reported in the journal Science Robotics, is a big step toward more agile robots: ones that can leap among the trusses and girders of buildings under construction, or monitor the environment in tangled forests or tree canopies.

    Next level robots

    “The robots we have now are OK, but how do you take it to the next level? How do you get robots to navigate a challenging environment in a disaster where you have pipes and beams and wires? Squirrels could do that, no problem. Robots can’t do that,” says Robert Full, one of the paper’s senior authors and a professor of integrative biology at UC Berkeley.

    “Squirrels are nature’s best athletes,” Full adds. “The way that they can maneuver and escape is unbelievable. The idea is to try to define the control strategies that give the animals a wide range of behavioral options to perform extraordinary feats and use that information to build more agile robots.”

    Justin Yim, a former UC Berkeley graduate student and co-first author of the paper, translated what Full and his biology students discovered in squirrels to Salto, a one-legged robot developed at UC Berkeley in 2016 that could already hop and parkour and stick a landing, but only on flat ground. The challenge was to stick the landing while hitting a specific point—a narrow rod.

    “If you think about trying to jump to a point—maybe you’re doing something like playing hopscotch and you want to land your feet in a certain spot—you want to stick that landing and not take a step,” explains Yim, now an assistant professor of mechanical science and engineering at the University of Illinois Urbana-Champaign (UIUC).

    “If you feel like you’re going to fall over forward, then you might pinwheel your arms, but you’ll also probably stand up straight in order to keep yourself from falling over. If it feels like you’re falling backward and you might have to sit down because you’re not going to be able to quite make it, you might pinwheel your arms backward, but you’re likely also to crouch down as you do this. That is the same behavior that we programmed into the robot. If it’s going to be swinging under, it should crouch. If it’s going to swing over, it should extend out and stand tall.”
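
    As a rough illustration of the crouch-or-extend strategy Yim describes, the sketch below (written for this article, not taken from the Salto codebase) chooses a target leg length from an estimate of whether the body will swing under or over the perch; the gain and leg lengths are made-up placeholder values.

        # A minimal sketch of the crouch/extend landing strategy described above.
        # This is not the actual Salto controller; the gain and leg lengths are
        # illustrative assumptions.

        def choose_leg_extension(angular_velocity: float,
                                 lean_angle: float,
                                 min_leg: float = 0.10,
                                 max_leg: float = 0.25) -> float:
            """Return a target leg length in meters.

            angular_velocity: rotation rate about the perch (rad/s), positive = forward.
            lean_angle: lean from the balance point (rad), positive = forward.
            """
            # A simple score predicting whether the robot will swing over the top
            # (overshoot) or fall back under the perch (undershoot).
            overshoot_score = angular_velocity + 2.0 * lean_angle  # hypothetical gain

            if overshoot_score > 0.0:
                # Likely to swing over: extend the leg ("stand tall") to raise the
                # body's inertia about the perch and slow the rotation.
                return max_leg
            # Likely to swing under: crouch so the body can swing back up to balance.
            return min_leg

        if __name__ == "__main__":
            for omega, lean in [(0.5, 0.1), (-0.4, -0.05), (0.1, -0.2)]:
                leg = choose_leg_extension(omega, lean)
                print(f"omega={omega:+.2f} rad/s, lean={lean:+.2f} rad -> leg {leg:.2f} m")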

    Using these strategies, Yim is embarking on a NASA-funded project to design a small, one-legged robot that could explore Enceladus, a moon of Saturn, where the gravity is one-eightieth that of Earth, and a single hop could carry the robot the length of a football field.

    Enter Salto

    The new robot design is based on a biomechanical analysis of squirrel landings detailed in a paper accepted for publication in the Journal of Experimental Biology. Full is senior author and former graduate student Sebastian Lee is first author of that paper.

    Salto, short for Saltatorial Agile Locomotion on Terrain Obstacles, originated a decade ago in the lab of Ronald Fearing, now a professor in the Graduate School in UC Berkeley’s electrical engineering and computer sciences department (EECS). Much of its hopping, parkouring, and landing ability is a result of a long-standing interdisciplinary collaboration between biology students in Full’s Polypedal Lab and engineering students in Fearing’s Biomimetic Millisystems Lab.

    During the five years Yim was a UC Berkeley graduate student—he got his PhD in EECS in 2020, with Fearing as his adviser—he met with Full’s group every other week to learn from their biology experiments. Yim was trying to leverage Salto’s ability to land upright on a flat spot, even outdoors, to get it to hit a specific target, like a branch. Salto already had a motorized flywheel, or reaction wheel, to help it balance, much the way humans wheel their arms to restore balance. But that wasn’t sufficient for it to stick a direct landing on a precarious perch. He decided to try reversing the motors that launch Salto and use them to brake when landing.

    Suspecting that squirrels did the same with their legs when landing, the biology and robotics teams worked in parallel to confirm this and show that it would help Salto stick a landing. Full’s team instrumented a branch with sensors that measured the force perpendicular to the branch when a squirrel landed, as well as the torque, or turning force, that the squirrel applied to the branch with its feet.

    The research team found, based on high-speed video and sensor measurements, that when squirrels land after a heroic leap, they basically do a handstand on the branch, directing the force of landing through their shoulder joint so as to stress the joint as little as possible. Using pads on their feet, they then grasp the branch and twist to overcome whatever excess torque threatens to send them over or under the branch.

    “Almost all of the energy—86% of the kinetic energy—was absorbed by the front legs,” Full says. “They’re really doing front handstands onto the branch, and then the rest of it follows. Then their feet generate a pull-up torque, if they’re going under; if they are going to go over the top—they’re overshooting, potentially—they generate a braking torque.”

    Perhaps more important to balancing, however, they found that squirrels also adjust the braking force applied to the branch when landing to compensate for over- or undershooting.

    “If you’re going to undershoot, what you can do is generate less leg braking force; your leg will collapse some, and then your inertia is going to be less, and that will swing you back up to correct,” Full says.

    “Whereas if you are overshooting, you want to do the opposite—you want to have your legs generate more braking force so that you have a bigger inertia and it slows you down so that you can have a balanced landing.”
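
    A back-of-the-envelope calculation makes Full’s point concrete. Treating the landing body as a point mass swinging about the perch, with angular momentum roughly conserved just after touchdown, the sketch below (using assumed numbers, not measured squirrel data) shows how letting the leg collapse reduces rotational inertia and speeds the swing back toward balance, while bracing the leg increases inertia and slows it.

        # Rough numerical illustration of the braking-force idea described above.
        # Point-mass approximation with assumed values; not data from the study.

        m = 0.5             # assumed body mass, kg
        omega0 = 2.0        # assumed rotation rate about the perch at touchdown, rad/s
        r_braced = 0.12     # center-of-mass distance from the perch with the leg braced, m
        r_collapsed = 0.06  # distance with the leg allowed to collapse, m

        # Angular momentum about the perch just after touchdown (taken as conserved
        # over the brief correction).
        L_ang = m * r_braced**2 * omega0

        for label, r in [("braced leg (more braking force)", r_braced),
                         ("collapsed leg (less braking force)", r_collapsed)]:
            inertia = m * r**2          # point-mass moment of inertia about the perch
            omega = L_ang / inertia     # conserved angular momentum -> new swing rate
            print(f"{label:35s} I = {inertia:.4f} kg*m^2, omega = {omega:.2f} rad/s")

        # Undershooting: less braking force lets the leg collapse, shrinking the
        # inertia so the swing speeds back up toward balance. Overshooting: more
        # braking force keeps the body farther from the perch, so the larger
        # inertia slows the rotation before it pitches over the top.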

    Yim and UC Berkeley undergraduate Eric Wang redesigned Salto to incorporate adjustable leg forces, supplementing the torque of the reaction wheel. With these modifications, Salto was able to jump onto a branch and balance a handful of times, despite the fact that it had no ability to grip with its feet, Yim says.

    “We decided to take the most difficult path and give the robot no ability to apply any torque on the branch with its feet. We specifically designed a passive gripper that even had very low friction to minimize that torque,” Yim says.

    “In future work, I think it would be interesting to explore other more capable grippers that could drastically expand the robot’s ability to control the torque it applies to the branch and expand its ability to land. Maybe not just on branches, but on complex flat ground, too.”

    One-legged leaper

    In parallel, Full is now investigating the importance of the torque applied by the squirrel’s foot upon landing. Unlike monkeys, squirrels do not have a usable thumb that allows a prehensile grasp, so they must palm a branch, he says. But that may be an advantage.

    “If you’re a squirrel being chased by a predator, like a hawk or another squirrel, you want to have a sufficiently stable grasp, where you can parkour off a branch quickly, but not too firm a grasp,” he says. “They don’t have to worry about letting go, they just bounce off.”

    One-legged robots may sound impractical, given the potential for falling over when standing still. But Yim says that for jumping really high, one leg is the way to go.

    “One leg is the best number for jumping; you can put the most power into that one leg if you don’t distribute that power among multiple different devices. And the drawbacks you get from having only one leg lessen as you jump higher,” Yim says.

    “When you jump many, many times the height of your legs, there’s only one gait, and that is the gait in which every leg touches the ground at the same time and every leg leaves the ground at approximately the same time. So at that point, having multiple legs is kind of like having one leg. You might as well just use the one.”

    Funding for the research came from the US Army Research Office and the National Institutes of Health.

    Source: UC Berkeley

    Watch: New approach gets robot to clear the table

    (Credit: Murtaza Dalal/Carnegie Mellon)

    A new approach enables robots to manipulate new objects in a variety of environments.

    Clearing the dinner table is a task easy enough for a child to master, but it’s a major challenge for robots.

    Robots are great at doing repetitive tasks but struggle when they must do something new or interact with the disorder and mess of the real world. Such tasks become especially challenging when they have many steps.

    “You don’t want to reprogram the robot for every new task,” says Murtaza Dalal, a PhD student in the School of Computer Science’s (SCS) Robotics Institute at Carnegie Mellon University. “You want to just tell the robot what to do, and it does it. That’s necessary if we want robots to be useful in our daily lives.”

    To enable robots to undertake a wide variety of tasks they haven’t previously encountered, Dalal and other researchers at SCS and Apple Inc. have developed an approach to robotic manipulation called ManipGen that has proven highly successful for these multistep tasks, known as long-horizon tasks.

    The key idea, Dalal explains, is to divide the task of planning how a robotic arm needs to move into two parts.

    Imagine opening a door: the first step is to reach the door handle; the next is to turn it. To solve the first problem, the researchers use well-established data-driven methods for computer vision and motion planning to locate the object and move a robotic arm’s manipulator near it. This simplifies the second part of the process, limiting it to interacting with the nearby object, in this case the door handle.

    “At that point, the robot no longer cares where the object is. The robot only cares about how to grasp it,” Dalal says.
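
    The sketch below illustrates this two-part hand-off in schematic form. It is not the released ManipGen code; every class and function here is a hypothetical placeholder standing in for the perception, motion-planning, and learned local-policy components the researchers describe.

        # Schematic sketch of the two-stage decomposition described above.
        # All names are placeholders, not ManipGen's actual interfaces.

        from dataclasses import dataclass
        from typing import Sequence


        @dataclass
        class Pose:
            position: tuple          # (x, y, z) in meters
            orientation: tuple       # quaternion (x, y, z, w)


        def locate_object(object_name: str) -> Pose:
            """Stand-in for the vision step: estimate a pre-grasp pose near the object."""
            return Pose((0.4, 0.0, 0.3), (0.0, 0.0, 0.0, 1.0))


        def plan_motion(target: Pose) -> Sequence[Pose]:
            """Stand-in for collision-free motion planning to the pre-grasp pose."""
            return [target]


        def local_skill(object_name: str, skill: str) -> None:
            """Stand-in for the learned local policy (grasp, turn, open, ...).
            Because the arm is already near the object, this policy only has to
            reason about how to manipulate it, not where it is in the room."""
            print(f"executing local '{skill}' on {object_name}")


        def open_door() -> None:
            # Stage 1: perceive the handle and move the manipulator near it.
            pre_grasp = locate_object("door handle")
            for waypoint in plan_motion(pre_grasp):
                print("moving to", waypoint.position)
            # Stage 2: hand off to the local manipulation skills.
            local_skill("door handle", "grasp")
            local_skill("door handle", "turn")


        if __name__ == "__main__":
            open_door()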

    Robots are typically trained to perform a task by using massive amounts of data derived from demonstrations of the task. That data can be manually collected, with humans controlling the robot, but the process is expensive and time consuming. An alternative method is to use simulation to rapidly generate data. In this case, the simulation would place the robot in a variety of virtual scenes, enabling it to learn how to grasp objects of various shapes and sizes, or to open and shut drawers or doors.

    Dalal says the research team used this simulation method to generate data and train neural networks to learn how to pick up and place thousands of objects and open and close thousands of drawers and doors, employing trial-and-error reinforcement learning techniques. The team developed specific training and hardware solutions for transferring these networks trained in simulation to the real world. They found that these skills could be recombined as necessary to enable the robot to interact with many different objects in the real world, including those it hadn’t previously encountered.
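
    In schematic terms, the simulation-driven training the team describes might look like the loop below. This is a stand-in sketch with invented interfaces and a dummy reward, meant only to show the shape of the process: sample a randomized virtual scene, roll out the current policy, and improve it by trial and error.

        # Stand-in sketch of domain-randomized, trial-and-error training in
        # simulation. Interfaces and the reward are invented for illustration.

        import random


        def sample_scene() -> dict:
            """Randomize the virtual scene: object shape/size and articulation type."""
            return {"object": random.choice(["mug", "box", "bottle"]),
                    "scale": random.uniform(0.5, 1.5),
                    "articulation": random.choice(["drawer", "door", None])}


        def rollout(policy_params: list, scene: dict) -> float:
            """Placeholder rollout: simulated attempt at the skill, returning a reward."""
            return random.random()   # stands in for simulated success or shaped reward


        def train(iterations: int = 3, rollouts_per_iter: int = 4) -> list:
            policy_params = [0.0]
            for it in range(iterations):
                rewards = [rollout(policy_params, sample_scene())
                           for _ in range(rollouts_per_iter)]
                mean_reward = sum(rewards) / len(rewards)
                # Trial-and-error update (stand-in for the RL algorithm actually used).
                policy_params[0] += 0.1 * mean_reward
                print(f"iteration {it}: mean reward {mean_reward:.2f}")
            return policy_params


        if __name__ == "__main__":
            train()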

    “We don’t need to collect any new data,” Dalal says of deploying the robot in the real world. “We just tell the robot what to do in English and it does it.”

    The team implements the two-part process by using foundation models such as GPT-4o that can look at the robot’s environment and decompose the task—like cleaning up the table—into a sequence of skills for the robot to execute. Then the robot executes those skills, first estimating positions near objects using computer vision, then going there using motion planning, and finally manipulating the object using a depth camera to measure distances.
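
    A compressed sketch of that orchestration loop appears below. The prompt format, skill dictionary, and function names are assumptions made for illustration; only the overall flow (a vision-language model decomposing the instruction into a skill sequence, each skill executed as pose estimation, then motion planning, then a local policy) comes from the description above.

        # Hedged sketch of the high-level orchestration described above.
        # Not ManipGen's actual interfaces; names and formats are illustrative.

        def decompose_task(instruction: str, scene_image: bytes) -> list:
            """Stand-in for the foundation-model call (a GPT-4o style VLM) that turns
            the instruction and scene into an ordered list of skills."""
            return [{"skill": "pick", "object": "plate"},
                    {"skill": "place", "object": "plate", "target": "sink"}]


        def execute_skill(step: dict) -> None:
            # 1. Estimate a pose near the relevant object with computer vision.
            print("estimating pose near", step.get("object"))
            # 2. Motion-plan the arm to that pre-manipulation pose and move there.
            print("planning and executing the reach")
            # 3. Run the learned local policy, using depth sensing to close the gap.
            print("running local", step["skill"], "policy")


        def run(instruction: str) -> None:
            for step in decompose_task(instruction, scene_image=b""):
                execute_skill(step)


        if __name__ == "__main__":
            run("clear the dinner table")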

    The researchers have applied their method to challenging multistage tasks such as opening drawers and placing objects in them or rearranging objects on a shelf. They have demonstrated that this approach works with robotics tasks that involve up to eight steps, “but I think we could go even further,” Dalal says.

    Likewise, gathering data through demonstrations could enable this approach to be extended to objects that can’t currently be simulated, such as soft and flexible objects.

    “There’s so much more to explore with ManipGen. The foundation we’ve built through this project opens up exciting possibilities for future advancements in robotic manipulation and brings us closer to the goal of developing generalist robots,” says Min Liu, a master’s student in the machine learning department and co-lead on the project.

    “ManipGen really demonstrates the strength of simulation-to-reality transfer as a paradigm for producing robots that can generalize broadly, something we have seen in locomotion, but until now, not for general manipulation,” says Deepak Pathak, an assistant professor of computer science in the Robotics Institute.

    ManipGen builds on research to enable robots to solve longer and more complicated tasks, says Ruslan Salakhutdinov, the principal investigator on the project and professor of computer science in the machine learning department.

    “In this iteration,” he says, “we finally show the exciting culmination of years of work: an agent that can generalize and solve an enormous array of tasks in the real world.”

    Dalal and Liu outline ManipGen in a newly released research paper.

    Source: Carnegie Mellon University