Watch: Robot takes inspiration from water bug ‘fans’

(Credit: Georgia Tech)

A new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second.

The researchers then created a similar fan structure and used it to propel and maneuver an insect-sized robot.

The discovery offers new possibilities for designing small machines that could operate during floods or other challenging situations.

“Scientists thought the bugs used their muscles to control the fans, so we were surprised to learn that surface tension actually powers them,” says Saad Bhamla, one of the study’s authors and associate professor in Georgia Tech’s School of Chemical and Biomolecular Engineering.

Instead of relying on their muscles, the insects, each about the size of a grain of rice, use the water’s surface tension and elastic forces to morph the ribbon-shaped fans on the ends of their legs, slicing the water surface to change direction.

A gif of a fan-like propeller spreading out while in water.
The fan-like propeller. (Credit: Victor Ortega-Jimenez)

Once they understood the mechanism, the team built a self-deployable, one-milligram fan and installed it into an insect-sized robot capable of accelerating, braking, and maneuvering right and left.

The study appears in the journal Science.

Because contact with water triggers a mechanical response (opening the bug’s fans), the researchers suggested that the findings open the door to designing more energy-efficient and adaptive microrobots for use in rivers, wetlands, or flooded urban areas.

The research team, which included collaborators at the University of California, Berkeley, and South Korea’s Ajou University, studied the millimeter-sized Rhagovelia. The water bug glides across fast-moving streams on its fan-like propellers, structures the team found passively open and close 10 times faster than the blink of an eye.

The structures allow the bugs to execute sharp turns in just 50 milliseconds, rivaling the rapid aerial maneuvers of flies. In addition, the insects can produce wakes on the surface of the water that resemble the vortexes produced by flying wings.
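To put those figures in absolute terms, here is a quick back-of-the-envelope conversion. The ~2 mm body length is an illustrative assumption based on the grain-of-rice comparison above, not a measured value from the study:

```python
# Convert the reported relative speed of Rhagovelia into absolute units.
# The ~2 mm body length below is an assumption, not from the study.
BODY_LENGTH_M = 0.002           # ~2 mm, roughly a grain of rice
SPEED_BODY_LENGTHS_PER_S = 120  # reported top speed

speed_m_per_s = SPEED_BODY_LENGTHS_PER_S * BODY_LENGTH_M
print(f"{speed_m_per_s:.2f} m/s")   # 0.24 m/s, i.e. 24 cm/s

# A 50-millisecond turn at that speed covers barely more than a centimeter:
turn_distance_cm = speed_m_per_s * 0.050 * 100
print(f"{turn_distance_cm:.1f} cm")  # 1.2 cm
```

Under these assumptions, the bug completes a sharp turn within roughly its own body length of travel.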

Victor Ortega-Jimenez, a former Georgia Tech research scientist and the study’s lead author, first saw the ripple bugs during the pandemic while working at Kennesaw State University.

“These tiny insects were skimming and turning so rapidly across the surface of turbulent streams that they resembled flying insects,” says Ortega-Jimenez, assistant professor in Berkeley’s integrative biology department.

“How do they do it? That question stayed with me, and it took more than five years of incredible collaborative work to answer.”

The next step was creating a robot inspired by the ripple bugs. Ajou University postdoctoral researcher Dongjin Kim and Professor Je-Sung Koh solved a mystery of the fan’s design when they captured high-resolution images using a scanning electron microscope.

A small robot that looks like a water bug skims across the surface of water.
The robotic insect inspired by the Rhagovelia. (Credit: Ajou University)

“Our robotic fans self-morph using nothing but water surface forces and flexible geometry, just like their biological counterparts. It’s a form of mechanical embedded intelligence refined by nature through millions of years of evolution,” says Koh, a senior author of the study.

“In small-scale robotics, these kinds of efficient and unique mechanisms would be a key enabling technology for overcoming limits in miniaturization of conventional robots.”

For example, the researchers say the findings lay the foundation for future design of compact, semi-aquatic robots that can explore water surfaces in challenging, fast-flowing environments.

Support for this research came from the National Science Foundation and the National Institutes of Health. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any funding agency.

Source: Georgia Tech

Watch: Legless robot can jump 10 feet high

The nematode-inspired soft robots are made of silicone rods with carbon-fiber spines. (Credit: Candler Hobbs/Georgia Tech)

Inspired by the movements of a tiny parasitic worm, engineers have created a 5-inch soft robot that can jump as high as a basketball hoop.

Their device, a silicone rod with a carbon-fiber spine, can leap 10 feet high even though it doesn’t have legs. The researchers made it after watching high-speed video of nematodes pinching themselves into odd shapes to fling themselves forward and backward.

The research appears in Science Robotics.

The researchers say their findings could help develop robots capable of jumping across various terrain, at different heights, in multiple directions.

“Nematodes are amazing creatures with bodies thinner than a human hair,” says Sunny Kumar, lead coauthor of the paper and a postdoctoral researcher in the School of Chemical and Biomolecular Engineering (ChBE) at Georgia Tech.

“They don’t have legs but can jump up to 20 times their body length. That’s like me laying down and somehow leaping onto a three-story building.”

Nematodes, also known as roundworms, are among the most abundant creatures on Earth, living in the environment and within humans, other vertebrates, and plants. They can cause illness in their hosts, a trait that is sometimes put to beneficial use: farmers and gardeners deploy nematodes instead of pesticides to kill invasive insects and protect plants.

One way they latch onto a host before entering its body is by jumping. Using high-speed cameras, Victor Ortega-Jimenez—a lead author and former Georgia Tech research scientist who’s now a faculty member at the University of California, Berkeley—watched the creatures bend their bodies into different shapes based on where they wanted to go.

“It took me over a year to develop a reliable method to consistently make these tiny worms leap from a piece of paper and film them for the first time in great detail,” Ortega-Jimenez says.

To hop backward, nematodes point their head up while tightening the midpoint of their body to create a kink. The shape is similar to a person in a squat position. From there, the worm uses stored energy in its contorted shape to propel backward, end over end, just like a gymnast doing a backflip.

To jump forward, the worm points its head straight and creates a kink on the opposite end of its body, pointed high in the air. The stance is similar to someone preparing for a standing broad jump. But instead of hopping straight, the worm catapults upward.

“Changing their center of mass allows these creatures to control which way they jump. We’re not aware of any other organism at this tiny scale that can effectively leap in both directions at the same height,” Kumar says.

And they do it despite nearly tying their bodies into a knot.

“Kinks are typically dealbreakers,” says Ishant Tiwari, a ChBE postdoctoral fellow and lead coauthor of the study. “Kinked blood vessels can lead to strokes. Kinked straws are worthless. Kinked hoses cut off water. But a kinked nematode stores energy that is used to propel itself in the air.”

After watching their videos, the team created simulations of the jumping nematodes. Then they built soft robots to replicate the leaping worms’ behavior, later reinforcing them with carbon fibers to accelerate the jumps.

Kumar and Tiwari work in Associate Professor Saad Bhamla’s lab. They collaborated on the project with Ortega-Jimenez and researchers at the University of California, Riverside.

The group found that the kinks allow nematodes to store more energy with each jump. They rapidly release it—in a tenth of a millisecond—to leap, and they’re tough enough to repeat the process multiple times.
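The speed of that release is what matters: for a fixed store of elastic energy, power output scales inversely with release time. A rough sketch of the idea, where the stored-energy figure and the muscle timescale are purely illustrative, not values from the study:

```python
# Power amplification from fast elastic release.
# stored_energy_j and muscle_time_s are illustrative stand-ins.
stored_energy_j = 1e-7   # hypothetical elastic energy stored in the kink
release_time_s = 1e-4    # a tenth of a millisecond, per the study
muscle_time_s = 1e-2     # ~10 ms, a typical muscle-contraction timescale

elastic_power = stored_energy_j / release_time_s
muscle_power = stored_energy_j / muscle_time_s
print(elastic_power / muscle_power)  # ~100x power amplification
```

Whatever the actual energy involved, releasing it 100 times faster than muscle alone could manage yields roughly 100 times the instantaneous power, which is why latch-and-spring mechanisms like this kink are common in small, fast-moving organisms.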

The study suggests that engineers could create simple elastic systems made of carbon fiber or other materials that could withstand and exploit kinks to hop across various terrain.

“A jumping robot was recently launched to the moon, and other leaping robots are being created to help with search and rescue missions, where they have to traverse unpredictable terrain and obstacles,” Kumar says.

“Our lab continues to find interesting ways that creatures use their unique bodies to do interesting things, then build robots to mimic them.”

Support for the work came from the National Institutes of Health and the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any funding agency.

Source: Georgia Tech

Watch: New approach gets robot to clear the table

(Credit: Murtaza Dalal/Carnegie Mellon)

A new approach enables robots to manipulate new objects in a variety of environments.

Clearing the dinner table is a task easy enough for a child to master, but it’s a major challenge for robots.

Robots are great at doing repetitive tasks but struggle when they must do something new or interact with the disorder and mess of the real world. Such tasks become especially challenging when they have many steps.

“You don’t want to reprogram the robot for every new task,” says Murtaza Dalal, a PhD student in the School of Computer Science’s (SCS) Robotics Institute at Carnegie Mellon University. “You want to just tell the robot what to do, and it does it. That’s necessary if we want robots to be useful in our daily lives.”

To enable robots to undertake a wide variety of tasks they haven’t previously encountered, Dalal and other researchers at SCS and Apple Inc. have developed an approach to robotic manipulation called ManipGen that has proven highly successful for these multistep tasks, known as long-horizon tasks.

The key idea, Dalal explains, is to divide the task of planning how a robotic arm needs to move into two parts.

Imagine opening a door: the first step is to reach the door handle; the next is to turn it. To solve the first problem, the researchers use well-established data-driven methods for computer vision and motion planning to locate the object and move a robotic arm’s manipulator near it. This simplifies the second part of the process, limiting it to interacting with the nearby object: in this case, the door handle.

“At that point, the robot no longer cares where the object is. The robot only cares about how to grasp it,” Dalal says.

Robots are typically trained to perform a task by using massive amounts of data derived from demonstrations of the task. That data can be manually collected, with humans controlling the robot, but the process is expensive and time consuming. An alternative method is to use simulation to rapidly generate data. In this case, the simulation would place the robot in a variety of virtual scenes, enabling it to learn how to grasp objects of various shapes and sizes, or to open and shut drawers or doors.

Dalal says the research team used this simulation method to generate data and train neural networks to learn how to pick up and place thousands of objects and open and close thousands of drawers and doors, employing trial-and-error reinforcement learning techniques. The team developed specific training and hardware solutions for transferring these networks trained in simulation to the real world. They found that these skills could be recombined as necessary to enable the robot to interact with many different objects in the real world, including those it hadn’t previously encountered.
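A toy illustration of that simulation-driven, trial-and-error loop is sketched below. Every name and the "learning" rule are hypothetical stand-ins for the team's actual randomized scenes and reinforcement-learning training, not their code:

```python
import random

def random_scene():
    """Sample a hypothetical virtual scene: an object of random size and position."""
    return {"width_cm": random.uniform(1, 10),
            "x_cm": random.uniform(-20, 20)}

def attempt_grasp(scene, gripper_opening_cm):
    """Toy success rule: the gripper must open at least as wide as the object."""
    return gripper_opening_cm >= scene["width_cm"]

# Trial-and-error over many randomized scenes: widen the gripper on failure.
random.seed(0)
gripper_opening_cm = 1.0
successes = 0
for _ in range(1000):
    scene = random_scene()
    if attempt_grasp(scene, gripper_opening_cm):
        successes += 1
    else:
        gripper_opening_cm += 0.1  # crude stand-in for a learning update
print(successes, round(gripper_opening_cm, 1))
```

The real system replaces this single scalar with neural-network policies and runs the loop across thousands of objects, drawers, and doors, but the principle is the same: cheap, randomized simulated experience substitutes for expensive human demonstrations.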

“We don’t need to collect any new data,” Dalal says of deploying the robot in the real world. “We just tell the robot what to do in English and it does it.”

The team implements the two-part process by using foundation models such as GPT-4o that can look at the robot’s environment and decompose the task—like cleaning up the table—into a sequence of skills for the robot to execute. Then the robot executes those skills, first estimating positions near objects using computer vision, then going there using motion planning, and finally manipulating the object using a depth camera to measure distances.
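The decompose-then-execute loop described above can be sketched roughly as follows. Every function here is a hypothetical stand-in: in the actual ManipGen system the planner is a foundation model such as GPT-4o, the pose estimate comes from computer vision with a depth camera, and the local skills are trained neural networks:

```python
def decompose_task(instruction):
    """Stand-in for the foundation-model planner that turns a
    natural-language command into a sequence of (skill, object) steps."""
    return [("pick", "plate"), ("place", "sink"),
            ("pick", "cup"), ("place", "sink")]

def estimate_object_pose(obj):
    """Stand-in for the vision step: locate the target object."""
    return {"object": obj, "x": 0.4, "y": 0.1, "z": 0.0}

def move_near(pose):
    """Stand-in for classical motion planning: reach a pose near the object."""
    return f"arm near {pose['object']}"

def run_local_skill(skill, obj):
    """Stand-in for a learned local policy that only handles the
    nearby object (grasping, placing, opening, closing)."""
    return f"{skill} {obj}"

log = []
for skill, obj in decompose_task("clear the table"):
    pose = estimate_object_pose(obj)          # 1) perceive
    log.append(move_near(pose))               # 2) get close (motion planning)
    log.append(run_local_skill(skill, obj))   # 3) manipulate locally
print(log)
```

The division of labor is the point: because motion planning always delivers the arm near the target first, each learned skill only ever has to handle a nearby object, which is what lets skills trained in simulation recombine for long, unseen tasks.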

The researchers have applied their method to challenging multistage tasks such as opening drawers and placing objects in them or rearranging objects on a shelf. They have demonstrated that this approach works with robotics tasks that involve up to eight steps, “but I think we could go even further,” Dalal says.

Likewise, gathering data through demonstrations could enable this approach to be extended to objects that can’t currently be simulated, such as soft and flexible objects.

“There’s so much more to explore with ManipGen. The foundation we’ve built through this project opens up exciting possibilities for future advancements in robotic manipulation and brings us closer to the goal of developing generalist robots,” says Min Liu, a master’s student in the machine learning department and co-lead on the project.

“ManipGen really demonstrates the strength of simulation-to-reality transfer as a paradigm for producing robots that can generalize broadly, something we have seen in locomotion, but until now, not for general manipulation,” says Deepak Pathak, an assistant professor of computer science in the Robotics Institute.

ManipGen builds on research to enable robots to solve longer and more complicated tasks, says Ruslan Salakhutdinov, the principal investigator on the project and professor of computer science in the machine learning department.

“In this iteration,” he says, “we finally show the exciting culmination of years of work: an agent that can generalize and solve an enormous array of tasks in the real world.”

Dalal and Liu outline ManipGen in a newly released research paper.

Source: Carnegie Mellon University