
    Watch: New tool makes robots way faster at tricky tasks

    (Credit: Getty Images)

    Pancake-flipping robots could be just around the corner thanks to a new robot learning system.

    Robots are increasingly learning new skills by watching people. From folding laundry to handling food, many real-world, humanlike tasks are too nuanced to be efficiently programmed step by step.

    With imitation learning, humans demonstrate a task and robots learn to copy what they see through cameras and sensors. While at the leading edge of robotics research, this approach is limited by a major constraint: Robots can only work as fast as the people who taught them.
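    In code, the demonstrate-and-copy idea can be reduced to a toy sketch: record (state, action) pairs from a human, then act by replaying the action from the most similar recorded state. This is a minimal illustration of imitation learning in general, not the researchers' system; every name in it is invented for the example.

```python
# Toy imitation learning ("behavior cloning") sketch -- illustrative only.

def record_demonstration(states, actions):
    """Pair each observed state with the demonstrator's action."""
    return list(zip(states, actions))

def nearest_neighbor_policy(dataset, state):
    """Act by copying the demo action whose state is closest to ours."""
    def distance(s):
        return sum((a - b) ** 2 for a, b in zip(s, state))
    demo_state, demo_action = min(dataset, key=lambda p: distance(p[0]))
    return demo_action

# A human demonstrates: at gripper position x, move by dx toward a goal at 1.0
demo = record_demonstration(
    states=[(0.0,), (0.5,), (0.9,)],
    actions=[+0.5, +0.4, +0.1],
)

# The robot, seeing a new state, imitates the closest demonstration
print(nearest_neighbor_policy(demo, (0.45,)))  # copies the action for (0.5,)
```

    Note the constraint the article describes: however the policy generalizes, it can only replay actions at the pace the human demonstrated them.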

    Now, researchers have created a tool that smashes that speed barrier. The system allows robots to execute complex tasks significantly faster than human demonstrations while maintaining precision, control, and safety.

    The team addresses a central challenge in modern robotics: how to combine the flexibility of learning from humans with the speed and reliability required for real-world deployment. The technology could lead to wider adoption of imitation learning in industrial and household applications and even enable robots to execute humanlike tasks better than ever before.

    “The thing we’re trying to create—and I would argue industry is also trying to create—is a general-purpose robot that can do any task that human hands can do,” says Shreyas Kousik, assistant professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech and a co-lead author on the study. “To make that work outside the lab, speed really matters.”

    The new tool is called SAIL (Speed Adaptation for Imitation Learning).

    Teaching robots to work faster than the speed of human demonstrations is challenging. Robots can behave differently at higher speeds, and small changes in the environment can cause errors.

    “The challenge is that a robot is limited to the data it was trained on, and any changes in the environment can cause it to fail,” Kousik says.

    SAIL addresses this challenge through a modular approach, with separate components working together to accelerate beyond the training data. The system keeps motions smooth at high speed, tracks movements accurately, adjusts speed dynamically based on task complexity, and schedules actions to account for hardware delays. This combination allows robots to move quickly while staying stable, coordinated, and precise.
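    One of those components, dynamic speed adjustment, can be illustrated with a toy re-timing routine: play a demonstrated trajectory back several times faster, but clamp each step to a velocity limit so the robot automatically slows wherever an oversized jump would be unsafe. This is a hedged sketch of the concept only; the function, the waypoint format, and the single velocity cap are assumptions, not SAIL's actual algorithm.

```python
# Illustrative trajectory re-timing with a velocity cap -- not SAIL itself.

def retime(waypoints, speedup, max_step):
    """Walk the demo waypoints `speedup` times faster, but never move
    more than `max_step` per control tick."""
    out = [waypoints[0]]
    i = 0.0
    while i < len(waypoints) - 1:
        i = min(i + speedup, len(waypoints) - 1)
        lo = int(i)
        hi = min(lo + 1, len(waypoints) - 1)
        # Linearly interpolate the demo position at fractional index i
        target = waypoints[lo] + (i - lo) * (waypoints[hi] - waypoints[lo])
        step = target - out[-1]
        # Dynamic speed adjustment: clamp any step that exceeds the limit
        step = max(-max_step, min(max_step, step))
        out.append(out[-1] + step)
    return out

demo = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]      # human-speed positions
fast = retime(demo, speedup=3.0, max_step=0.25)
print(fast)
```

    Here a six-waypoint human-speed demonstration collapses to a few control ticks, with the first oversized jump clamped to the velocity limit.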

    “One of the gaps we saw was that our academic robotics systems could do impressive things, but they weren’t fast or robust enough for practical use,” says Benjamin Joffe, senior research scientist at the Georgia Tech Research Institute. “We wanted to study that gap carefully and design a system that addressed it end to end.”

    He adds, “The goal is not just to make robots faster, but to make them smart enough to know when speed helps and when it could cause mistakes.”

    The team evaluated SAIL’s performance across 12 tasks, both in simulation and on two physical robot platforms. Tasks included stacking cups, folding cloth, plating fruit, packing food items, and wiping a whiteboard. In most cases, SAIL-enabled robots completed tasks three to four times faster than standard imitation-learning systems without losing accuracy.

    One exception was the whiteboard-wiping task, where maintaining contact made high-speed execution difficult.

    “Understanding where speed helps and where it hurts is critical,” Kousik says. “Sometimes slowing down is the right decision.”

    While SAIL does not make robots universally adaptable on its own, it represents an important step toward robotic systems that can learn from humans without being constrained by human pace.

    By showing how learned robotic behaviors can be accelerated safely and systematically, SAIL brings imitation learning closer to real-world use—where speed, precision, and reliability all matter.

    The researchers presented their work at the Conference on Robot Learning (CoRL).

    Funding for the work came from the State of Georgia and the Agricultural Technology Research Program at Georgia Tech.

    Source: Georgia Tech


    Robot trained on surgery videos performs as well as human docs

    (Credit: Johns Hopkins)

    A robot, trained for the first time by watching videos of seasoned surgeons, executed the same surgical procedures as skillfully as the human doctors.

    The successful use of imitation learning to train surgical robots eliminates the need to program each individual move of a medical procedure by hand. It also brings the field of robotic surgery closer to true autonomy, where robots could perform complex surgeries without human help.

    The findings, led by Johns Hopkins University researchers, are being spotlighted this week at the Conference on Robot Learning in Munich.

    “It’s really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery,” says senior author Axel Krieger, an assistant professor in Johns Hopkins University’s mechanical engineering department. “We believe this marks a significant step forward toward a new frontier in medical robotics.”

    The researchers used imitation learning to train the da Vinci Surgical System robot to perform three fundamental tasks required in surgical procedures: manipulating a needle, lifting body tissue, and suturing. In each case, the robot trained on the team’s model performed the same surgical procedures as skillfully as human doctors.

    The model combined imitation learning with the same machine learning architecture that underpins ChatGPT. However, where ChatGPT works with words and text, this model speaks “robot” with kinematics, a language that breaks down the angles of robotic motion into math.
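    One common way to let a text-style transformer output continuous joint angles is to discretize them into a fixed vocabulary of bins, so each motion step becomes a "token" the model can predict like a word. The sketch below illustrates that encoding; the bin count and angle range are invented for the example, and the paper's actual action representation may differ.

```python
# Illustrative action tokenization: continuous joint angles <-> discrete bins.
import math

N_BINS = 256
LOW, HIGH = -math.pi, math.pi   # assumed joint-angle range, radians

def angle_to_token(theta):
    """Map a joint angle to one of N_BINS discrete tokens."""
    frac = (theta - LOW) / (HIGH - LOW)
    return min(N_BINS - 1, max(0, int(frac * N_BINS)))

def token_to_angle(tok):
    """Decode a token back to the center of its bin."""
    return LOW + (tok + 0.5) * (HIGH - LOW) / N_BINS

tok = angle_to_token(0.75)
print(tok, token_to_angle(tok))  # round-trips to within half a bin width
```

    A model that predicts such tokens step by step "speaks" kinematics the same way a language model speaks text.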

    The researchers fed their model hundreds of videos recorded from wrist cameras placed on the arms of da Vinci robots during surgical procedures. These videos, recorded by surgeons all over the world, are used for post-operative analysis and then archived. Nearly 7,000 da Vinci robots are used worldwide, and more than 50,000 surgeons are trained on the system, creating a large archive of data for robots to “imitate.”

    While the da Vinci system is widely used, researchers say it’s notoriously imprecise. But the team found a way to make the flawed input work. The key was training the model to output relative movements rather than absolute positions: instead of commanding an exact location, which the imprecise hardware cannot reliably reach, the model commands a change from wherever the arm currently is.
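    The difference is easy to see in a toy model: give an arm a fixed calibration error, and absolute position commands miss by that error every time, while relative commands, each computed from the currently observed gap, steadily close in on the target. The offset and class below are illustrative, not a model of the da Vinci itself.

```python
# Toy comparison of absolute vs. relative action commands on imprecise hardware.

OFFSET = 0.03  # hidden calibration error, a stand-in for hardware imprecision

class ImpreciseArm:
    """Toy arm whose absolute positioning is consistently off."""
    def __init__(self):
        self.true_pos = 0.0

    def command_absolute(self, target):
        # Absolute commands inherit the calibration error every time
        self.true_pos = target + OFFSET

    def command_relative(self, delta):
        # Relative moves do not depend on the absolute reference frame
        self.true_pos += delta

target = 0.5

arm_abs = ImpreciseArm()
arm_abs.command_absolute(target)
print(abs(arm_abs.true_pos - target))  # error stays pinned at the offset

arm_rel = ImpreciseArm()
for _ in range(5):
    # Each tick, close half of the remaining visually observed gap
    arm_rel.command_relative(0.5 * (target - arm_rel.true_pos))
print(abs(arm_rel.true_pos - target))  # error shrinks with every step
```

    The absolute arm never gets closer than its calibration error; the relative arm's error shrinks with every correction, which is why delta actions tolerate imprecise hardware.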

    “All we need is image input and then this AI system finds the right action,” says lead author Ji Woong “Brian” Kim, a postdoctoral researcher at Johns Hopkins. “We find that even with a few hundred demos, the model is able to learn the procedure and generalize to new environments it hasn’t encountered.”

    “The model is so good at learning things we haven’t taught it,” adds Krieger. “Like if it drops the needle, it will automatically pick it up and continue. This isn’t something I taught it to do.”

    The model could be used to quickly train a robot to perform any type of surgical procedure, the researchers say. The team is now using imitation learning to train a robot to perform not just small surgical tasks but a full surgery.

    Before this advancement, programming a robot to perform even a simple aspect of a surgery required hand-coding every step. Someone might spend a decade trying to model suturing, Krieger says. And that’s suturing for just one type of surgery.

    “It’s very limiting,” Krieger says. “What is new here is we only have to collect imitation learning of different procedures, and we can train a robot to learn it in a couple days. It allows us to accelerate to the goal of autonomy while reducing medical errors and achieving more accurate surgery.”

    Additional authors are from Johns Hopkins and Stanford University.

    Source: Johns Hopkins University