Researchers have developed a physics-based, real-time method for controlling animated characters that can learn basketball dribbling skills from experience. In this case, the system learns from motion capture of people dribbling basketballs.
This trial-and-error learning process is time consuming, requiring millions of trials, but the results are arm movements that are closely coordinated with physically plausible ball movement.
The simulated players learn to dribble between their legs, dribble behind their backs, and do crossover moves, as well as how to transition from one skill to another.
“Once the skills are learned, new motions can be simulated much faster than real-time,” says Jessica Hodgins, professor of computer science and robotics at Carnegie Mellon University.
Hodgins and Libin Liu, chief scientist at DeepMotion Inc., a California company that develops smart avatars, will present the method at SIGGRAPH 2018, the Conference on Computer Graphics and Interactive Techniques in Vancouver.
“This research opens the door to simulating sports with skilled virtual avatars,” says Liu, the report’s first author. “The technology can be applied beyond sport simulation to create more interactive characters for gaming, animation, motion analysis, and in the future, robotics.”
Motion capture data already add realism to state-of-the-art video games. But these games also include disconcerting artifacts, Liu notes, such as balls that follow impossible trajectories or that seem to stick to a player’s hand.
A physics-based method has the potential to create more realistic games, but getting the subtle details right is difficult. That’s especially so for dribbling a basketball because player contact with the ball is brief and finger position is critical. Some details, such as the way a ball may continue spinning briefly when it makes light contact with the player’s hands, are tough to reproduce. And once the ball is released, the player has to anticipate when and where the ball will return.
Liu and Hodgins opted to use deep reinforcement learning to enable the model to pick up these important details. Artificial intelligence programs have used this form of deep learning to master a variety of video games. The AlphaGo program famously employed it to master the board game Go.
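The core idea of reinforcement learning can be illustrated with a toy sketch. The snippet below is not the authors' method (which trains deep networks on full-body dribbling): it only shows the trial-and-error principle, with a hypothetical agent learning by reward alone which of two invented hand actions keeps a dribble going.

```python
import random

def train(episodes=2000, lr=0.1, seed=0):
    """Toy trial-and-error learner: estimate the value of two hypothetical
    actions ("low catch", "high catch") purely from observed reward."""
    rng = random.Random(seed)
    prefs = [0.0, 0.0]  # learned value estimate for each action
    for _ in range(episodes):
        # Epsilon-greedy trial: mostly exploit the better estimate,
        # occasionally explore the other action.
        if rng.random() < 0.1:
            action = rng.randrange(2)
        else:
            action = max(range(2), key=lambda a: prefs[a])
        # Hypothetical environment: action 0 sustains the dribble 80% of
        # the time, action 1 only 30% of the time.
        reward = 1.0 if rng.random() < (0.8 if action == 0 else 0.3) else 0.0
        # Move the estimate incrementally toward the observed reward.
        prefs[action] += lr * (reward - prefs[action])
    return prefs

prefs = train()
```

After training, the agent's value estimates reflect the success rates it experienced, so it reliably prefers the more successful action; the paper's system does the same at vastly larger scale, over millions of trials.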
The motion capture data used as input showed people rotating the ball around the waist, dribbling while running, and dribbling in place, both with the right hand and while switching hands.
This capture data did not include the ball movement, which Liu explains is difficult to record accurately. Instead, the researchers used trajectory optimization to calculate the ball's most likely paths for a given hand motion.
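A greatly simplified version of that idea can be sketched: between hand contacts the ball is in free flight, so a (hypothetical) release point, catch point, and flight time pin down the release velocity under gravity, and the whole in-between path follows. The paper's trajectory optimization is far more general; this only shows why the ball's motion is recoverable from the hand motion.

```python
G = -9.81  # gravitational acceleration along the vertical axis, m/s^2

def release_velocity(p_release, p_catch, t):
    """Velocity at release that carries the ball from p_release (x, y)
    to p_catch (x, y) in t seconds of free flight."""
    vx = (p_catch[0] - p_release[0]) / t
    # Vertical: y_catch = y_release + vy*t + 0.5*G*t^2  =>  solve for vy.
    vy = (p_catch[1] - p_release[1] - 0.5 * G * t * t) / t
    return (vx, vy)

def ball_position(p_release, v, t):
    """Free-flight position t seconds after release."""
    return (p_release[0] + v[0] * t,
            p_release[1] + v[1] * t + 0.5 * G * t * t)
```

Given hand positions at release and catch (from the motion capture) and the contact timing, the ballistic segment between them is fully determined; stitching such segments together reconstructs a plausible ball trajectory for the whole clip.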
The program learned the skills in two stages—first it mastered locomotion and then learned how to control the arms and hands and, through them, the motion of the ball. This decoupled approach is sufficient for actions such as dribbling or perhaps juggling, where the interaction between the character and the object doesn’t have an effect on the character’s balance.
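The decoupling described above can be sketched schematically. The snippet below is not the authors' architecture: it only illustrates the one-way dependency, with an invented locomotion step that ignores the ball entirely and an invented arm step that acts on top of the locomotion output without feeding back into balance.

```python
def locomotion_step(state):
    """Hypothetical stage 1: advance the character's root; the ball plays
    no role here, which is what makes the decoupling valid for dribbling."""
    return {"root_x": state["root_x"] + state["speed"] * state["dt"]}

def arm_step(root, ball_height):
    """Hypothetical stage 2: place the hand relative to the root produced
    by stage 1, tracking the ball. Information flows one way only."""
    return {"hand_x": root["root_x"], "hand_y": min(ball_height, 1.0)}

state = {"root_x": 0.0, "speed": 2.0, "dt": 0.1}
root = locomotion_step(state)          # stage 1: locomotion, ball-agnostic
hand = arm_step(root, ball_height=0.8) # stage 2: arms/ball, conditioned on stage 1
```

Because stage 2 never influences stage 1, this structure cannot represent a sport like soccer, where ball contact perturbs balance; that is exactly the limitation Liu notes below.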
Further work is required to address sports such as soccer, where balance is tightly coupled with game maneuvers, Liu says.
Source: Carnegie Mellon University