CARNEGIE MELLON / UC BERKELEY (US) — After six months of computing time, researchers believe they have simulated almost every important configuration of a piece of cloth draped over a moving human figure.
“I believe our approach generates the most beautiful and realistic cloth of any real-time technique,” says Adrien Treuille, associate professor of computer science and robotics at Carnegie Mellon University.
To create this cloth database, the team took advantage of the immense computing power available in the cloud, ultimately using 4,554 central processing unit (CPU) hours to generate 33 gigabytes of data.
Treuille says this presents a new paradigm for computer graphics, in which it will be possible to provide real-time simulation for virtually any complex phenomenon, whether it’s a naturally flowing robe or a team of galloping horses.
Doyub Kim, a former post-doctoral researcher at Carnegie Mellon, presented the team’s findings earlier this week at SIGGRAPH 2013, the International Conference on Computer Graphics and Interactive Techniques, in Anaheim, California.
Real-time animations of complex phenomena for video games or other interactive media are challenging. A massive amount of computation is necessary to simulate the behavior of some elements, such as cloth, while for others, such as body motion, good computational models simply don’t exist.
Nevertheless, data-driven techniques have made complex animations possible on ordinary computers by pre-computing many possible configurations and motions.
“The criticism of data-driven techniques has always been that you can’t pre-compute everything,” Treuille says. “Well, that may have been true 10 years ago, but that’s not the way the world is anymore.”
Today, massive computing power can be accessed online at relatively low cost through services such as Amazon. Even if everything can’t be pre-computed, the researchers set out to see just how much was possible by leveraging cloud computing resources.
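The data-driven idea above can be sketched in a few lines: precompute expensive simulations offline for many sampled inputs, then at runtime replace simulation with a cheap nearest-neighbor lookup. This is an illustrative toy, not the paper’s method; all names, shapes, and the random stand-in data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, pose_dim, cloth_dim = 1000, 8, 300

# Offline ("pre-compute everything"): a table mapping sampled body
# poses to the cloth states an expensive simulator produced for them.
# Random data stands in for real simulation output here.
sampled_poses = rng.standard_normal((n_samples, pose_dim))
cloth_states = rng.standard_normal((n_samples, cloth_dim))

def query(pose):
    """Runtime lookup: return the precomputed cloth state whose
    sampled pose is closest to the current pose -- no simulation."""
    idx = np.argmin(np.linalg.norm(sampled_poses - pose, axis=1))
    return cloth_states[idx]

# A pose very close to sample 42 should retrieve sample 42's state.
state = query(sampled_poses[42] + 0.001)
```

The runtime cost is one distance computation over the table, which is why such lookups can run in real time even when the offline simulation took thousands of CPU hours.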
In the simulations in this study, the researchers focused on secondary cloth effects—how clothing responds to both the human figure wearing the clothes, as well as to the dynamic state of the cloth itself.
Kim says that, to explore this highly complex system, the researchers developed an iterative technique that continuously samples the cloth motions, automatically detecting areas where data is lacking or where errors occur.
For instance, in the study simulations, a human figure wore the cloth as a hooded robe; after some gyrations that caused the hood to fall down, the animation would show the hood popping back onto the figure’s head for no apparent reason. The team’s algorithm automatically identified such errors and explored the dynamics of the system until the error was eliminated.
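The error-driven refinement described above can be illustrated with a one-dimensional toy: wherever the sampled table jumps too sharply between neighbors (an "error" akin to the hood popping back up), simulate a new sample between them and repeat. The target function `f`, the tolerance, and the pass count are illustrative assumptions, with `f` standing in for an expensive cloth simulation.

```python
import math

def f(x):
    # Stand-in for an expensive simulation evaluated at input x.
    return math.sin(4 * x)

def refine(xs, tol=0.1):
    """One refinement pass: wherever adjacent samples differ by more
    than tol, insert a new sample at the midpoint, densifying the
    table exactly where the behavior changes fastest."""
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if abs(f(b) - f(a)) > tol:
            out.append((a + b) / 2)  # data is lacking here; sample it
        out.append(b)
    return out

xs = [0.0, 1.0, 2.0, 3.0]
for _ in range(6):  # iterate until no adjacent pair exceeds the tolerance
    xs = refine(xs)
```

After enough passes, every adjacent pair of samples differs by at most the tolerance, so the table no longer contains visible discontinuities between neighboring states.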
Kim says with many video games now online, it would be possible to use such techniques to continually improve the animation of games. As play progresses and the animation encounters errors or unforeseen motions, it may be possible for a system to automatically explore those dynamics and make necessary additions or corrections.
Though the research yielded a massive database of cloth effects, Kim says it was possible to use conventional techniques to compress the tens of gigabytes of raw data into tens of megabytes, a more manageable file size that nevertheless preserved the richness of the animation.
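One conventional compression technique of the kind alluded to above is a truncated SVD: because cloth frames are highly redundant, a frames-by-vertices matrix can be stored as a few low-rank factors. This sketch is an assumption about the general approach, not the team’s pipeline; the synthetic low-rank data mimics that redundancy.

```python
import numpy as np

rng = np.random.default_rng(1)
frames, dofs, rank = 500, 2000, 10

# Synthetic "cloth motion": a few basis shapes mixed over time,
# giving the matrix the low-rank redundancy real cloth data has.
data = rng.standard_normal((frames, rank)) @ rng.standard_normal((rank, dofs))

# Truncated SVD: keep only the k strongest modes and store the two
# small factor matrices instead of the full frames-by-dofs matrix.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
k = 10
left, right = U[:, :k] * s[:k], Vt[:k]
reconstructed = left @ right

# Compression ratio: full matrix entries vs. entries in the factors.
ratio = data.size / (left.size + right.size)
```

Here the factors take 25,000 numbers instead of the matrix’s 1,000,000, a 40x reduction, which is the same order of saving as going from tens of gigabytes to tens of megabytes.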
In addition to Treuille and Kim, the research team included Kayvon Fatahalian, assistant professor of computer science at Carnegie Mellon, and, from the University of California, Berkeley, James F. O’Brien, professor of computer science and engineering; Woojong Koh, a Ph.D. student; and Rahul Narain, a post-doctoral researcher.
The Intel Science and Technology Center for Visual Computing, the National Science Foundation, the UC Lab Fees Research Program, a Samsung Scholarship, and gifts from Google, Qualcomm, Adobe, Pixar, and the Okawa Foundation funded the research.
Source: Carnegie Mellon University