Scenes of people getting dressed are noticeably absent from animated films because it’s incredibly difficult to make them look realistic. The problem centers on manipulating simulated cloth.
Researchers at the Georgia Institute of Technology have developed a systematic tool that allows animators to create realistic motion for virtual humans as they get dressed.
The new algorithm enables virtual characters to intelligently manipulate simulated cloth to dress themselves, supporting different dressing styles across various garment types and fabrics.
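The paper describes the full method; as a rough illustration of the general control idea (our sketch, not the authors' code), the Python below sequences hypothetical primitive dressing actions, advancing to the next action only when feedback from a stubbed cloth simulator indicates the current sub-goal has been met. All names here (ClothSim, Primitive, run_dressing) are invented for this illustration.

```python
# Hypothetical sketch (not the authors' code): one way an animation system
# might sequence primitive dressing actions against a cloth simulator,
# advancing only when simulation feedback says the sub-goal was reached.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ClothState:
    """Placeholder feedback from a cloth simulator (fields are illustrative)."""
    hand_in_sleeve: bool = False
    garment_on_torso: bool = False


class ClothSim:
    """Stub standing in for a real cloth simulator."""

    def __init__(self) -> None:
        self.state = ClothState()
        self.steps = 0

    def step(self) -> ClothState:
        # A real simulator would integrate cloth dynamics here; the stub
        # just fakes progress so the example runs end to end.
        self.steps += 1
        if self.steps > 3:
            self.state.hand_in_sleeve = True
        if self.steps > 6:
            self.state.garment_on_torso = True
        return self.state


@dataclass
class Primitive:
    """One primitive dressing action with a completion test on cloth state."""
    name: str
    done: Callable[[ClothState], bool]


def run_dressing(sim: ClothSim, plan: List[Primitive], max_steps: int = 100) -> None:
    """Execute primitives in order, using cloth feedback to switch actions."""
    for action in plan:
        while not action.done(sim.state):
            sim.step()
            if sim.steps >= max_steps:
                raise RuntimeError(f"'{action.name}' failed to converge")
        print(f"completed: {action.name}")


if __name__ == "__main__":
    plan = [
        Primitive("guide hand into sleeve", lambda s: s.hand_in_sleeve),
        Primitive("pull garment onto torso", lambda s: s.garment_on_torso),
    ]
    run_dressing(ClothSim(), plan)
```

The key point the sketch captures is closed-loop control: rather than playing back a fixed motion, the character's next action depends on the simulated cloth's current state, which is what allows a single controller to handle different garments and dressing styles.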
The tool could help computer animators create scenes similar to iconic dressing moments in live-action movies, such as the "jacket on, jacket off" drill in 2010's "The Karate Kid" or Spider-Man pulling his mask over his head for the first time.
The research team's long-term goal is to develop assistive technologies that would enable robots of the future to help disabled or elderly adults with self-care tasks such as getting dressed.
"Dressing is one of the activities of daily living that the health care community has identified as being important for independent living," says Karen Liu, a co-author of the paper and associate professor in the School of Interactive Computing. "The challenge of learning to dress at a young age or for some older adults and those with disabilities is mainly due to the combined difficulty in coordinating different body parts and manipulating soft and deformable objects."
"Animating Human Dressing" is being presented as a technical paper at SIGGRAPH 2015, the ACM conference on computer graphics and interactive techniques, held August 9-13 in Los Angeles.
Source: Georgia Tech