A new sign language translator technology is non-invasive and as portable as a tube of Chapstick, researchers report.
“We are providing a ubiquitous solution to sign language translation,” says Mi Zhang, assistant professor of electrical and computer engineering at Michigan State University.
“Hard-of-hearing individuals who need to communicate with someone who doesn’t understand sign language can have a personalized, virtual interpreter at any time, anywhere.”
Hundreds of thousands of hard-of-hearing people rely upon American Sign Language, or ASL, to communicate. Without an interpreter present, they don’t have the same employment opportunities and are often at a disadvantage in delicate or sensitive situations, Zhang says.
“Think about if you were in the hospital and needed to communicate with a doctor. You would have to wait for the hospital’s translator—if they have one—to arrive, connect with a toll-free service or rely on a family member to be present,” Zhang says.
“This compromises your privacy and could worsen a health emergency. This is just one example demonstrating the critical need for sign language translation technology.”
Zhang and colleagues saw an opportunity to help the hard-of-hearing population break through this communication barrier.
Zhang’s technology, called DeepASL, uses a deep learning algorithm—machine learning built on neural networks inspired by the structure and function of the brain—to automatically translate signs into English. The system relies on a three-inch sensing device, developed by Leap Motion, that is equipped with cameras to capture hand and finger motions continuously.
“Leap Motion converts the motions of one’s hands and fingers into skeleton-like joints. Our deep learning algorithm picks up data from the skeleton-like joints and matches it to signs of ASL,” says doctoral student Biyi Fang.
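The matching step Fang describes—comparing a stream of skeletal-joint frames against known signs—can be illustrated with a minimal sketch. DeepASL itself uses a deep learning model for this; the dynamic time warping (DTW) matcher below is a simplified classical stand-in chosen only to show the idea of aligning joint trajectories to sign templates. The function names and the toy one-dimensional "joint" data are hypothetical, not part of the actual system.

```python
# Simplified sketch: match a sequence of skeletal-joint frames to the
# closest known sign template using dynamic time warping (DTW).
# DeepASL uses a deep learning model; DTW is a classical stand-in
# used here only to illustrate trajectory matching.

from math import dist  # Euclidean distance between coordinate tuples

def dtw_distance(seq_a, seq_b):
    """DTW distance between two sequences of joint-coordinate frames."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def classify_sign(frames, templates):
    """Return the template label whose joint trajectory is nearest."""
    return min(templates, key=lambda label: dtw_distance(frames, templates[label]))

# Toy 1-D "joint" trajectories standing in for Leap Motion frames.
templates = {
    "HELLO": [(0.0,), (0.5,), (1.0,)],   # rising trajectory
    "THANKS": [(1.0,), (0.5,), (0.0,)],  # falling trajectory
}
observed = [(0.1,), (0.4,), (0.6,), (0.9,)]
print(classify_sign(observed, templates))  # prints "HELLO"
```

DTW tolerates signers moving at different speeds, since it stretches or compresses the time axis when aligning two trajectories; a learned model additionally handles variation in hand shape and orientation.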
Much like setting up Siri on an iPhone, users sign certain words so the technology and its sensors can learn their hand and joint movements. They can also create custom signs for their names or words outside the dictionary by spelling them out, making communication easier and more comfortable.
“One differentiating feature of DeepASL is that it can translate full sentences without needing users to pause after each sign. Other translators are word-for-word, requiring users to pause between signs. This limitation significantly slows down face-to-face conversations, making conversations difficult and awkward,” Fang says.
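Translating full sentences without pauses means the system must decide where one sign ends and the next begins from a continuous stream of predictions. One common way to do this in sequence models—sketched below as an illustration, not as DeepASL's actual decoder—is CTC-style greedy decoding: the model emits a label per frame (including a "blank" when no sign is completing), and repeated labels are collapsed while blanks are dropped. The label stream and sign names here are hypothetical.

```python
# Hedged sketch: turn per-frame sign predictions into a sentence
# without requiring pauses between signs, mimicking CTC-style greedy
# decoding (collapse repeated labels, drop the blank symbol).
# Illustrates the general technique only, not DeepASL's decoder.

BLANK = "-"  # emitted when no sign is being completed

def collapse_frames(frame_labels):
    """Collapse a per-frame label stream into a sequence of signs."""
    signs = []
    prev = None
    for label in frame_labels:
        if label != prev and label != BLANK:
            signs.append(label)
        prev = label
    return signs

# A continuous stream of per-frame predictions, no pauses needed:
stream = ["-", "I", "I", "-", "WANT", "WANT", "WANT", "-", "WATER", "-"]
print(collapse_frames(stream))  # prints ['I', 'WANT', 'WATER']
```

The blank symbol is what lets the decoder distinguish one long sign from the same sign made twice in a row, so signers never need to insert artificial pauses.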
“Our technology is also non-intrusive, unlike other interpreter technologies that require signers to wear gloves, making them feel marginalized because you can literally see their disability.”
Beyond helping the hard-of-hearing communicate with others, DeepASL can assist people learning ASL remotely by giving real-time feedback on their signing. Earlier approaches, such as video tutorials, offered little personalized assistance, Zhang explains.
“About 90 percent of deaf children are born to hearing parents. These parents are learning sign language to communicate with their children but don’t usually have time to attend live classes,” Zhang says. “Our technology can gauge their signing to help them learn and improve.”
While the technology translates sign language into spoken conversation, Zhang says existing, successful speech recognition technologies can handle the other direction, allowing verbal communicators to speak with the hard-of-hearing.
The next step for the technology is commercialization, making it available to the hundreds of thousands of people who need a more accessible interpreter, the researchers say. The Leap Motion sensor retails for about $78. The researchers also plan to make the technology compatible with iPhones and to teach it other sign languages.
Source: Michigan State University