Mobile sign language that’s 3G friendly

U. WASHINGTON-SEATTLE (US)—Engineers are developing the first device able to transmit American Sign Language over U.S. cellular networks.

The tool is just completing its initial field test, conducted with participants in a University of Washington summer program for deaf and hard-of-hearing students.

“This is the first study of how deaf people in the United States use mobile video phones,” says project leader Eve Riskin, a professor of electrical engineering.

The MobileASL team has been working to optimize compressed video signals for sign language. By concentrating image quality around the face and hands, where the signing happens, and spending fewer bits on the background, researchers have brought the data rate down to 30 kilobits per second (a rate that can be handled by a 3G network) while still delivering intelligible sign language.
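In broad strokes, this is region-of-interest (ROI) encoding: the encoder is told to spend more bits, via finer quantization, on the blocks of each frame that cover the signer's face and hands. The sketch below illustrates the idea in Python; the skin-tone test, block threshold, and quantizer offset are assumptions for illustration, not MobileASL's actual implementation.

```python
# Minimal sketch of region-of-interest (ROI) bit allocation, the general
# idea behind concentrating quality on the face and hands. The skin-tone
# test, macroblock threshold, and QP offset are illustrative assumptions,
# not MobileASL's published implementation.
import numpy as np

MB = 16  # macroblock size used by H.264-style encoders

def skin_mask(frame_rgb):
    """Crude skin-tone detector: flags pixels likely to be face or hands."""
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

def qp_offsets(frame_rgb, roi_bonus=-6):
    """Per-macroblock quantizer offsets: negative values mean finer
    quantization (more bits) where skin dominates, zero elsewhere."""
    h, w, _ = frame_rgb.shape
    mask = skin_mask(frame_rgb)
    offsets = np.zeros((h // MB, w // MB), dtype=int)
    for i in range(h // MB):
        for j in range(w // MB):
            block = mask[i * MB:(i + 1) * MB, j * MB:(j + 1) * MB]
            if block.mean() > 0.3:  # block is mostly skin: spend bits here
                offsets[i, j] = roi_bonus
    return offsets

# Example: one 144x176 (QCIF) frame, a typical very-low-bitrate video size
frame = np.random.randint(0, 256, (144, 176, 3), dtype=np.uint8)
print(qp_offsets(frame))  # would feed the encoder's rate-control module
```

In a real codec, offsets like these would be handed to the rate controller, which trades quality between regions while holding the overall stream near the 30-kilobit-per-second budget.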

MobileASL also uses motion detection to identify whether a person is signing, which lets the phone cut back its video processing and extend the phones’ battery life during video use.
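The underlying trick can be as simple as frame differencing: when consecutive frames barely change, no one is signing, and the phone can capture and encode far fewer frames. A minimal sketch, with thresholds and frame rates that are illustrative assumptions rather than MobileASL's published settings:

```python
# Minimal sketch of signing-activity detection by frame differencing:
# when consecutive frames barely change, drop to a trickle frame rate so
# the phone encodes and transmits far less, saving CPU and battery.
# Thresholds and frame rates are illustrative assumptions.
import numpy as np

SIGNING_FPS = 10  # full rate while the user is signing
IDLE_FPS = 1      # trickle rate while the user is only watching

def is_signing(prev_gray, curr_gray, thresh=8.0):
    """Mean absolute pixel difference as a cheap motion score."""
    diff = np.abs(curr_gray.astype(int) - prev_gray.astype(int))
    return diff.mean() > thresh

def choose_frame_rate(prev_gray, curr_gray):
    """Pick the capture/encode rate for the next frame."""
    return SIGNING_FPS if is_signing(prev_gray, curr_gray) else IDLE_FPS

# Example with two synthetic grayscale frames
prev = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
curr = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
print(choose_frame_rate(prev, curr))
```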

Transmitting sign language as efficiently as possible increases affordability, improves reliability on slower networks, and extends battery life, even on devices capable of delivering higher-quality video.

This summer’s field test is allowing the team to see how people use the tool in their daily lives and what obstacles they encounter. Most study participants say texting or e-mail is currently their preferred method for distance communication. Their experiences with the MobileASL phone are, in general, positive.

“It is good for fast communication,” says Tong Song, a Chinese national who is studying at Gallaudet University in Washington, D.C. “Texting sometimes is very slow, because you send the message and you’re not sure that the person is going to get it right away. If you’re using this kind of phone then you’re either able to get in touch with the person or not right away, and you can save a lot of time.”

Josiah Cheslik, an undergraduate and a teaching assistant, agrees.

“Texting is for short things, like ‘I’m here,’ or, ‘What do you need at the grocery store?’” he says. “This is like making a real phone call.”

Text-based communication can also lead to mix-ups.

“Sometimes with texting people will be confused about what it really means,” Song says. “With the MobileASL phone people can see each other eye to eye, face to face, and really have better understanding.”

Some students also use video chat on a laptop, home computer, or video phone terminal, but none of these existing technologies for transmitting sign language fits in your pocket.

Cheslik recounts that during the study one participant got lost riding a Seattle city bus, and the two were able to communicate using MobileASL. The student on the bus described what he was seeing, and Cheslik helped him navigate to where he wanted to go.

Newly released high-end phones, such as the iPhone 4 and the HTC Evo, offer video conferencing. But users are already running into hitches: broadband companies have blocked the bandwidth-hogging video conferencing from their networks and are rolling out tiered pricing plans that charge heavy data users more.

The research team estimates that the iPhone’s FaceTime video conferencing service uses nearly 10 times the bandwidth of MobileASL, which, against MobileASL’s 30 kilobits per second, would put FaceTime in the neighborhood of 300 kilobits per second. Even after the anticipated release of an iPhone app to transmit sign language, people would need to own an iPhone 4 and be in an area with very fast network speeds in order to use the service.

The MobileASL system could be integrated with the iPhone 4, the HTC Evo, or any device that has a video camera on the same side as the screen.

“We want to deliver affordable, reliable ASL on as many devices as possible,” Riskin says. “It’s a question of equal access to mobile communication technology.”
