

Wearable device lets people with visual impairment ‘see’ objects

Suranga Nanayakkara (left) with Mark Myres (right), who tested AiSee as a visually impaired user. (Credit: NUS)

A new wearable device called AiSee helps people with visual impairment “see” objects around them with the help of artificial intelligence.

People with visual impairment face daily hurdles, particularly with object identification which is crucial for both simple and complex decision-making. While breakthroughs in AI have dramatically improved visual recognition capabilities, real-world application of these advanced technologies remains challenging and error-prone.

AiSee, which was first developed in 2018 and progressively upgraded over a span of five years, aims to overcome these limitations by leveraging state-of-the-art AI technologies.

AiSee, worn around the back of the head and perched on the ears, is the AI-powered “eye” that lets visually impaired people “see” objects around them. (Credit: NUS)

“With AiSee, our aim is to empower users with more natural interaction. By following a human-centered design process, we found reasons to question the typical approach of using glasses augmented with a camera,” says Suranga Nanayakkara, associate professor in the information systems and analytics department at National University of Singapore Computing and lead researcher of Project AiSee.

“People with visual impairment may be reluctant to wear glasses to avoid stigmatization. Therefore, we are proposing an alternative hardware that incorporates a discreet bone conduction headphone.”

The user simply needs to hold an object and activate the in-built camera to capture an image of it. With the help of AI, AiSee then identifies the object, reads the information aloud, and answers the user’s follow-up questions about it.

AiSee works through three key components:

1. The eye: Vision engine computer software

AiSee incorporates a micro-camera that captures the user’s field of view. The captured image is then handled by AiSee’s software component, referred to as the “vision engine computer,” which extracts features such as text, logos, and labels for processing.
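
To give a rough sense of the kind of feature extraction a “vision engine” performs, the sketch below pulls printed text out of a captured image using the open-source Tesseract OCR engine. AiSee’s actual software stack is not publicly documented, so this is only an illustrative stand-in, not the device’s real code.

```python
# Minimal sketch: extract printed text (e.g. a product label) from a captured photo.
# Uses the open-source Tesseract OCR engine via pytesseract; AiSee's real vision
# engine is not public, so this only illustrates the general idea.
from PIL import Image
import pytesseract

def extract_label_text(image_path: str) -> str:
    """Return any text found in the captured image."""
    image = Image.open(image_path)
    return pytesseract.image_to_string(image)

if __name__ == "__main__":
    print(extract_label_text("capture.jpg"))  # e.g. text read off a food package
```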

2. The brain: AI-powered image processing unit and interactive Q&A system

After the user snaps a photo of the object of interest, AiSee utilizes sophisticated cloud-based AI algorithms to process and analyze the captured images to identify the object. The user can also ask a range of questions to find out more about the object.

AiSee uses speech-to-text technology to comprehend the user’s spoken questions and text-to-speech technology to deliver its answers. Powered by a large language model, AiSee excels in interactive question-and-answer exchanges, enabling the system to accurately understand and respond to the user’s queries in a prompt and informative manner.
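
The pieces described above amount to a capture-ask-answer loop. The sketch below shows one plausible arrangement; the helpers passed in (transcribe, describe_image, answer, speak) are hypothetical stand-ins for whatever speech-to-text, cloud vision, large language model, and text-to-speech services the real device uses.

```python
# Hedged sketch of an interactive Q&A loop of the kind described above.
# All four helper functions are hypothetical placeholders, not AiSee's actual APIs.
from dataclasses import dataclass

@dataclass
class Capture:
    image_bytes: bytes   # photo of the held object
    audio_bytes: bytes   # the user's spoken question

def answer_question(capture: Capture, transcribe, describe_image, answer, speak) -> str:
    """Identify the object, answer the user's question, and speak the reply."""
    question = transcribe(capture.audio_bytes)               # speech-to-text
    description = describe_image(capture.image_bytes)        # cloud-based vision model
    reply = answer(question=question, context=description)   # LLM-driven Q&A
    speak(reply)                                              # text-to-speech over bone conduction
    return reply
```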

Unlike most wearable assistive devices, which require smartphone pairing, AiSee operates as a self-contained system that can function independently without the need for any additional devices.

3. The speaker: Bone conduction sound system

The headphone of AiSee utilizes bone conduction technology, which enables sound transmission through the bones of the skull. This ensures that individuals with visual impairment can effectively receive auditory information while still having access to external sounds, such as conversations or traffic noise. This is particularly vital for visually impaired people as environmental sounds provide essential information for decision-making, especially in situations involving safety considerations.

“At present, visually impaired people in Singapore do not have access to assistive AI technology of this level of sophistication. Therefore, we believe that AiSee has the potential to empower visually impaired people to independently accomplish tasks that currently require assistance,” Nanayakkara says. “Our next step is to make AiSee affordable and accessible to the masses. To achieve this, we are making further enhancements, including a more ergonomic design and a faster processing unit.”

“A lot of the time, assistive devices seem very targeted at totally blind people or visually impaired people. I think AiSee is a good balance. Both visually impaired and blind people could get a lot of benefits from this,” says NUS student Mark Myres, who helped to test AiSee as a visually impaired user.

Nanayakkara and his team are currently in discussions with SG Enable in Singapore to conduct user testing with people with visual impairment. The findings will help to refine and improve AiSee’s features and performance.

Source: NUS


    Sleeve system lets users ‘read’ messages through touch

    (Credit: Larm Rmah/Unsplash)

    Researchers have created a method for haptic communications that lets users receive messages through the skin on the forearm by learning to interpret signals such as a buzzing sensation.

    Hong Tan, founder and director of the Haptic Interface Research Laboratory at Purdue University, says that, while the research lends itself to use by hearing-impaired and visually impaired users, the method could work well for any number of possible uses.

    “We are collaborating with Facebook through the company’s Sponsored Academic Research Agreement. Facebook is interested in developing new platforms for communication and the haptic research we are doing has been promising,” she says.

    Postdoctoral student Yang Jiao communicates words to Jaeong Jung, an undergraduate student, using phoneme signals transmitted to the haptic device on his forearm. (Credit: Brian Huchel/Purdue)

    “I’m excited about this… imagine a future where you’re able to wear a sleeve that discreetly sends messages to you—through your skin—in times when it may be inconvenient to look at a text message,” Tan says. “I’m really hoping this takes off as a general idea for a new way to communicate. When that happens, the hearing-impaired, the visually-impaired, everyone can benefit.”

    How it works

    In the study, subjects wore a cuff that encircled the forearm from the wrist to below the elbow. The instrument, wrapped around the test subject’s non-dominant arm, featured 24 tactors that, when stimulated, produced vibrations against the skin that varied in quality and position.

    Tan says the researchers mapped the 39 phonemes (units of sound in a language that distinguish one word from another) of the English language to signals from specific tactors. Consonant sounds such as K, P, and T were rendered as stationary sensations on different areas of the arm, while vowels were indicated by stimulations that moved up, down, or around the forearm.
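
    To make the mapping concrete, the sketch below shows how a small phoneme-to-tactor table and a word player might look in code. The tactor indices and patterns are invented for illustration, the full Purdue mapping of all 39 phonemes is not reproduced here, and drive_tactors is a hypothetical stand-in for the cuff’s hardware interface.

    ```python
    # Illustrative phoneme-to-tactor mapping; the patterns are invented, not Purdue's.
    from typing import Callable, NamedTuple, Sequence, Tuple

    class Pattern(NamedTuple):
        tactors: Tuple[int, ...]  # which of the 24 tactors to drive
        moving: bool              # False = stationary (consonants), True = moving (vowels)

    PHONEME_PATTERNS = {
        "K":  Pattern(tactors=(2,),           moving=False),  # stationary buzz near the wrist
        "P":  Pattern(tactors=(11,),          moving=False),  # stationary buzz mid-forearm
        "T":  Pattern(tactors=(21,),          moving=False),  # stationary buzz near the elbow
        "AH": Pattern(tactors=(4, 8, 12, 16), moving=True),   # sweep up the forearm
    }

    def play_word(phonemes: Sequence[str], drive_tactors: Callable[[Pattern], None]) -> None:
        """Play a word as a sequence of tactor patterns on the cuff."""
        for p in phonemes:
            drive_tactors(PHONEME_PATTERNS[p])

    # e.g. play_word(["K", "AH", "T"], drive_tactors)  # the word "cut"
    ```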

    “We used anything that can help you establish the mapping and recognize and memorize it,” Tan says. “This is based on a better understanding of how to transmit information through the sense of touch in a more intuitive and effective manner.”

    Twelve subjects learned haptic symbols through the phoneme-based method at a pace of 100 words in 100 minutes, while 12 others learned using a word-based system that mapped whole words to the haptic signals on their arm.

    Research results show the phonemes worked better than a word-based approach by providing a more consistent path for user learning in a shorter period of time. Performance levels varied greatly among the test subjects for each method, but with the phoneme-based approach at least half could perform at 80 percent accuracy while two subjects reached 90 percent accuracy.

    Learning is key

    Tan says using phonemes was more efficient than using letters, noting that a word contains fewer phonemes than letters.

    She says the project is motivated by the idea that there are many ways to encode speech into tactile sensations; the goal is to demonstrate one way that actually works.


    “For this research, the learning progress is one of the key things,” Tan says. “With this, not only do we have a system that works, but we’re able to train people within hours rather than months or even years.”

    For the study, the researchers developed a concentrated, intense training regimen where the test subjects worked for about 10 minutes a day. The researchers then tested the participants on their progress.

    “It is more efficient than if they sit here for three hours to study,” Tan says. “You can’t keep good concentration for that long.”

    “People who are hearing-impaired may be motivated to spend additional time training themselves, but the general public probably doesn’t have the patience,” she adds.

    The researchers presented their work at the EuroHaptics 2018 conference in Pisa, Italy. Facebook Inc. funded the research.

    Source: Purdue University