
    ‘BrushLens’ smartphone case boosts touchscreen accessibility

    BrushLens helped people with visual impairments locate items on a touch screen menu in study trials. (Credit: Chen Liang)

    A new smartphone case could soon help people with visual impairments, tremors, and spasms use touchscreens independently.

    The case, dubbed BrushLens, could let users perceive, locate, and tap buttons and keys on the touchscreen menus now ubiquitous in restaurant kiosks, ATMs, and other public terminals.

    “So many technologies around us require some assumptions about users’ abilities, but seemingly intuitive interactions can actually be challenging for people,” says study first author Chen Liang, a doctoral student in computer science and engineering at the University of Michigan.

    “People have to be able to operate these inaccessible touchscreens in the world. Our goal is to make that technology accessible to everyone,” Liang says.

    Liang works in the lab of Anhong Guo, assistant professor of computer science and engineering. Guo led the development of BrushLens with Alanson Sample, an associate professor in the same department.

    Users can comb through a touchscreen interface by holding a phone fitted with BrushLens against the touchscreen and dragging the phone across it. The phone's camera sees what's on the screen, and the phone's built-in screen reader reads the options aloud. Users indicate their menu choice through the screen reader or an enlarged, easy-to-tap button in the BrushLens app.

    “I could actually see myself accomplishing something that I otherwise thought impossible.”

    When given a target, BrushLens divides the screen into a grid, then guides the user's hand toward the section of the screen containing their menu choice by announcing the coordinates of both the target and the device. Once those coordinates overlap, pushbuttons or autoclickers on the underside of the phone case (depending on the model) tap the screen for the user.
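The guidance loop described above can be sketched in a few lines. This is an illustrative mock-up, not BrushLens's actual implementation: the grid size, function names, and spoken phrasing are all assumptions.

```typescript
// Hypothetical sketch of grid-based guidance: map both the target
// button and the device's current position to cells on a coarse grid,
// announce both cells, and fire the clicker once they overlap.
// (Grid size and names are illustrative, not BrushLens's real design.)
type Point = { x: number; y: number };

// Map a screen position to a grid cell, e.g. on a 4x4 grid.
function toCell(p: Point, screenW: number, screenH: number, cols = 4, rows = 4): Point {
  return {
    x: Math.min(cols - 1, Math.floor((p.x / screenW) * cols)),
    y: Math.min(rows - 1, Math.floor((p.y / screenH) * rows)),
  };
}

// Announce both cells; signal the pushbutton/autoclicker when they match.
function guide(target: Point, device: Point, screenW: number, screenH: number): string {
  const t = toCell(target, screenW, screenH);
  const d = toCell(device, screenW, screenH);
  if (t.x === d.x && t.y === d.y) return "tap"; // actuate the clicker
  return `target at column ${t.x + 1}, row ${t.y + 1}; you are at column ${d.x + 1}, row ${d.y + 1}`;
}
```

In this sketch, the user only has to move the phone until the two announced cells coincide; the hardware performs the actual tap, which is what removes the need for precise pointing.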

    “The user doesn’t have to precisely locate where the button is and perform the touch gesture,” Liang says.

    Ten study participants, six with visual impairments and four with tremors or spasms, tested the hardware and app.

    “As a blind person, touchscreens are pretty much inaccessible to me unless I have some help or I can plug headphones into the kiosk,” says study participant Sam Rau. “Somebody else has to order for you, or they have to help you out with it. I don’t want to be in a situation where I always have to rely on the kindness of others.”

    It took some time for Rau to figure BrushLens out, but once he became familiar with the device, he was excited by the tool’s potential.

    “I thought about myself going into a Panera Bread and being able to order from the kiosk,” Rau says. “I could actually see myself accomplishing something that I otherwise thought impossible.”

    Likewise, BrushLens worked as intended for users whose tremors or spasms cause them to make unwanted selections on touchscreens. For one participant with cerebral palsy, BrushLens improved their accuracy by nearly 74%.

    The inventors of BrushLens recently applied for a patent with the help of Innovation Partnerships, the University of Michigan's central hub for research commercialization. The team hopes to bring the product to users as an affordable phone accessory.

    “The parts that we used are relatively affordable. Each clicker costs only $1,” Liang says. “The whole device is definitely under $50, and that’s a conservative estimate.”

    The team plans to further streamline their design so that it easily fits in a pocket. Offloading the battery and processing to the phone, for example, could make the design cheaper and less bulky.

    “It doesn’t have to be much more complex than a TV remote,” says coauthor Yasha Iravantchi, a doctoral student in computer science and engineering.

    The companion app could also be improved by allowing users to directly interface with it via voice commands, Liang says.

    Participants were enrolled in the trial study with the help of the Disability Network, the University of Michigan Council for Disability Concerns, and the James Weiland research group in the U-M biomedical engineering department. A Google Research Scholar Award funded the work.

    Liang will demo BrushLens at the Association for Computing Machinery Symposium on User Interface Software and Technology in San Francisco.

    Source: Derek Smith for University of Michigan


    VoxLens makes interactive data more accessible for screen readers

    "Right now, screen-reader users either get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death," says lead author Ather Sharif. (Credit: Elizabeth Woolner/Unsplash)

    VoxLens is a JavaScript plugin that, with one additional line of code, allows people who use screen readers to interact with visualizations.

    Interactive visualizations have changed the way we understand our lives. For example, they can showcase the number of coronavirus infections in each state.

    But these graphics often are not accessible to people who use screen readers, software programs that scan the contents of a computer screen and make the contents available via a synthesized voice or Braille.

    “This work is part of a much larger agenda for us—removing bias in design.”

    Millions of Americans use screen readers for a variety of reasons, including complete or partial blindness, learning disabilities, or motion sensitivity.

    VoxLens users can gain a high-level summary of the information described in a graph, listen to a graph translated into sound, or use voice-activated commands to ask specific questions about the data, such as the mean or the minimum value, its creators report.
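The voice-activated queries above amount to computing summary statistics over the charted values and speaking the result. A minimal sketch of that idea follows; the function names and response phrasing are assumptions, not the plugin's real API.

```typescript
// Illustrative sketch of answering voice queries about a visualization
// with summary statistics, in the spirit of VoxLens's query mode.
function mean(values: number[]): number {
  return values.reduce((a, b) => a + b, 0) / values.length;
}

// Turn a recognized voice command into a spoken-style answer string.
function answerQuery(query: "mean" | "minimum" | "maximum", values: number[]): string {
  switch (query) {
    case "mean": return `The average value is ${mean(values)}.`;
    case "minimum": return `The minimum value is ${Math.min(...values)}.`;
    case "maximum": return `The maximum value is ${Math.max(...values)}.`;
  }
}
```

The point of this design is that the user pulls out exactly the statistic they asked for, rather than listening to every data point in sequence.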

    “If I’m looking at a graph, I can pull out whatever information I am interested in, maybe it’s the overall trend or maybe it’s the maximum,” says lead author Ather Sharif, a doctoral student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington.

    “Right now, screen-reader users either get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen-reader users a platform where they can extract as much or as little information as they want.”

    “In the field of accessibility, it’s really important to follow the principle of ‘nothing about us without us.’”

    Screen readers can inform users about the text on a screen because it’s what researchers call “one-dimensional information.”

    “There is a start and an end of a sentence and everything else comes in between,” says co-senior author Jacob O. Wobbrock, professor in the Information School. “But as soon as you move things into two dimensional spaces, such as visualizations, there’s no clear start and finish. It’s just not structured in the same way, which means there’s no obvious entry point or sequencing for screen readers.”

    To begin the project, the team worked with five screen-reader users with partial or complete blindness to figure out how a potential tool could work.

    “In the field of accessibility, it’s really important to follow the principle of ‘nothing about us without us,’” Sharif says. “We’re not going to build something and then see how it works. We’re going to build it taking users’ feedback into account. We want to build what they need.”

    To implement VoxLens, visualization designers only need to add a single line of code.

    “We didn’t want people to jump from one visualization to another and experience inconsistent information,” Sharif says. “We made VoxLens a public library, which means that you’re going to hear the same kind of summary for all visualizations. Designers can just add that one line of code and then we do the rest.”
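The single-call integration pattern Sharif describes can be illustrated as follows. This is a hypothetical sketch of the pattern, not VoxLens's actual API: the designer hands the plugin a chart and its data, and the plugin generates the consistent spoken summary on their behalf.

```typescript
// Hypothetical sketch of a one-line accessibility integration:
// the plugin computes a uniform summary for any chart it is given,
// so every visualization sounds the same to a screen-reader user.
// (Names below are illustrative, not VoxLens's real interface.)
type Chart = { title: string; data: number[] };

function attachAccessibleSummary(chart: Chart): string {
  const lo = Math.min(...chart.data);
  const hi = Math.max(...chart.data);
  // In a real plugin this string would be exposed to the screen
  // reader (e.g. via an aria-label); here we simply return it.
  return `${chart.title}: ${chart.data.length} points, ranging from ${lo} to ${hi}.`;
}

// The designer's side of the integration is then a single call:
// attachAccessibleSummary({ title: "Cases by state", data: [...] });
```

Because the summary format lives in the shared library rather than in each chart, every visualization reads out the same way, which is exactly the consistency Sharif describes.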

    The researchers evaluated VoxLens by recruiting 22 screen-reader users who were either completely or partially blind. Participants learned how to use VoxLens and then completed nine tasks, each of which involved answering questions about a visualization.

    Compared to participants from a previous study who did not have access to this tool, VoxLens users completed the tasks with 122% higher accuracy and 36% less interaction time.

    “We want people to interact with a graph as much as they want, but we also don’t want them to spend an hour trying to find what the maximum is,” Sharif says. “In our study, interaction time refers to how long it takes to extract information, and that’s why reducing it is a good thing.”

    The team also interviewed six participants about their experiences.

    “We wanted to make sure that these accuracy and interaction time numbers we saw were reflected in how the participants were feeling about VoxLens,” Sharif says. “We got really positive feedback. Someone told us they’ve been trying to access visualizations for the past 12 years and this was the first time they were able to do so easily.”

    Right now, VoxLens only works for visualizations created using JavaScript libraries, such as D3, Chart.js, or Google Sheets, but the team is working on expanding to other popular visualization platforms. The researchers also acknowledge that the voice-recognition system can be frustrating to use.

    “This work is part of a much larger agenda for us—removing bias in design,” says co-senior author Katharina Reinecke, associate professor in the Allen School. “When we build technology, we tend to think of people who are like us and who have the same abilities as we do.

    “For example, D3 has really revolutionized access to visualizations online and improved how people can understand information. But there are values ingrained in it and people are left out. It’s really important that we start thinking more about how to make technology useful for everybody.”

    The team presented their project at CHI 2022 in New Orleans.

    The Mani Charitable Foundation, the University of Washington Center for an Informed Public, and the University of Washington Center for Research and Education on Accessible Technology and Experiences funded the work.

    Source: University of Washington