
Wave hand. Turn any surface into a touchscreen

CARNEGIE MELLON (US) — New technology makes it possible to create touch-based interfaces almost at will, with just the swipe of your hand.

The system goes beyond previous work that allowed a depth camera system such as Kinect to be combined with a projector to turn almost any surface into a touchscreen.

The WorldKit system enables someone to rub the arm of a sofa to “paint” a remote control for the TV or swipe a hand across an office door to post a calendar from which subsequent users can “pull down” an extended version. These ad hoc interfaces can be moved, modified, or deleted with similar gestures, making them highly personalized.
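To make that interaction model concrete, here is a minimal, hypothetical sketch of the "paint a widget, then touch it" flow in Python. The class and function names (PaintedRegion, Button, handle_touch) are illustrative only and are not WorldKit's actual API; a real system would also project the widget's graphics and do a proper surface-plane hit test rather than the naive distance check used here.

```python
# Hypothetical sketch of a painted-widget workflow; names are not WorldKit's real API.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class PaintedRegion:
    """Surface points swept out by the user's hand, as seen by the depth camera."""
    points: List[Point3D]

@dataclass
class Button:
    """A projected control bound to a painted region."""
    region: PaintedRegion
    label: str
    on_touch: Callable[[], None]

    def handle_touch(self, touch: Point3D, radius: float = 0.05) -> None:
        # Naive hit test: the touch counts if it lands within `radius` metres of any
        # painted point (a real system would test against the painted surface outline).
        for px, py, pz in self.region.points:
            if ((touch[0] - px) ** 2 + (touch[1] - py) ** 2 + (touch[2] - pz) ** 2) ** 0.5 <= radius:
                self.on_touch()
                return

# Usage: "paint" a remote-control button on the sofa arm, then touch it.
sofa_arm = PaintedRegion(points=[(0.00, 0.0, 1.2), (0.05, 0.0, 1.2), (0.10, 0.0, 1.2)])
volume_up = Button(sofa_arm, "Vol +", on_touch=lambda: print("TV volume up"))
volume_up.handle_touch((0.04, 0.01, 1.21))   # prints "TV volume up"
```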

An interactive “light bulb” could be screwed into an ordinary light fixture and pointed or moved to wherever an interface is needed. (Credit: Chris Harrison)

Researchers at Carnegie Mellon’s Human-Computer Interaction Institute (HCII) used a ceiling-mounted camera and projector to record room geometries, sense hand gestures, and project images on desired surfaces.

But Robert Xiao, an HCII doctoral student, says WorldKit doesn’t require such an elaborate installation. “Depth sensors are getting better and projectors just keep getting smaller. We envision an interactive ‘light bulb’—a miniaturized device that could be screwed into an ordinary light fixture and pointed or moved to wherever an interface is needed.”

The system doesn’t require prior calibration, automatically adjusting its sensing and image projection to the orientation of the chosen surface. Users can summon switches, message boards, indicator lights, and a variety of other interface designs from a menu. Ultimately, the WorldKit team anticipates that users will be able to custom design interfaces with gestures.
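One common way a system could adapt its projection to an arbitrary surface, offered here only as a hedged sketch and not as WorldKit's published method, is to fit a plane to the depth points covered by the user's swipe and use the plane's normal to orient the projected widget.

```python
# Generic least-squares plane fit to depth points on a painted surface.
# This illustrates the idea of orientation-aware projection; it is not WorldKit's pipeline.
import numpy as np

def fit_surface_plane(points: np.ndarray):
    """points: (N, 3) array of depth-camera points on the painted surface.
    Returns (centroid, unit normal) of the best-fit plane."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the direction
    # of least variance in the point cloud, i.e. the surface normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Example: points roughly on a desktop (z ~ 0.75 m) give a normal near (0, 0, 1).
pts = np.array([[0.0, 0.0, 0.75], [0.3, 0.0, 0.76], [0.0, 0.2, 0.74], [0.3, 0.2, 0.75]])
centre, n = fit_surface_plane(pts)
print(centre, n)
```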

Xiao developed WorldKit with Scott Hudson, an HCII professor, and Chris Harrison, a PhD student. They will present their findings April 30 at CHI 2013, the Conference on Human Factors in Computing Systems in Paris.

“People have talked about creating smart environments, where sensors, displays, and computers are interwoven,” says Harrison, who will join the HCII faculty this summer.

“But usually, that doesn’t amount to much besides mounting a camera up on the ceiling. The room may be smart, but it has no outlet for that smartness. With WorldKit, we say forget touchscreens and go straight to projectors, which can make the room truly interactive.”

Though WorldKit now focuses on interacting with surfaces, the researchers anticipate future work may enable users to interact with the system in free space. Likewise, higher resolution depth cameras may someday enable the system to sense detailed finger gestures. In addition to gestures, the system also could be designed to respond to voice commands.

“We’re only just getting to the point where we’re considering the larger questions,” Harrison says, noting that a multitude of applications in homes, offices, hospitals, nursing homes, and schools have yet to be explored.

This work was sponsored in part by a Qualcomm Innovation Fellowship, a Microsoft PhD Fellowship, and grants from the Heinz College Center for the Future of Work, the Natural Sciences and Engineering Research Council of Canada, and the National Science Foundation.

Source: Carnegie Mellon University


You are free to share this article under the Creative Commons Attribution-NoDerivs 3.0 Unported license.

2 Comments

  1. Brandon H

    I see this concept being extremely useful, but it would be much more marketable and successful in deployment with a few minor perspective changes. For example: technology evolves through the changes of individuals, not environments, companies, etc. Proof of this is the company Citrix. One example of their work is taking an outdated and archaic setup/program/GUI and making it usable in today’s workplaces. The NYS DMV runs Windows 7 with a Citrix engine to emulate a DOS program written decades ago. Why? Large forces (major corporations, governmental organizations, manufacturing companies) do not want to retrain their workforces. So instead of designing this revolutionary equipment to be installed in a light fixture (or similar), which implies someone other than an individual being the primary force of this change, design it to be portable and made for individual use while still networked. Why have that projector installed in a light fixture when you could take it with you? Wear it, even? Integrate it with a Bluetooth headset, for example, and there’s no need for tablets/iPads.
    If everyone at Google began coming to work with their Google Glass(es) retrofitted with this, the company would adapt out of the simple need to improve efficiency alone. Taking the same system, storing it on a cloud-based platform with different settings for home/work/car/etc., and separating the users and access levels (similar to security clearance levels or, more simply, making a Facebook post private/public/only viewable to______) would cover all data needs. Though it may not be compact enough yet, may draw too much power, or may not be as graphically advanced as some devices (3D TVs/HDTVs/high-end gaming displays or computational power), I personally believe this is both attainable and the direction that could make or break the success of this wonderful concept.

  2. Brandon H

    Apologies…. “…..useful……”
