Real-time action in a virtual world


Unlike the 3-D polygon models in video games or digitally animated films, this virtual environment records real-time actions. Below, the remote capabilities of the system allow multiple sites to interact in the same virtual environment—for instance, a coach monitoring practice from a remote location or a player being able to see himself from multiple angles in order to better hone his skills.

U. ILLINOIS (US)—A new digital system allows people in different locations to interact in real time in a shared virtual space.

The “tele-immersive environment” captures, transmits, and displays three-dimensional movement in real time, says project leader Peter Bajcsy, adjunct assistant professor of electrical and computer engineering and computer science at the University of Illinois.



“It’s a virtual environment that is the product of real-time imaging, not the result of programming 3-D CAD models,” Bajcsy says.

“Nobody has to be supplied with equipment to enable imaging and 3-D reconstruction. The only thing you might have is some kind of controller, like a Wii controller, so you can change the view angle of the data you see.”

Clusters of visible and thermal spectrum digital cameras and large LCD displays surround a defined space.

Information is extracted from the digital images, rendered in 3-D in a virtual space, and transmitted via the Internet to the separate geographic sites.

Participants at each site can see their own digital clones and their counterparts at the other sites on the LCD screens, and can move their bodies in response to the images on the screen.

For the past two years, players from Illinois’ wheelchair basketball team have been testing and providing feedback on the tele-immersive system, with coach Michael Frogley and his students working on basketball moves and wheelchair maneuvers.

“I really have to praise the wheelchair basketball players,” says Bajcsy. “They are just really fun to work with. They are always interested in trying the new technology, although the technology might be frustrating.”

Bajcsy says the goal is to make the system portable and affordable—less than $50,000 is the target. Systems currently on the market cost at least 10 times that much and focus primarily on head movements.

“If we could build a system so that it works robustly and can be deployed in the gym where the basketball players practice every day, then it would have tremendous value for them,” says Bajcsy.

The research team is also working on a networking and data transmission challenge. A single camera cluster generates about 460 megabytes of data, just under half a gigabyte, for every second of real-time footage.

But only one gigabyte per second of total bandwidth is available. This poses a problem because a system that employs 10-20 cameras would easily surpass that bandwidth, says Bajcsy.
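The arithmetic behind that constraint is easy to check. A rough back-of-the-envelope sketch, using only the figures quoted above and ignoring compression (which would change the picture), might look like this:

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.
# The 460 MB/s and ~1 GB/s values come from the article; the rest is illustrative.

PER_CLUSTER_MBPS = 460     # megabytes per second from one camera cluster
LINK_CAPACITY_MBPS = 1000  # roughly one gigabyte per second of total bandwidth

def clusters_supported(link_mbps=LINK_CAPACITY_MBPS, per_cluster=PER_CLUSTER_MBPS):
    """How many uncompressed camera clusters the link can carry at once."""
    return link_mbps // per_cluster

print(clusters_supported())  # 2 clusters; a 10-20 camera system far exceeds the link
```

At these rates the link saturates after just two clusters, which is why reducing or compressing the data stream is part of the research problem.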

Before deploying the system, users would need to determine where to place clusters of cameras.

The team has developed a simulation framework that allows users to input their budget, the type of activity they want to learn, the dimensions of the space they want to use, and information about lighting in that space. The framework will determine the number of cameras needed and where they should be positioned.
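As a very rough illustration of what such a planning step might involve, here is a hypothetical sketch in Python. The function name, the per-cluster cost, and the fixed-coverage heuristic are all assumptions for illustration, not the team's actual framework, which also weighs activity type and lighting:

```python
import math

def plan_cameras(budget_usd, room_w_m, room_l_m,
                 cost_per_cluster_usd=5000, coverage_per_cluster_m2=9.0):
    """Hypothetical planner: estimate clusters needed for a room, check the budget.

    Assumes each cluster covers a fixed patch of floor; the real framework
    described in the article also considers activity type and lighting,
    which this sketch omits."""
    clusters_needed = math.ceil((room_w_m * room_l_m) / coverage_per_cluster_m2)
    return {
        "clusters_needed": clusters_needed,
        "within_budget": clusters_needed * cost_per_cluster_usd <= budget_usd,
    }

print(plan_cameras(budget_usd=50000, room_w_m=6, room_l_m=6))
```

For a 6-by-6-meter practice space and the article's $50,000 target, this toy heuristic calls for four clusters and stays within budget.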

The team has also been working on adding a replay feature that would allow a user to replay a previous session while in the virtual space. “You can actually exercise next to yourself and say, ‘Oh, I see. I was making that mistake,'” explains Bajcsy.

“As we demonstrate that the system is really working,” he says, “I’m looking for other communities who can take advantage of the technology.”

Engineers from the University of California, Berkeley contributed to the research.
