So you didn't watch the video then.
*Facepalm*
Here is what he said right off the top:
"What we're doing is using some of the new depth sensing camera technologies to extend the sensing so that it encompasses the entire room. What that allows us to do in LightSpace is all the usual kind of surface interactions on tabletops but then we can also fill in the void, the space between these various surfaces so that we can connect surfaces. So that we can move objects from one surface to another just by tracking the person and understanding the 3D shape of the person and where each surface is placed in the environment."
Right. It uses IR and depth to tell where things are in space and manipulate them in the environment.
LightSpace itself is just a tech demo installation using projectors, but that is simply a means of displaying the data. What they are really showing is how they are using the depth cameras to move that data between surfaces. They could be using any kind of display device; what they are describing would still work.
I'm curious to know what else they would use to display the data. They could not use just any display device as the OPERATING device. The concept is called "LIGHTSPACE". It uses 3D cameras and other feedback to define a spatial environment and projects light onto it.
It cannot move a video from the desk to, say, a laptop unless the laptop is within the projector's line of sight. The main idea behind this tech is the projector. There is no getting around that.
You didn't even quote the first part of the written summary:
"LightSpace combines elements of surface computing and augmented reality research to create a highly interactive space where any surface, and even the space between surfaces, is fully interactive. Our concept transforms the ideas of surface computing into the new realm of spatial computing."
Surface computing is what exists now: IR and other feedback let people turn a wall into a keyboard and press intangible buttons on it that trigger operations. Spatial computing simply does this in the space between objects. However, this isn't information floating around. Look at the part where he selects from a menu. His hand lights up a certain color, and as he lowers and raises it, the computer computes its position and changes the item selection, using the projector to display the change.
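That menu-selection step can be sketched in a few lines. This is a hypothetical illustration, not the actual LightSpace code: all function names and the height range are invented. The depth camera reports the hand's 3D position; the vertical coordinate is mapped to a menu index, and the projector would then redraw the highlight.

```python
def select_menu_item(hand_y_m, menu_items, y_min_m=0.8, y_max_m=1.6):
    """Map a hand height (in meters) to a menu item.

    Illustrative sketch only: the range 0.8-1.6 m and the mapping
    direction (higher hand -> earlier item) are assumptions, not
    details from the actual LightSpace implementation.
    """
    # Clamp the measured hand height into the active range
    y = max(y_min_m, min(y_max_m, hand_y_m))
    # Normalize so the top of the range maps to 0.0, the bottom to 1.0
    t = (y_max_m - y) / (y_max_m - y_min_m)
    # Convert the normalized value to a valid list index
    index = min(int(t * len(menu_items)), len(menu_items) - 1)
    return menu_items[index]

items = ["Open", "Copy", "Move", "Delete"]
print(select_menu_item(1.55, items))  # hand held high -> "Open"
print(select_menu_item(0.85, items))  # hand lowered -> "Delete"
```

Each depth frame re-runs this mapping, which is why the highlighted item tracks the hand continuously as it moves.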
If you think the point of the installation was a demonstration about the use of projectors in a work environment then you really didn't understand it at all.
It was, daroamer. It was all about the use of projectors and their interaction with 3D cameras and such to create an interactive environment IN SPACE. IT SAYS SO IN THE TEXT!
I'm going to post it again since you don't seem to be reading it:
"Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may “pick up” the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and “drop” the object onto the wall by touching it with their other hand."
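The key phrase there is "calibrated to 3D real world coordinates." Here is a minimal sketch of what that buys you, assuming a standard pinhole camera model for the projector (all calibration values below are invented for illustration): once you know the projector's intrinsics and pose, any 3D point the depth camera reports can be mapped to the exact projector pixel that will light it up.

```python
import numpy as np

def world_to_projector_pixel(p_world, K, R, t):
    """Project a 3D world point (meters) to projector pixel coordinates.

    Sketch of the standard pinhole model, not the actual LightSpace
    calibration code: K is the projector intrinsic matrix, (R, t) its
    pose in the world frame.
    """
    p_proj = R @ p_world + t   # world frame -> projector frame
    uvw = K @ p_proj           # apply projector intrinsics
    return uvw[:2] / uvw[2]    # perspective divide -> (u, v) in pixels

# Invented calibration: projector at the origin, looking down +Z.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

# A point 2 m in front of the projector, slightly right and above center.
print(world_to_projector_pixel(np.array([0.2, -0.1, 2.0]), K, R, t))
# -> [740. 310.]
```

Because every camera and projector shares the same world frame, a tracked hand or table surface from one depth camera can be handed to any projector that can see it, which is what lets graphics follow the user between surfaces.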
These kinds of science experiments/concepts happen all the time at Microsoft Research, it doesn't mean that they are models for upcoming products and in fact many never get used in products at all. It's like a think tank. Sometimes those things are used in products many years later, such as some of the technologies in Kinect.
And now you're defending that it may never get used?