theprof00 said:
Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may “pick up” the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and “drop” the object onto the wall by touching it with their other hand. It's NOT data that is moving. Data transfer would require wireless network transfer that we already have. Not impressive.
So you didn't watch the video then.
Here is what he said right off the top:
"What we're doing is using some of the new depth sensing camera technologies to extend the sensing so that it encompasses the entire room. What that allows us to do in LightSpace is all the usual kind of surface interactions on tabletops but then we can also fill in the void, the space between these various surfaces so that we can connect surfaces. So that we can move objects from one surface to another just by tracking the person and understanding the 3D shape of the person and where each surface is placed in the environment."
LightSpace itself is just a tech demo installation using projectors, but projection is simply a means of displaying the data. What they are really showing is how they use the depth cameras to move that data between surfaces. They could be using any kind of display device; what they are describing would still work just the same.
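To make that concrete, here is a minimal sketch of the idea the researchers describe: the depth cameras and every surface are calibrated into one shared 3D world coordinate system, so tracked hand positions can be tested against each surface, and an object "transfers" when the user touches the object's surface and a destination surface at the same time. All names, matrices, and distances below are illustrative assumptions, not actual LightSpace code.

```python
import numpy as np

def to_world(points_cam, extrinsic):
    """Map Nx3 camera-space points into world space with a 4x4 calibration matrix.
    This is the calibration step that puts all cameras in one coordinate system."""
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (extrinsic @ homo.T).T[:, :3]

def near_surface(point, plane_point, plane_normal, tol=0.05):
    """True if a world-space point lies within `tol` metres of a surface plane."""
    return abs(np.dot(point - plane_point, plane_normal)) < tol

# Two hypothetical surfaces in the room, each as (point on plane, unit normal):
# a horizontal tabletop at z = 0.7 m and a vertical wall display at y = 3.0 m.
surfaces = {
    "table": (np.array([0.0, 0.0, 0.7]), np.array([0.0, 0.0, 1.0])),
    "wall":  (np.array([0.0, 3.0, 1.5]), np.array([0.0, 1.0, 0.0])),
}

def transfer_target(hand_a, hand_b, source):
    """If one tracked hand touches the source surface while the other touches
    a different surface, return that other surface as the transfer target."""
    touched = {name for name, (p, n) in surfaces.items()
               for hand in (hand_a, hand_b) if near_surface(hand, p, n)}
    if source in touched and len(touched) > 1:
        return next(s for s in touched if s != source)
    return None
```

With one hand resting on the table and the other touching the wall, `transfer_target(hand_on_table, hand_on_wall, "table")` would return `"wall"`, which is exactly the "touch the object and the destination display simultaneously" interaction from the summary. The point is that only room-scale tracking is needed; the displays themselves could be anything.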
You didn't even quote the first part of the written summary:
"LightSpace combines elements of surface computing and augmented reality research to create a highly interactive space where any surface, and even the space between surfaces, is fully interactive. Our concept transforms the ideas of surface computing into the new realm of spatial computing."
If you think the point of the installation was to demonstrate the use of projectors in a work environment, then you really didn't understand it at all.
These kinds of science experiments and concepts happen all the time at Microsoft Research; it doesn't mean they are models for upcoming products, and in fact many never get used in products at all. It's like a think tank. Sometimes those things show up in products many years later, such as some of the technologies in Kinect.