theprof00 said:
daroamer said:

The tech wasn't about the projectors; it was about using the depth cameras to create natural human user interfaces.

The way the information was displayed within the room isn't what's relevant.  That table he was using to move the pictures around could just as easily have been a regular monitor; the point was that you didn't need the monitor to be touch-sensitive, à la Microsoft Surface, because the interaction was being controlled by the depth cameras.  The point is ANY surface can become a touch user interface of sorts.

Similarly, when he swipes the video onto his hand, it could be swiped onto anything he is holding, like a phone... what is generating the ball of light on his hand is IRRELEVANT.  It's the motions being interpreted by the depth cameras that cause the data to move from surface to surface (table, body, screen) that is really the important thing here.

Go watch the video again.

Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may “pick up” the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and “drop” the object onto the wall by touching it with their other hand.

It's NOT data that is moving. Data transfer would just require wireless networking, which we already have. Not impressive.

So you didn't watch the video then.

Here is what he said right off the top:

"What we're doing is using some of the new depth sensing camera technologies to extend the sensing so that it encompasses the entire room.  What that allows us to do in LightSpace is all the usual kind of surface interactions on tabletops but then we can also fill in the void, the space between these various surfaces so that we can connect surfaces.  So that we can move objects from one surface to another just by tracking the person and understanding the 3D shape of the person and where each surface is placed in the environment."

LightSpace itself is just a tech demo installation using projectors, but that is simply a means of displaying the data.  What they are really showing is how they are using the depth cameras to move that data between surfaces.  They could be using any kind of display device; what they are describing would still work exactly the same way.
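
To make the distinction concrete, here's a rough sketch in Python of the kind of logic the summary is describing.  The names, numbers, and calibration values are all made up by me for illustration (this is obviously not Microsoft's actual code), but it shows why no "data transfer" is involved: the depth cameras are calibrated into one shared set of room coordinates, a "touch" is just a tracked body point landing within a few centimetres of a known surface plane, and moving an object to another display is nothing more than the system changing which surface it renders the object on.

```python
import numpy as np

# Hypothetical depth-camera intrinsics (fx, fy, cx, cy) and camera-to-world pose.
# LightSpace calibrates every camera and projector to shared 3D room coordinates;
# an identity pose is used here purely to keep the example simple.
INTRINSICS = (580.0, 580.0, 320.0, 240.0)
CAM_TO_WORLD = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])

def depth_pixel_to_world(u, v, depth_m):
    """Back-project one depth pixel into shared 3D room coordinates."""
    fx, fy, cx, cy = INTRINSICS
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    cam_point = np.array([x, y, depth_m, 1.0])
    return CAM_TO_WORLD @ cam_point

# Each "display" is just a named plane in room coordinates; no touch hardware.
SURFACES = {
    "table": {"origin": np.array([0.0, 0.0, 1.0]), "normal": np.array([0.0, 0.0, 1.0])},
    "wall":  {"origin": np.array([0.0, 2.0, 1.5]), "normal": np.array([0.0, 1.0, 0.0])},
}

def touched_surface(world_point, tolerance_m=0.03):
    """Return which surface (if any) a tracked body point is within ~3 cm of."""
    for name, s in SURFACES.items():
        if abs(np.dot(world_point - s["origin"], s["normal"])) < tolerance_m:
            return name
    return None

def update_object_location(obj, hand_points):
    """If the user touches the object's surface and another surface at once,
    re-home the object there.  Nothing is transmitted anywhere; the system
    just changes where the renderer draws it."""
    touched = {touched_surface(p) for p in hand_points} - {None}
    if obj["surface"] in touched and len(touched) > 1:
        obj["surface"] = next(s for s in touched if s != obj["surface"])
    return obj

if __name__ == "__main__":
    photo = {"name": "photo.jpg", "surface": "table"}
    # One hand on the table (z ~ 1.0 m), the other on the wall (y ~ 2.0 m).
    hands = [depth_pixel_to_world(300, 220, 1.01), np.array([0.5, 2.01, 1.4])]
    print(update_object_location(photo, hands))   # surface becomes "wall"
```

Swap the projectors for ordinary monitors and the same logic still works, because all the sensing lives in the depth cameras and the shared room calibration, not in the displays.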

You didn't even quote the first part of the written summary:

"LightSpace combines elements of surface computing and augmented reality research to create a highly interactive space where any surface, and even the space between surfaces, is fully interactive. Our concept transforms the ideas of surface computing into the new realm of spatial computing."

If you think the point of the installation was a demonstration about the use of projectors in a work environment then you really didn't understand it at all. 

These kinds of science experiments/concepts happen all the time at Microsoft Research; it doesn't mean they are models for upcoming products, and in fact many never get used in products at all.  It's like a think tank.  Sometimes those things are used in products many years later, such as some of the technologies in Kinect.