youarebadatgames said:


Having a 3d camera is one thing, figuring out useful things to do with that data is something else.  Gesture interpretation and human interaction with the environment is the novel part of the demonstration.  The value in this is it shows any surface or volume can be interactive, and is much more cost effective than the other options.  It requires no gloves or extra accessories and is much more robust than anything that would only use a regular camera.

It is very much about input and natural human/machine interaction.  You can't even define the concept, so of course you don't think it's useful.  But everyone else can see that having a computer track your every move and react to anything that is controllable is useful.  The projector is just a way to define and show users the interface options and provide feedback, it's not really the real advanced part of it.

I'm never going to convince you it's useful because you simply don't understand it.  So far your responses have proven to me that you don't even have a cursory understanding of the field because you can't tell me what the researchers have done that hasn't been done before.  Let's just say that the consensus from the people that matter is this demonstration shows a lot of interesting research principles with lots of possible applications in information manipulation and dynamic man/machine interaction, and you can continue being ignorant all day.

All you keep saying is that I don't understand it even though I've spelled it out numerous times.

It is a combination of hardware and software that allows users to manipulate objects in space, using body recognition and visible light to determine input and provide output. The 3D depth camera reads where things are in space and translates positioning and movement into commands that are programmed into the software. The movement of a hand in any direction is thereby interpreted as a directional command, which can be combined with other commands like 'touching a video in a certain way'/'pick up video' to produce the output command 'move video from the touched area to the directed area'.
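To make that concrete, here is a minimal sketch in Python of the kind of gesture-to-command mapping described above. The event names, gesture labels, and the tiny rule table are all hypothetical placeholders for illustration; they are not LightSpace's actual event model or code.

# Hypothetical sketch of "gesture + target -> command".
from dataclasses import dataclass

@dataclass
class GestureEvent:
    kind: str     # e.g. "touch", "sweep", "drop"
    target: str   # e.g. "video_3", "tabletop", "wall"

def interpret(events):
    """Combine low-level gestures into one higher-level command string."""
    if len(events) >= 2 and events[0].kind == "touch" and events[-1].kind == "drop":
        return "move %s to %s" % (events[0].target, events[-1].target)
    if len(events) == 1 and events[0].kind == "sweep":
        return "pick up %s" % events[0].target
    return None

print(interpret([GestureEvent("touch", "video_3"), GestureEvent("drop", "wall")]))
# -> move video_3 to wall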

In many ways it is like a home computer. You have data being manipulated through input devices such as a mouse or keyboard, and the software turns those electrical signals into strings of commands that combine to form a more advanced output, like a paragraph built from many individual keystrokes.

That is one single part of the technology.

The other half is the display. Without the display, none of it is relevant in any way, just as the projector is irrelevant without the depth camera. They are integral parts of the technology. Without the projector you have no video and no interface, because the interface is made of visible light that the user can see, understand, and interact with. On a much smaller scale it can be done without the projector, but such applications, while potentially useful, fall outside the scope of LightSpace. In the same way, a computer cannot do what I am using it for right now, showing me what I can click on and letting me read your post, if I do not have a display.

Now, a computer can still do computations without a display. I can still input data and receive output, but the product would not be called a home computer anymore. Without the standard display, I cannot receive the full effect of the computer's capability. It could still run simple programs without a display, so the computer is still there, but it is a shadow of what it should be. I can start my computer, open "Run" from memory, type an executable name like notepad, type something out, and finish by hitting Ctrl-S to save the document.

Similarly, a 3D depth camera can receive input on its own, and given a proper setup in a home office I could do other things with it. Say I could turn on the computer and the lights just by walking into the room, with the camera recognizing my shape and movements so that it only performs the process once my unique identity is recognized. I could then have it input a password by performing a complex sequence of movements, like opening a drawer, taking a stapler out, and putting it on the desk.
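As a rough illustration only: a few lines of Python showing how a recognized identity plus a sequence of recognized gestures could be matched against a stored "gesture password". The identity label, gesture names, and home-automation hooks are all invented for the example, not taken from any real system.

# Invented example: react only to a recognized person, and treat a fixed
# gesture sequence as a password.

EXPECTED_PASSWORD = ["open_drawer", "take_stapler", "place_on_desk"]

def turn_on_lights():       print("lights on")            # placeholder hooks
def turn_on_computer():     print("computer on")
def unlock_workstation():   print("workstation unlocked")

def on_person_detected(identity, observed_gestures):
    if identity != "me":
        return                      # ignore anyone who isn't the recognized user
    turn_on_lights()
    turn_on_computer()
    if observed_gestures == EXPECTED_PASSWORD:
        unlock_workstation()

on_person_detected("me", ["open_drawer", "take_stapler", "place_on_desk"])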

That technology is cool. That technology is useful.

However. That is not what lightspace is.

"Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit"

"LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector"

"Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays."

"For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display."

"Or the user may “pick up” the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and “drop” the object onto the wall by touching it with their other hand."

"What we're doing is using some of the new depth sensing camera technologies to extend the sensing so that it encompasses the entire room.  What that allows us to do in LightSpace is all the usual kind of surface interactions on tabletops but then we can also fill in the void, the space between these various surfaces so that we can connect surfaces.  So that we can move objects from one surface to another just by tracking the person and understanding the 3D shape of the person and where each surface is placed in the environment."

"LightSpace combines elements of surface computing and augmented reality research to create a highly interactive space where any surface, and even the space between surfaces, is fully interactive. Our concept transforms the ideas of surface computing into the new realm of spatial computing."

 

Using light (hence "LightSpace") from the projector, they can create an interface in space, and receive interaction input through the depth camera.
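The calibration quoted above ("calibrated to 3D real world coordinates") is what ties the two halves together. As a hedged sketch of the general idea, not their actual calibration code: once a projector has a projection matrix expressed in world coordinates, any 3D point the depth camera reports can be mapped to a projector pixel so graphics land on the right surface.

# Generic pinhole-projection sketch (numpy). The matrix values are arbitrary
# placeholders, not LightSpace's calibration data.
import numpy as np

def world_to_projector_pixel(P, world_pt):
    """P: 3x4 projection matrix (intrinsics @ [R|t]); world_pt: (x, y, z)."""
    homog = np.append(world_pt, 1.0)     # homogeneous coordinates
    u, v, w = P @ homog                  # project into the projector's image plane
    return u / w, v / w                  # perspective divide -> pixel (u, v)

P = np.array([[800.0, 0.0, 320.0, 0.0],
              [0.0, 800.0, 240.0, 0.0],
              [0.0,   0.0,   1.0, 2.0]])
print(world_to_projector_pixel(P, np.array([0.1, -0.2, 1.5])))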

"Recent works have demonstrated using sensing and display technologies to enable interactions directly above the interactive surface [2,10], but these are confined to the physical extent of the display. Virtual and augmented reality techniques can be used to go beyond the confines of the display by putting the user in a fully virtual 3D environment (e.g.,[5]), or a mixture of the real and virtual worlds (e.g., [21]). Unfortunately, to be truly immersive, such approaches typically require cumbersome head mounted displays and worn tracking devices."

"In this paper we introduce LightSpace, an office-sized room instrumented with projectors and recently available depth cameras (Figure 2). LightSpace draws on aspects of interactive displays, augmented reality, and smart rooms. For example, the user may touch to manipulate a virtual object projected on an un-instrumented table, “pick up” the object from the table by moving it with one hand off the table and into the other hand, see the object sitting in their hand as they walk over to an interactive wall display, and place the object on the wall by touching it (Figure 1 a-b).

"In this paper we explore the unique capabilities of depth cameras in combination with projectors to make progress towards a vision in which even the smallest corner of our environment is sensed and functions as a display [25]. With LightSpace we emphasize the following themes:

Surface everywhere: all physical surfaces should be interactive displays (Figure 3).

The room is the computer: not only are physical surfaces interactive, the space between them is active, enabling users to relate to the displays in interesting ways, such as connecting one to another by touching both simultaneously (Figure 1 a-b).

Body as display: graphics may be projected onto the user’s body to enable interactions in mid-air such as holding a virtual object as if it were real (Figure 1 c-d), or making a selection by a menu projected on the hand (Figure 6). Projecting on the body is useful."

http://research.microsoft.com/en-us/um/people/awilson/publications/wilsonuist2010/Wilson UIST 2010 LightSpace.pdf

One of the prime tenets of LightSpace, specifically, is display. 3D depth camera technology is its own technology, like the mouse to the keyboard. Keyboards came first, mice came later. A mouse is not a computer, but it reads input in a way distinct from the keyboard.

A projector is its own technology, like a monitor to a computer. A monitor is not a computer, but it allows for unique interactions with the computer.

Everything about LIGHTSPACE is a COMBINATION of input and output (depth capture for input, projected display and its effects for output), running through a central computing unit.
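Schematically, that combination is just a sense-compute-render loop. The three functions below are stand-ins to show the shape of the argument, not anything from the Microsoft Research implementation.

# Stand-in loop: depth cameras in, computation in the middle, projectors out.

def read_depth_frames():            # input: depth cameras see hands and surfaces
    return {"hands": [], "surfaces": []}

def update_virtual_objects(scene):  # compute: interpret gestures, move virtual objects
    return {"objects_to_draw": []}

def render_to_projectors(state):    # output: projectors draw onto surfaces and bodies
    pass

def main_loop(frames=1):
    for _ in range(frames):         # a real system would loop continuously
        scene = read_depth_frames()
        state = update_virtual_objects(scene)
        render_to_projectors(state)

main_loop()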

Saying LightSpace is the abstract capability of 3D depth input is like saying a computer is a keyboard. 3D depth cameras are their own tech, and groups like Microsoft Research are exploring what can be done IN ASSOCIATION with the input afforded by a 3D depth camera.

Like I've said a thousand times now, depth camera tech is its own concept. LightSpace is not a 3D depth camera; it is a separate thing, much as cheese is not milk but comes from altering it.

3D depth technology, as I've also said a thousand times now, is a cool, interesting technology. But it has been done. It's already out there. There are numerous researchers using it and exploring what it can do. As proof, the scientists who wrote the paper allude to, and thereby contrast themselves with, other researchers who are working on 3D input/output in different ways.

"Fails and Olsen [8] argue that many computing actions can be controlled by observing specific user interactions within everyday environments. For example, they propose designating edges of the bed as virtual sliders for controlling lights and providing user feedback through projections. Holman and Vertegaal [12] argue similarly for exploring the use of existing objects in the environment as interactive surfaces, in particular noting that many non-flat or flexible surfaces could become compelling user interfaces."

and here they admit that LightSpace is simply someone else's work:

"Underkoffler et al. [25] demonstrated that combining projected graphics with real physical objects can enable interactive tabletop experiences such as simulating the casting of shadows by a physical architectural model. This prototype is embedded within a presentation of the larger (unrealized) vision of the “Luminous Room”, where all room surfaces are transformed into interactive displays by multiple “I/O Bulbs”: devices that can simultaneously sense and project. In many ways, LightSpace is the most complete implementation of the Luminous Room concept to date."

25. Underkoffler, J., Ullmer, B., and Ishii, H. (1999). Emancipated pixels: Real-world graphics in the luminous room. In Proc. of ACM SIGGRAPH '99, 385–392.