theprof00 said:

And now you're defending that it may never get used?


Alright, this just proves to me that you don't really have any clue what this is all about. Microsoft Research is NOT a product R&D arm of MS. They create concepts and do science. SOME of it might get used in product development; much of it won't.

From the Wiki:

Microsoft Research (MSR) is a division of Microsoft created in 1991 for researching various computer science topics and issues.

One of the stated goals of Microsoft Research is to "support long-term computer science research that is not bound by product cycles."

And from an interview with one of the creators of LightSpace:

"At the same time, I think we are really fortunate that not every single thing we work on, not every single demo, is directly targeted at a particular product. That actually gives us the freedom to explore, and create a couple of prototypes and see what works, what doesn't, where we fail, and where we succeed. A great part of our work is the fact that we're academically minded as well. We create these prototypes and demos, test them with users and try them out, and then adapt some of these things to further projects or see them go into products. If everything had to be transferred, I think it would limit us in a big way of how we do our work and how creative we could be with the ideas that we push."

So yeah, LightSpace itself is a concept that uses projectors to display the data, but the projectors themselves are not the big picture here. The big picture is creating an augmented-reality space where any surface can carry information, and the user interacts with anything in the room using his body, transferring "data" (and I'm not talking about wi-fi or Bluetooth or whatever you thought I was talking about) across different surfaces through a natural body interface. If you actually think about it and look past the superficial construct of the room itself, you'd see that the interface itself is the real-world application here, and it's hardly the "useless technology" you originally called it.
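To make the "carry data in your hand" idea concrete, here is a toy sketch of that interaction as a tiny state machine: touch an item on one surface to pick it up, walk it across the room, touch another surface to drop it. The class, surface names, and discrete touch events are all my own invention for illustration; the real system tracks the user's body in 3D with depth cameras rather than reacting to labeled events like these.

```python
class BodyTransfer:
    """Toy model of LightSpace-style transfer: the user's hand acts as
    a carrier for one virtual object at a time (illustrative only)."""

    def __init__(self):
        self.held = None                 # item currently "in" the user's hand

    def touch(self, surface, item=None):
        """User touches a surface: pick up an item if the hand is empty,
        otherwise drop whatever the hand is carrying onto that surface."""
        if self.held is None and item is not None:
            self.held = item             # hand now carries the item
            return f"picked up {item} from {surface}"
        if self.held is not None:
            dropped, self.held = self.held, None
            return f"dropped {dropped} on {surface}"
        return "nothing happened"

session = BodyTransfer()
print(session.touch("table", item="photo"))   # picked up photo from table
print(session.touch("wall"))                  # dropped photo on wall
```

The point of the sketch is only that the interface logic is surface-agnostic: the table and the wall are interchangeable endpoints, and the body is the transport in between.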

Here is another quote from an interview about LightSpace:

"We do this by combining several depth cameras and several projectors above the user's head, kind of coming from the ceiling. We use them as smart projectors, to make different surfaces in the environment interactive. The table and the wall that you saw in the video, they are nothing more than a piece of foam core and a standard-issue desk. But by having this extra technology we're actually simulating interactivity directly on top of that. It allows the user to literally go and touch them. It behaves almost the same way a Microsoft Surface table would. But rather than simulating interactivity just on the surfaces, we are really, really interested in what kind of capabilities you can enable between the interactive surfaces. How can you use your body, for example, to transition and transfer some of the objects between the surfaces? Pick an object up and literally hold it, look at it and carry it over to somewhere else. Or this concept of a spatial menu. Can you have spatial widgets, can you have a type of interaction being in a particular space and resulting in a particular action?"
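The core trick behind making an ordinary desk behave like a touch surface is simple in principle: a depth camera knows how far away the surface is, so anything that enters a thin slab of space just above it can be treated as a touch. A minimal sketch of that idea, with all distances and thresholds being illustrative assumptions rather than values from the actual system:

```python
import numpy as np

# Assumed calibration: the camera sits a fixed, known distance above the
# surface. Anything 10-40 mm above the surface counts as a touch.
SURFACE_DEPTH_MM = 1500      # distance from camera to the bare surface
TOUCH_BAND_MM = (10, 40)     # heights above the surface that register as touch

def detect_touches(depth_frame: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels whose height above the surface
    falls inside the touch band."""
    height = SURFACE_DEPTH_MM - depth_frame   # height above the surface, in mm
    lo, hi = TOUCH_BAND_MM
    return (height >= lo) & (height <= hi)

# Synthetic depth frame: mostly the bare surface, with a small "fingertip"
# region hovering ~20 mm above it.
frame = np.full((120, 160), SURFACE_DEPTH_MM, dtype=np.int32)
frame[60:65, 80:85] = SURFACE_DEPTH_MM - 20   # fingertip 20 mm above surface

mask = detect_touches(frame)
print(mask.sum())  # 25 pixels register as a touch
```

A real pipeline would cluster the masked pixels into fingertips and track them over frames, but the sketch shows why "a piece of foam core" works fine as a touch surface: the sensing lives in the camera, not in the surface.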

I'm going to drop this now as it's clear we're not on the same page here.