
Forums - Microsoft - Microsoft R&D Burns $65 Million In Three Months On Something Unknown

So you didn't watch the video then.

Facepalm*

Here is what he said right off the top:

"What we're doing is using some of the new depth sensing camera technologies to extend the sensing so that it encompasses the entire room.  What that allows us to do in LightSpace is all the usual kind of surface interactions on tabletops but then we can also fill in the void, the space between these various surfaces so that we can connect surfaces.  So that we can move objects from one surface to another just by tracking the person and understanding the 3D shape of the person and where each surface is placed in the environment."

Right. It uses IR and depth to tell where things are in space and manipulate them in the environment.

LightSpace itself is just a tech demo installation using projectors, but that is simply a means of displaying the data.  What they are really showing is how they use the depth cameras to move that data between surfaces.  They could be using any kind of display device; what they are describing would still work.

I'm curious to know what else they would use to display the data. They could not use just any display device as the OPERATING device. The concept is called "LIGHTSPACE". It uses 3D cameras and other feedback to define a spatial environment and projects light onto it.

It cannot move a video from the desk to, say, a laptop unless the laptop is within the projector's line of sight. The main idea behind this tech is the projector. There is no getting around that.

You didn't even quote the first part of the written summary:

"LightSpace combines elements of surface computing and augmented reality research to create a highly interactive space where any surface, and even the space between surfaces, is fully interactive. Our concept transforms the ideas of surface computing into the new realm of spatial computing."

Surface computing is what exists now where IR and other feedback allows for people to turn a wall into a keyboard and press intangible buttons on it that result in operations. Spatial computing is simply doing this in the space between objects. However, this isn't information floating around. Look at the part where he selects from a menu. His hand lights up a certain color, and as he lowers and raises it, the computer computes his position and changes the item selection, using the projector to display the change.
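To illustrate what I mean (this is just my own rough sketch, not anything from the actual demo; the item names and thresholds are made up), a spatial menu like the one in the video could be as simple as mapping the tracked hand height to an index:

```python
# Hypothetical sketch: map a hand height reported by a depth camera to a
# menu item, the way the LightSpace spatial menu appears to work.
# All item names and measurements below are invented for illustration.

MENU_ITEMS = ["Open", "Move", "Copy", "Delete"]  # example items only
MENU_BOTTOM = 0.9   # metres above the floor where the menu starts
ITEM_HEIGHT = 0.15  # vertical span of each item, in metres

def select_item(hand_height_m):
    """Return the menu item under the hand, or None if outside the menu."""
    offset = hand_height_m - MENU_BOTTOM
    if offset < 0:
        return None                      # hand below the menu
    index = int(offset // ITEM_HEIGHT)   # which vertical band the hand is in
    if index >= len(MENU_ITEMS):
        return None                      # hand above the menu
    return MENU_ITEMS[index]

print(select_item(1.0))   # hand just inside the lowest band
print(select_item(1.25))  # hand two bands up
```

The projector then only has to redraw the highlight on whichever item the computed index points at, which matches what you see his hand doing in the video.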

If you think the point of the installation was a demonstration about the use of projectors in a work environment then you really didn't understand it at all. 

It was, daroamer. It was all about the use of projectors and their interaction with 3D cameras and such to create an interactive environment IN SPACE. IT SAYS SO IN THE TEXT!

I'm going to post it again since you don't seem to be reading it:

Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may “pick up” the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and “drop” the object onto the wall by touching it with their other hand.
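To make the "calibrated to 3D real world coordinates" part concrete (my own rough sketch, not their code; the matrix numbers are invented), modelling the projector as a pinhole camera with a known projection matrix is enough to map any 3D point the depth camera reports to the projector pixel that will light it:

```python
import numpy as np

# Hypothetical sketch of the calibration idea: once a projector is modelled
# as a pinhole "camera" with a 3x4 projection matrix P, any 3D world point
# (e.g. a surface point seen by the depth camera) can be mapped to the
# projector pixel that illuminates it. The matrix values below are invented.

P = np.array([
    [800.0,   0.0, 640.0, 0.0],   # focal length, skew, principal point x
    [  0.0, 800.0, 360.0, 0.0],   # focal length, principal point y
    [  0.0,   0.0,   1.0, 0.0],
])

def world_to_pixel(point_3d):
    """Project a 3D world point (metres) to projector pixel coordinates."""
    homogeneous = np.append(point_3d, 1.0)   # [X, Y, Z, 1]
    u, v, w = P @ homogeneous
    return (u / w, v / w)                    # perspective divide

# A point 2 m in front of the projector, slightly right and up:
print(world_to_pixel([0.5, 0.25, 2.0]))
```

That is what lets a plain foam-core board or office desk behave like a display: the system knows, for every point the cameras see, exactly where to draw it.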

These kinds of science experiments/concepts happen all the time at Microsoft Research, it doesn't mean that they are models for upcoming products and in fact many never get used in products at all.  It's like a think tank.  Sometimes those things are used in products many years later, such as some of the technologies in Kinect.

And now you're defending that it may never get used?



theprof00 said:

And now you're defending that it may never get used?


Alright, this just proves to me you don't really have any clue what this is all about.  Microsoft Research is NOT a product R&D sector of MS.  They create concepts and do science.  SOME of it might get used in product development, much of it won't.

From the Wiki:

Microsoft Research (MSR) is a division of Microsoft created in 1991 for researching various computer science topics and issues.

One of the stated goals of Microsoft Research is to "support long-term computer science research that is not bound by product cycles."

And from an interview with one of the creators of LightSpace:

"At the same time, I think we are really fortunate that not every single thing we work on, not every single demo, is directly targeted at a particular product. That actually gives us the freedom to explore, and create a couple of prototypes and see what works, what doesn't, where we fail, and where we succeed. A great part of our work is the fact that we're academically minded as well. We create these prototypes and demos, test them with users and try them out, and then adapt some of these things to further projects or see them go into products. If everything had to be transferred, I think it would limit us in a big way of how we do our work and how creative we could be with the ideas that we push."

So yeah, LightSpace itself is a concept using projectors to display the data, but the projectors themselves are not the big picture here.  The big picture is creating an augmented reality space where anything can be used for information and the user uses his body to interact with anything in the room and transfer "data" (and I'm not talking about wi-fi or bluetooth or whatever you thought I was talking about) across different surfaces via a natural body interface.  If you actually think about it and look past the superficial construct of the room itself, you'd see that the interface itself is the real-world application here, and it's hardly "useless technology" as you said originally.

Here is another quote from an interview about LightSpace:

"We do this by combining several cameras and several projectors above the user's head, kind of coming from the ceiling. We use them as smart projectors, to make different surfaces in the environment interactive. The table and the wall that you saw in the video, they are nothing more than a piece of foam core and a standard-issue desk. But by having this extra technology we're actually simulating interactivity directly on top of that. It allows the user to literally go and touch them. It behaves almost the same way it would as a Microsoft Surface table. But rather than simulating interactivity just on the surfaces, we are really really interested in what kind of capabilities you can do between the interactive surfaces. How can you use your body, for example, to transition and transfer some of the objects between the surfaces. Pick an object up and literally hold it, look at it and carry it over to somewhere else. Or this concept of a spatial menu. Can you have spatial widgets, can you have a type of interaction being in a particular space and resulting in a particular action."

I'm going to drop this now as it's clear we're not on the same page here.



like talking to a wall.

Your last quote and first one entirely justify what I've been saying.

 

fyi, when he says "between the interactive surfaces", he's talking about the projected interface menu using the *projector*.

And your first quote basically says, "it doesn't matter if it doesn't have any marketable application, it's just experimenting."

Theprof00: What a useless tech

daroamer: How is that useless

Quote: It may not be useful, but it's just mucking about anyway. We have the freedom to do that.



Could it possibly be the reports/rumours that MS is seriously considering making its own tablet device? The xPad perhaps?



“The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt.” - Bertrand Russell

"When the power of love overcomes the love of power, the world will know peace."

Jimi Hendrix

 

theprof00 said:

like talking to a wall.

Your last quote and first one entirely justify what I've been saying.

 

fyi, when he says "between the interactive surfaces", he's talking about the projected interface menu using the *projector*.

And your first quote basically says, "it doesn't matter if it doesn't have any marketable application, it's just experimenting."

Theprof00: What a useless tech

daroamer: How is that useless

Quote: It may not be useful, but it's just mucking about anyway. We have the freedom to do that.

I see you are still as clueless as ever.  If you are going to continue to trot out your ignorance of the technology and the research, you should at least try to read and comprehend what the researchers have done and how it is different from the other competing products.

You keep saying the technology behind this is useless, but you can't even explain what it is.  This is hilarious.

We keep telling you it's not the projector, but you can't seem to figure it out.  The papers and articles are out there, but I suspect you have no interest or capability in really pursuing it.  Just quit now before you keep embarrassing yourself and leave the science to the people who understand it.



youarebadatgames said:
We keep telling you it's not the projector, but you can't seem to figure it out.  The papers and articles are out there, but I suspect you have no interest or capability in really pursuing it.  Just quit now before you keep embarrassing yourself and leave the science to the people who understand it.

"We do this by combining several cameras and several projectors above the user's head, kind of coming from the ceiling. We use them as smart projectors, to make different surfaces in the environment interactive."



theprof00 said:

"We do this by combining several cameras and several projectors above the user's head, kind of coming from the ceiling. We use them as smart projectors, to make different surfaces in the environment interactive."


That's good, you can read.  Now, why do you have to use a depth camera and why can't you just use a projector with a webcam instead?
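For anyone else following along, here's the general idea of what the depth camera buys you (a rough sketch of the concept, not their actual pipeline; every number here is invented): once you've recorded the bare surface's depth map, detecting a touch on any un-instrumented surface is just a per-pixel comparison, which a flat webcam image cannot give you.

```python
import numpy as np

# Hypothetical sketch: with a depth camera you can record the empty surface
# once, then flag any pixel whose live depth is a few centimetres closer
# than the surface as a "touch" candidate. A webcam sees colour, not
# distance, so it can't tell a hovering hand from a touching one.
# Thresholds and array values below are invented for illustration.

TOUCH_MIN_M = 0.005   # must be at least 5 mm above the surface (noise floor)
TOUCH_MAX_M = 0.040   # but no more than 4 cm: likely a fingertip, not an arm

def touch_mask(background_depth, live_depth):
    """Boolean mask of pixels hovering just above the recorded surface."""
    height = background_depth - live_depth   # positive = closer than surface
    return (height > TOUCH_MIN_M) & (height < TOUCH_MAX_M)

background = np.full((4, 4), 1.50)   # flat surface 1.5 m from the camera
live = background.copy()
live[1, 2] = 1.48                    # fingertip 2 cm above the surface
live[3, 3] = 1.20                    # an arm 30 cm up: not a touch

print(touch_mask(background, live))
```

That's why a projector plus webcam isn't enough: the projector only draws, and the webcam alone can't measure how far a hand is from the surface it's drawing on.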



theprof00: "Right. It uses IR and depth to tell where things are in space and manipulate them in the environment."




Do you have a point coming about how it's not about the projector?

"We keep telling you it's not the projector"



Now, I'm off to bed.

I'd like you to give me an example of something it can do, your gun or sentry turret for example, and tell me how this tech demo plays a part in it. Be specific.