
Microsoft R&D Burns $65 Million In Three Months On Something Unknown

theprof00 said:

Last link

http://singularityhub.com/2010/02/17/minority-report-interface-is-real-hitting-mainstream-soon-video/


Do you even bother to read what you're furiously googling up? The interface requires gloves, but besides that, the how is important, because the way each company is going about it has different strengths, weaknesses, and challenges to overcome. You'd know that if you did any sort of due diligence before dismissing the previous research simply because it came from your hated company MS.

See, MS went with Primesense's tech instead of 3DV because it's cheap. We already see that because they're mass-producing Kinect. The interesting applications will be in the specialized industries - military, medical, and even education. I was just thinking about Primesense being Israeli, and on a hunch I found that the company is headed by a former military analyst. The implications of his work are obvious - automated security, sentry guns, deployable sensors - all likely due to the needs of Israeli security. Kinect is just the cheap civilian version of the tech, but as time goes on it will be refined. I suspect the US military is already looking at this to see where it will go.



Untamoi said:


I think that technology is meant to be used in seminars, briefings, and similar events - for example, when 15 people are around a table planning something. Your examples don't address anything like that. It is not meant for regular users.

Heck, most of Microsoft's money comes from businesses, not from regular users. Home PCs (including all the MS software on them) and consoles are only a small part of their revenue.

Yes, this is true.

It does seem much more likely to be in a board room.

However, I don't see how it's better than current tech. Picture a boardroom with interactive whiteboards (projector video on a surface with interactive IR), and one using that tech.

I see everyone having a screen in front of them, and they take turns giving their ideas. For the MS one, you pick up your video, throw it at a wall, and there it is. If someone wants to see a copy of your video, you slide it to them on the table.

How is that any different from, or better than, simply having an interactive button that puts it on the wall or puts it where it's supposed to go? It's like they went out of their way to make the tech look more like Minority Report.



youarebadatgames said:
theprof00 said:

Last link

http://singularityhub.com/2010/02/17/minority-report-interface-is-real-hitting-mainstream-soon-video/


Do you even bother to read what you're furiously googling up? The interface requires gloves, but besides that, the how is important, because the way each company is going about it has different strengths, weaknesses, and challenges to overcome. You'd know that if you did any sort of due diligence before dismissing the previous research simply because it came from your hated company MS.

See, MS went with Primesense's tech instead of 3DV because it's cheap. We already see that because they're mass-producing Kinect. The interesting applications will be in the specialized industries - military, medical, and even education. I was just thinking about Primesense being Israeli, and on a hunch I found that the company is headed by a former military analyst. The implications of his work are obvious - automated security, sentry guns, deployable sensors - all likely due to the needs of Israeli security. Kinect is just the cheap civilian version of the tech, but as time goes on it will be refined. I suspect the US military is already looking at this to see where it will go.

First of all, can you please leave your emotions at the door? I'm not furiously googling anything. I'm trying to help you see how the tech exists.

Obviously the video is different from the Kinect one. It's a thousand times better. I'm trying to give you a sense of scope here. Kinect will not be in military or medical... possibly education, on a small scale. It's consumer tech. Medical and military require the utmost reliability and precision; cost is not the issue.

Automated turrets already exist. It's for the sake of precision that they are not used, and IR is not the identifier of choice for that.

Sadly, Kinect is not going to be used in turrets regardless of how hilarious it might be.

PS: "simply because it came from your hated company MS." C'mon man, you were so close to understanding... If you look at it objectively, you know I'm right. It's not an MS thing, it's not a Kinect thing, it's just not a very good tech. Do NOT attribute real criticism to hate, that only serves to hide the truth.



theprof00 said:

First of all, can you please leave your emotions at the door? I'm not furiously googling anything. I'm trying to help you see how the tech exists.

Obviously the video is different from the Kinect one. It's a thousand times better. I'm trying to give you a sense of scope here. Kinect will not be in military or medical... possibly education, on a small scale. It's consumer tech. Medical and military require the utmost reliability and precision; cost is not the issue.

Automated turrets already exist. It's for the sake of precision that they are not used, and IR is not the identifier of choice for that.

Sadly, Kinect is not going to be used in turrets regardless of how hilarious it might be.

PS: "simply because it came from your hated company MS." C'mon man, you were so close to understanding... If you look at it objectively, you know I'm right. It's not an MS thing, it's not a Kinect thing, it's just not a very good tech. Do NOT attribute real criticism to hate, that only serves to hide the truth.


Wow, I didn't know that when I was talking about Kinect, you thought I was talking about the stupid little toy. I thought you could figure out that I was talking about the software, which is really what Kinect is. Of course they're not going to use it for those applications; it would take a different form factor to meet those needs. That was assumed to be a given. Did you also think they were going to hook these up to a bunch of 360s?

Ok, I'll be clearer now.  The "software inside kinect" is the real meat and potatoes.  Said "software inside kinect" makes all kinds of cool things possible, and when you couple it to things like projectors you get surface stuff that could be useful in...say guided surgery (which would work a lot better without having to wear a cumbersome glove) or teleconferenced boardrooms.  Mix the "software inside kinect" with military designs, and you get cool things too that we probably won't hear about for a while.  Mix the "software inside kinect" with depth/mic sensors in every room of a house, and you can have a computer do all kinds of neat things like home automation, appliance interaction, etc.
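To make the home-automation bit concrete, here is a rough sketch of the kind of logic a per-room depth sensor could drive, assuming a generic camera that hands back a depth frame as a 2D array of millimetre distances. The frame source and the light-switch call are made-up placeholders for illustration, not anything that ships with Kinect:

import numpy as np

PERSON_MIN_PIXELS = 4000        # rough blob size for a person at room scale (assumed)
DEPTH_CHANGE_MM = 80            # depth change that counts as "something new in the room"

def capture_background(get_depth_frame, samples=30):
    """Average a few frames of the empty room as a reference depth map."""
    frames = [get_depth_frame().astype(np.float32) for _ in range(samples)]
    return np.mean(frames, axis=0)

def person_present(depth_frame, background):
    """True if a large enough region is now closer to the camera than the empty room was."""
    closer = (background - depth_frame.astype(np.float32)) > DEPTH_CHANGE_MM
    return np.count_nonzero(closer) > PERSON_MIN_PIXELS

# Hypothetical glue code, just to show the intent:
# background = capture_background(get_depth_frame)
# if person_present(get_depth_frame(), background):
#     turn_on_living_room_lights()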

http://www.haaretz.com/print-edition/business/israeli-startup-primesense-is-microsoft-s-new-partner-for-a-remote-free-world-1.312417

Like I said, for general consumers the approach Primesense and MS are using is great, because it is cheap and easily scalable. For more specialized applications, I'm sure the devices and software will be more robust, but a lot of the reliability comes from large-scale consumer use and feedback. Yes, everyone using Kinect 1.0 will be a beta tester (that's why it's in games and not a gun), but ver 2.0 and 3.0 of the software will certainly be a step up. You didn't think they would stay at ver 1.0 forever, did you?



theprof00 said:
daroamer said:
theprof00 said:
ssj12 said:

Its probably Microsoft LightSpace

http://research.microsoft.com/apps/video/default.aspx?id=139046

That was the dumbest thing I've ever seen.

"you pull out your hand and it shows you directions, or an arrow appears on the floor"

"I can pass the video to him, in his hand, and he can drop it on the paper and play around with it"

What a useless technology.

How was that useless?  These are tech demos, you can think of all kinds of cool applications for this.

A good example is in the movie Avatar when they first put Jake in the avatar link pod and the doctor is looking at the scan of his brain on a big monitor, then with just the swipe of his hand he transfers that to a tablet like device so he can walk with it.

Imagine you're reading the daily paper on your computer, you have to go to the bathroom and you'd like to finish reading: you pick up your future iPad, just swipe from your monitor to the iPad, and boom, there it is instantly.

No, this tech relies on a projector. You would need a projector in every room, at every angle, to even start producing the kinds of effects you're talking about. It's just not feasible.

You're really confusing what is happening in that demo with what you wish it did. It's not putting the file onto something else, it's projecting it onto something else. Did you see that projector they were using? That's a really, really expensive setup. That's commercially priced equipment. You're talking tens of thousands, just for that one room, for SOME angles.

However, what you're talking about is easily done with different methods. In Japan, cell phones transfer files and information using IR or Bluetooth or some such technology. All you have to do is equip an iPad with that, and similarly the monitor. Boom, instant file transmission. No need to even swipe your hand.
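For what it's worth, the transfer part really is the solved piece. Here is a bare-bones sketch of pushing a file between two devices on the same network with plain TCP sockets; the addresses and filenames are made up purely for illustration:

import socket

def send_file(path, host, port=5001):
    """Stream a file to a listening device over TCP."""
    with socket.create_connection((host, port)) as conn, open(path, "rb") as f:
        while chunk := f.read(64 * 1024):
            conn.sendall(chunk)

def receive_file(path, port=5001):
    """Accept one connection and write whatever arrives to disk."""
    with socket.create_server(("", port)) as server:
        conn, _ = server.accept()
        with conn, open(path, "wb") as f:
            while chunk := conn.recv(64 * 1024):
                f.write(chunk)

# e.g. on the tablet:   receive_file("morning_paper.pdf")
# and on the desktop:   send_file("morning_paper.pdf", "192.168.1.42")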

As for the newspaper idea, there are already several foldable-screen techs in the works. Pocket-sized pads that fold out to standard paper size.

It's a silly tech at best; at worst, it's an already outdated piece of technology. Outdated in the sense that there are already cheap alternatives that do the same thing, or are capable of doing similar things.

The tech wasn't about the projectors, it was about using the depth cameras to create natural human user interfaces.

The way the information was displayed within the room isn't what's relevant. That table he was using to move the pictures around could just as easily have been a regular monitor; the point was that you didn't need the monitor to be touch-sensitive, a la Microsoft Surface, because the interaction was being controlled by the depth cameras. The point is ANY surface can become a touch user interface of sorts.

Similarly, when he swipes the video onto his hand, it can be swiped onto anything he is holding, like a phone....what is generating the ball of light on his hand is IRRELEVANT. What really matters here is that the motions interpreted by the depth cameras are what cause the data to move from surface to surface (table, body, screen).
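To make the "any surface becomes touch-sensitive" idea concrete, here is a minimal sketch of how a depth camera can flag a fingertip touching an ordinary tabletop: capture the depth of the empty surface once, then look for pixels where something hovers within a couple of centimetres of it. The thresholds and frame format are assumptions for illustration, not anything from the actual LightSpace code:

import numpy as np

TOUCH_MIN_MM, TOUCH_MAX_MM = 5, 25   # "touching" = hovering 5-25 mm above the surface (assumed)
MIN_TOUCH_PIXELS = 40                # ignore specks of sensor noise

def touch_points(depth_frame, surface_depth):
    """Return (row, col) pixels where something sits just above the known surface.

    depth_frame   -- current depth image in millimetres
    surface_depth -- depth image of the same empty surface, captured once
    """
    height_above = surface_depth.astype(np.float32) - depth_frame.astype(np.float32)
    touching = (height_above > TOUCH_MIN_MM) & (height_above < TOUCH_MAX_MM)
    if np.count_nonzero(touching) < MIN_TOUCH_PIXELS:
        return np.empty((0, 2), dtype=int)
    return np.argwhere(touching)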

Go watch the video again.



daroamer said:

The tech wasn't about the projectors, it was about using the depth cameras to create natural human user interfaces.

The way the information was displayed within the room isn't what's relevant. That table he was using to move the pictures around could just as easily have been a regular monitor; the point was that you didn't need the monitor to be touch-sensitive, a la Microsoft Surface, because the interaction was being controlled by the depth cameras. The point is ANY surface can become a touch user interface of sorts.

Similarly, when he swipes the video onto his hand, it can be swiped onto anything he is holding, like a phone....what is generating the ball of light on his hand is IRRELEVANT. What really matters here is that the motions interpreted by the depth cameras are what cause the data to move from surface to surface (table, body, screen).

Go watch the video again.

Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may “pick up” the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and “drop” the object onto the wall by touching it with their other hand.
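The "calibrated to 3D real world coordinates" part of that description is standard projector/camera calibration: once you know the projector's intrinsics and its pose in the room, you can compute which projector pixel lands on any 3D point the depth camera reports. A hedged sketch of that projection step follows; the matrices are placeholders you would get from a calibration routine, not real LightSpace values:

import numpy as np

def project_to_pixel(point_world, K, R, t):
    """Map a 3D point in room coordinates to the (u, v) pixel of a calibrated projector.

    K -- 3x3 projector intrinsic matrix (from calibration)
    R -- 3x3 rotation and t -- translation vector taking room coords into projector coords
    """
    p = R @ np.asarray(point_world, dtype=float) + t   # into the projector's frame
    u, v, w = K @ p                                    # perspective projection
    return u / w, v / w

# Placeholder calibration values, only to show the call:
# K = np.array([[1400.0, 0, 640], [0, 1400.0, 400], [0, 0, 1]])
# R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
# project_to_pixel([0.2, -0.1, 1.5], K, R, t)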

It's NOT data that is moving. Data transfer would require wireless network transfer that we already have. Not impressive.



The Xbox 720, baby!!!!!!!!!!!! lol



youarebadatgames said:


Wow, I didn't know that when I was talking about Kinect, you thought I was talking about the stupid little toy. I thought you could figure out that I was talking about the software, which is really what Kinect is. Of course they're not going to use it for those applications; it would take a different form factor to meet those needs. That was assumed to be a given. Did you also think they were going to hook these up to a bunch of 360s?

Ok, I'll be clearer now.  The "software inside kinect" is the real meat and potatoes.  Said "software inside kinect" makes all kinds of cool things possible, and when you couple it to things like projectors you get surface stuff that could be useful in...say guided surgery (which would work a lot better without having to wear a cumbersome glove) or teleconferenced boardrooms.  Mix the "software inside kinect" with military designs, and you get cool things too that we probably won't hear about for a while.  Mix the "software inside kinect" with depth/mic sensors in every room of a house, and you can have a computer do all kinds of neat things like home automation, appliance interaction, etc.

http://www.haaretz.com/print-edition/business/israeli-startup-primesense-is-microsoft-s-new-partner-for-a-remote-free-world-1.312417

Like I said, for general consumers the approach Primesense and MS are using is great, because it is cheap and easily scalable. For more specialized applications, I'm sure the devices and software will be more robust, but a lot of the reliability comes from large-scale consumer use and feedback. Yes, everyone using Kinect 1.0 will be a beta tester (that's why it's in games and not a gun), but ver 2.0 and 3.0 of the software will certainly be a step up. You didn't think they would stay at ver 1.0 forever, did you?

OK, I need clarification. Are we talking about Kinect, or LightSpace? Because this started out about LightSpace, i.e. the video way above, and somehow we're now talking about Kinect. I want to know how or why we got here from there.

Now to reply to this post, assuming we are talking about Kinect, which I will define as a 3D camera and depth sensor.

Guided surgery. In what sense? As in LightSpace projecting an overlay of what is good and what is bad onto the patient? If so, confirm, so I can tell you why it will never happen.
Teleconferenced boardrooms. Didn't know that those don't exist yet.
Military designs. We already have drones that fly themselves, can identify targets, and can launch attacks against enemies. The problem with the tech in this area is that the things it would be great for require much more than it's capable of. For example, mines or turrets. It can't tell an enemy from a friendly. It's just depth tracking using a camera. Turrets already exist but aren't used, for this very reason. Mines already have proximity triggers. There is literally nothing Kinect can do that isn't already done better in the military.
House automation. Already exists... going back at least six years. I get that you think it's good for seeing who's in a room and such, but motion detection already exists, as do plain ol' cameras.

Kinect, unfortunately, is a hardware- and software-based configuration. The software is upgradeable and capable of being used in many applications, but it is always going to be limited by the hardware running it. This is the case with Kinect. There will be no hardware upgrade, and it will not improve by any substantial margin. It is cool as a living-room tool alongside the 360, but it's a small spin on old tech and not much more.

In a gun? Are you serious? What would it do? Move your arm towards the enemy? Shoot on its own? Really think about the applications and how it would ever take off. Who in their right mind wants a computer deciding where and when to shoot when said computer can't tell friend from enemy?



theprof00 said:
daroamer said:

The tech wasn't about the projectors, it was about using the depth cameras to create natural human user interfaces.

The way the information was displayed within the room isn't what's relevant. That table he was using to move the pictures around could just as easily have been a regular monitor; the point was that you didn't need the monitor to be touch-sensitive, a la Microsoft Surface, because the interaction was being controlled by the depth cameras. The point is ANY surface can become a touch user interface of sorts.

Similarly, when he swipes the video onto his hand, it can be swiped onto anything he is holding, like a phone....what is generating the ball of light on his hand is IRRELEVANT. What really matters here is that the motions interpreted by the depth cameras are what cause the data to move from surface to surface (table, body, screen).

Go watch the video again.

Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may “pick up” the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and “drop” the object onto the wall by touching it with their other hand.

It's NOT data that is moving. Data transfer would require wireless network transfer that we already have. Not impressive.

So you didn't watch the video then.

Here is what he said right off the top:

"What we're doing is using some of the new depth sensing camera technologies to extend the sensing so that it encompasses the entire room.  What that allows us to do in LightSpace is all the usual kind of surface interactions on tabletops but then we can also fill in the void, the space between these various surfaces so that we can connect surfaces.  So that we can move objects from one surface to another just by tracking the person and understanding the 3D shape of the person and where each surface is placed in the environment."

LightSpace itself is just a tech demo installation using projectors, but those are simply a means of displaying the data. What they are really showing is how they are using the depth cameras to move that data between surfaces. They could be using any kind of display device; what they are talking about would still work as they describe.
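Put another way, the "move" is just bookkeeping inside one system: it tracks which surface currently owns each virtual object, and a recognised gesture reassigns it, which changes which display renders it. A toy sketch of that idea, with made-up object and surface names:

# Which display/surface currently renders each virtual object.
object_location = {"vacation_video": "tabletop"}

def handle_transfer_gesture(obj, source_surface, destination_surface):
    """Reassign an object when the depth cameras see it touched together with a destination."""
    if object_location.get(obj) == source_surface:
        object_location[obj] = destination_surface
        # The renderer for destination_surface now draws the object;
        # no file ever leaves the machine driving the room.

handle_transfer_gesture("vacation_video", "tabletop", "wall_display")
print(object_location)   # {'vacation_video': 'wall_display'}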

You didn't even quote the first part of the written summary:

"LightSpace combines elements of surface computing and augmented reality research to create a highly interactive space where any surface, and even the space between surfaces, is fully interactive. Our concept transforms the ideas of surface computing into the new realm of spatial computing."

If you think the point of the installation was a demonstration of the use of projectors in a work environment, then you really didn't understand it at all.

These kinds of science experiments/concepts happen all the time at Microsoft Research; it doesn't mean they are models for upcoming products, and in fact many never get used in products at all. It's like a think tank. Sometimes those things are used in products many years later, such as some of the technologies in Kinect.



Edited because too fresh.