youarebadatgames said:


Wow, I didn't know that when I was talking about kinect, you thought I was talking about the stupid little toy.  I thought you could figure out that I was talking about the software, which is really what kinect is.  Of course they're not going to use it for those applications as-is; they'd build a different form factor to meet those needs.  That was assumed to be a given.  Did you also think they were going to hook these up to a bunch of 360s?

Ok, I'll be clearer now.  The "software inside kinect" is the real meat and potatoes.  Said "software inside kinect" makes all kinds of cool things possible, and when you couple it to things like projectors you get surface stuff that could be useful in...say guided surgery (which would work a lot better without having to wear a cumbersome glove) or teleconferenced boardrooms.  Mix the "software inside kinect" with military designs, and you get cool things too that we probably won't hear about for a while.  Mix the "software inside kinect" with depth/mic sensors in every room of a house, and you can have a computer do all kinds of neat things like home automation, appliance interaction, etc.
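
To make the home-automation bit concrete, here's a rough sketch of the kind of presence detection a cheap per-room depth sensor enables.  Everything in it is made up for illustration: the frame size, the millimeter depth units, the thresholds, and the simulated read_depth_frame are stand-ins for whatever driver a real setup would use, not Kinect's actual SDK.

```python
import numpy as np

rng = np.random.default_rng(0)

def read_depth_frame() -> np.ndarray:
    """Stand-in for a per-room depth sensor: a 480x640 frame in millimeters.
    Simulates an empty room (~3 m back wall) with a person ~1.5 m away."""
    frame = np.full((480, 640), 3000.0)
    frame[100:400, 250:400] = 1500.0                    # a person-sized blob
    return frame + rng.normal(0.0, 10.0, frame.shape)   # sensor noise

def room_occupied(depth_mm: np.ndarray, background_mm: np.ndarray,
                  min_diff_mm: float = 150.0, min_pixels: int = 2000) -> bool:
    """Crude presence test: enough pixels sit well in front of the
    empty-room baseline captured once during setup."""
    changed = (background_mm - depth_mm) > min_diff_mm
    return int(changed.sum()) >= min_pixels

background = np.full((480, 640), 3000.0)   # the empty-room baseline
if room_occupied(read_depth_frame(), background):
    print("lights on")   # hand off to whatever automation bus you actually use
```

The point isn't the few lines of thresholding; it's that the same depth stream can drive lights, appliances, whatever, once a sensor is in every room.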

http://www.haaretz.com/print-edition/business/israeli-startup-primesense-is-microsoft-s-new-partner-for-a-remote-free-world-1.312417

Like I said, for general consumers the approach Primesense and MS are using is great, because it is cheap and easily scalable.  For more specialized applications, I'm sure the devices and software will be more robust, but a lot of the reliability comes from large-scale consumer use and feedback.  Yes, everyone using kinect 1.0 will be a beta tester (that's why it's in games and not a gun), but ver 2.0 and 3.0 of the software will certainly be a step up.  You didn't think they would stay at ver 1.0 forever, did you?

Ok, I need clarification. Are we talking about kinect, or lightspace? Because this started out about lightspace, i.e., the video way above, and somehow we're talking about kinect. I want to know how or why we got here from there.

Now to reply to this post, assuming we are talking about kinect, which I will define as a 3D camera and depth sensor.

Guided surgery. In what sense? As in lightspace projecting an overlay of what is good and what is bad onto the patient? If so, confirm so I can tell you why it will never happen.
Teleconferenced boardrooms. I didn't know those didn't already exist.
Military designs. We already have drones that fly themselves, can identify targets, and launch attacks against enemies. The problem with the tech in this aspect is that the things it would be great for require much more than it's capable of. For example, mines, or turrets. It can't tell an enemy from a friendly; it's just depth tracking using a camera (see the sketch below). Turrets already exist but aren't used for this very purpose. Mines already have proximity triggers. There is literally nothing kinect can do that isn't already done better in the military.
Home automation. That has already existed for at least six years. I get that you think it's good for seeing who's in a room and such, but motion detection already exists, as do plain ol' cameras.
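
To be clear about what "just depth tracking" means, here's a minimal sketch under the same kind of made-up assumptions (a depth frame as a NumPy array, millimeter units, hypothetical thresholds): segment whatever is in front of the background, label the blobs, and report size and position. That is the entire output; nothing in it encodes uniform, insignia, or intent, which is exactly why it can't do friend-or-foe.

```python
import numpy as np
from scipy import ndimage   # for connected-component labeling

def detect_blobs(depth_mm: np.ndarray, background_mm: np.ndarray,
                 min_diff_mm: float = 150.0, min_pixels: int = 2000):
    """Return (pixel_count, centroid) for each body-sized foreground blob.
    This is everything a bare depth camera can tell you about a target."""
    foreground = (background_mm - depth_mm) > min_diff_mm
    labels, n = ndimage.label(foreground)      # split into connected blobs
    blobs = []
    for i in range(1, n + 1):
        mask = labels == i
        if mask.sum() >= min_pixels:           # ignore noise-sized specks
            blobs.append((int(mask.sum()), ndimage.center_of_mass(mask)))
    return blobs

# Demo: one person-shaped blob in front of a 3 m wall.
background = np.full((480, 640), 3000.0)
frame = background.copy()
frame[100:400, 250:400] = 1500.0
for size, (row, col) in detect_blobs(frame, background):
    print(f"blob of {size} px at ({row:.0f}, {col:.0f}) -- friend or foe? no idea")
```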

Kinect, unfortunately, is a hardware-and-software configuration. The software is upgradeable and capable of being used in many applications, but it is always going to be limited by the hardware feeding it. That is the case with kinect: there will be no hardware upgrade, and it will not improve by any substantial margin. It is cool as a living-room tool alongside the 360, but it's a small spin on old tech and not much more.

In a gun? Are you serious? What would it do? Move your arm towards the enemy? Shoot on its own? Really think about the applications and whether it would ever take off. Who in their right mind wants a computer deciding where and when to shoot when said computer can't tell friend from enemy?