youarebadatgames said:
theprof00 said:

Now, I'm off to bed.

I'd like you to give me an example of something it can do. Your gun or sentry turret, for example. And tell me how this tech demo plays a part in it. Be specific.


If you can't understand the implications of being able to capture 3D data and make it available to a computer, why would you be able to understand anything else?

How does a computer identify an object in a picture? Humans can contextualize vision easily because, through experience, we understand attributes like weight, color, size, and shape. A computer cannot derive 3D spatial information from an RGB image because 1) the database needed to match the objects in it would be incomprehensibly large and 2) the myriad of lighting, position, and other confounding factors makes it impossible. If you can't contextualize a scene, you can't make logical decisions with it.

With the 3D camera, you get 3D information, so a computer can perform simple operations, like working out the shape of a room. From there you can do pretty much anything, like watching how humans interact with objects and reacting accordingly.
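As a rough sketch of what that first step looks like (Python, with made-up Kinect-like camera intrinsics just to illustrate the idea; real values would come from calibration):

```
import numpy as np

# Hypothetical intrinsics for illustration only.
FX, FY = 525.0, 525.0      # focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5      # principal point (assumed)

def depth_to_points(depth_mm):
    """Back-project a depth image (millimetres) into 3D points in metres."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0   # mm -> m
    x = (u - CX) * z / FX                      # pinhole camera model
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                  # drop pixels with no depth reading

def room_extent(points):
    """Crude 'shape of the room': the bounding box of the point cloud."""
    return points.min(axis=0), points.max(axis=0)
```

Every pixel becomes a point in space, so "how big is this room" or "is there a table here" turns into simple geometry instead of guesswork.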

This demo illustrates this principle by using the depth camera to make any object it sees interactive - the table, the shelves, the air, and people - and to have a computer react to it. The projection is just a user interface convenience; the real technology is interpreting the data from the depth camera into something a computer can understand.
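To make that concrete, here's a toy version of the "make a surface interactive" idea: model the empty room's depth once, then flag anything that appears a few centimetres above a known surface. The thresholds here are guesses for illustration, not anything taken from LightSpace itself:

```
import numpy as np

def learn_background(depth_frames):
    """Median of several frames of the empty room = model of the static surfaces."""
    return np.median(np.stack(depth_frames), axis=0)

def touch_mask(depth_mm, background_mm, near_mm=10, far_mm=80):
    """Flag pixels where something sits roughly 1-8 cm above a known surface,
    e.g. a hand resting on or hovering over the table."""
    diff = background_mm.astype(np.int32) - depth_mm.astype(np.int32)
    valid = (depth_mm > 0) & (background_mm > 0)
    return valid & (diff > near_mm) & (diff < far_mm)
```

The projector just draws feedback wherever that mask lights up; the interesting part is the depth interpretation, not the projection.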

That's why this technology is exciting. It's not quite like giving a computer human eyes, but it is starting to get there. A computer will be able to tell things like whether or not the person in the room is friendly (someone who has been identified before, like how Kinect signs people in by body shape recognition), what the layout of a room is, and whether or not there are people around. A lot of the things a person can usually deduce from sight, a computer will be able to deduce as well.
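Body-shape sign-in can be as simple as comparing a few measurements against people the system has seen before. A toy sketch of that idea (the names, features, and threshold are all made up):

```
import numpy as np

# Hypothetical enrolled users: (height, shoulder width, arm length) in metres.
ENROLLED = {
    "alice": np.array([1.68, 0.41, 0.60]),
    "bob":   np.array([1.83, 0.47, 0.66]),
}

def identify(body_features, threshold=0.08):
    """Return the closest enrolled person, or None if nobody matches well enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in ENROLLED.items():
        dist = np.linalg.norm(body_features - ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```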

A sentry gun is just an application: a depth camera on or near it lets it identify friend or foe, and the computer can make logical decisions like tracking and targeting based on that. The key feature of depth cameras is adding 3D contextual data to an image, which allows a computer to interpret a scene logically and react to it.
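The decision logic on top of that is almost trivial once you have an identity and a 3D position for each person. A hypothetical sketch of what a sentry would do each frame:

```
import math

def aim_angles(target_xyz):
    """Convert a 3D position (camera coordinates, metres) into pan/tilt in degrees."""
    x, y, z = target_xyz
    pan = math.degrees(math.atan2(x, z))     # left/right
    tilt = math.degrees(math.atan2(-y, z))   # up/down (image y grows downward)
    return pan, tilt

def sentry_decision(person_id, centroid_xyz):
    """Ignore recognised (friendly) people; track anyone the system doesn't know."""
    if person_id is not None:
        return "ignore", None
    return "track", aim_angles(centroid_xyz)
```

The hard part was never the targeting math; it's getting reliable 3D data about who and what is in the room, which is exactly what the depth camera provides.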

You've just described a depth camera and its potential usefulness.

Not LightSpace. LightSpace is the tech demo using 3D depth cameras ALONG WITH projectors to "CREATE AN INTERACTIVE ENVIRONMENT". It's about creating, not about input, and it's not simply about 3D cameras, which we've had for years now.

If all you wanted to do was say 3D cameras are exciting, then fair enough.

If you wanted to prove to me that LightSpace is a useful concept, you have not done so.