
Forums - Microsoft - Lee on Xbox Natal

Johnny Chung Lee is one of the world’s top experts on gesture interfaces in video games. You will find his videos on YouTube, where he pushes the Wii gesture interface to its limits. These have made him a bit of a cult figure.

Now he is a researcher in the Applied Sciences group at Microsoft working on Natal. And he has a blog, on which he reveals all sorts of great inside information. Here is an extract:

Speaking as someone who has been working in interface and sensing technology for nearly 10 years, this is an astonishing combination of hardware and software. The few times I’ve been able to show researchers the underlying components, their jaws drop with amazement… and with good reason.

The 3D sensor itself is a pretty incredible piece of equipment providing detailed 3D information about the environment similar to very expensive laser range finding systems but at a tiny fraction of the cost. Depth cameras provide you with a point cloud of the surface of objects that is fairly insensitive to various lighting conditions allowing you to do things that are simply impossible with a normal camera.

But once you have the 3D information, you then have to interpret that cloud of points as “people”. This is where the researcher jaws stay dropped. The human tracking algorithms that the teams have developed are well ahead of the state of the art in computer vision in this domain. The sophistication and performance of the algorithms rival or exceed anything that I’ve seen in academic research, never mind a consumer product. At times, working on this project has felt like a miniature “Manhattan project” with developers and researchers from around the world coming together to make this happen.

We would all love to one day have our own personal holodeck. This is a pretty measurable step in that direction.

I can’t wait to get my hands on this!
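Lee's mention of a depth camera producing a point cloud is the key idea here. As a rough illustration (not Natal's actual pipeline; the intrinsics fx, fy, cx, cy below are made-up values, not real calibration data), back-projecting a depth image through a simple pinhole camera model yields 3D points like this:

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (a 2D list of distances in meters)
    into a list of (x, y, z) points using a pinhole camera model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:            # no depth reading for this pixel
                continue
            x = (u - cx) * z / fx  # horizontal offset scales with depth
            y = (v - cy) * z / fy  # vertical offset scales with depth
            points.append((x, y, z))
    return points

# Toy 2x2 depth image: everything 1 m away except one dead pixel.
cloud = depth_to_point_cloud([[1.0, 1.0], [1.0, 0.0]],
                             fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```

Because the geometry comes from measured depth rather than brightness, this kind of reconstruction works the same in a bright room or a dim one, which is exactly the lighting insensitivity Lee is describing.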




And I can't wait to get my hands on Johnny Lee. Rowr! What a dreamboat!



I hope they sell proper room lighting with the system. I believe it may be necessary to illuminate your living room much like the E3 setup for the system to work properly.

Sadly, I'm not joking.



 

I saw this a while back and almost crapped myself. Imagine what you could do with this and a huge TV! It's amazing how he did this; I wanna learn how!!!! (I'm talking about the Wii vid.)

I'm also a bit disappointed in Johnny for going to MS. I think he could have done better with Nintendo. Just my opinion.



Procrastinato said:
I hope they sell proper room lighting with the system. I believe it may be necessary to illuminate your living room much like the E3 setup for the system to work properly.

Sadly, I'm not joking.

Actually, Kotaku and others have said that they tried it out in some dimly lit hotel room settings. Word is that it doesn't require much lighting, if any.



badgenome said:
Procrastinato said:
I hope they sell proper room lighting with the system. I believe it may be necessary to illuminate your living room much like the E3 setup for the system to work properly.

Sadly, I'm not joking.

Actually, Kotaku and others have said that they tried it out in some dimly lit hotel room settings. Word is that it doesn't require much lighting, if any.

The Natal device itself would have to paint the users without lighting, and from a lot of different angles too.  Lee's own blog states that the system comes up with a 3D image, and it's not going to come up with such a thing without a lot of thorough data input, via light or sound.  I'm kinda suspicious of these reports.

We'll see I guess.  I have a strong feeling that some sort of marker device (like Sony's wand, or a Wiimote, possibly in wearable form) will be necessary in the end product, for the system to truly work decently enough to be viable.



 

I think he's being overly positive considering the failure of the E3 demo:

The human tracking algorithms that the teams have developed are well ahead of the state of the art in computer vision in this domain. The sophistication and performance of the algorithms rival or exceed anything that I’ve seen in academic research, never mind a consumer product. At times, working on this project has felt like a miniature “Manhattan project” with developers and researchers from around the world coming together to make this happen.


We all saw how easy it was to get their algorithms to mix up the arms and legs. It's like they aren't even taking advantage of the limitations of human anatomy in their tracking.

What I mean is... a well-made model of the human skeleton would never consider the kind of movement we saw as a possibility.
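That idea, a skeleton model with built-in joint limits that rejects impossible poses, can be sketched roughly like this (the joints and angle ranges below are illustrative guesses, not real anatomical data or anything from Natal):

```python
# Clamp each tracked joint angle to a plausible human range so a
# tracking glitch can't produce an anatomically impossible pose.
# The limits here are rough placeholder guesses in degrees.
JOINT_LIMITS = {
    "elbow": (0.0, 150.0),   # flexion: can't bend backwards
    "knee":  (0.0, 140.0),   # flexion: can't hyperextend forwards
    "neck":  (-60.0, 60.0),  # rotation left/right
}

def constrain_pose(raw_angles):
    """Clamp raw tracker output to anatomically possible angles."""
    fixed = {}
    for joint, angle in raw_angles.items():
        lo, hi = JOINT_LIMITS[joint]
        fixed[joint] = max(lo, min(hi, angle))
    return fixed

# A glitchy tracker reports the knee bending backwards and the
# neck twisted all the way around; the constraints snap them back.
pose = constrain_pose({"elbow": 45.0, "knee": -30.0, "neck": 200.0})
```

A real tracker would enforce limits jointly across the whole skeleton rather than per joint, but even this simple clamp would rule out the arm/leg mix-ups seen in the demo.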

 



My Mario Kart Wii friend code: 2707-1866-0957

Procrastinato said:
I hope they sell proper room lighting with the system. I believe it may be necessary to illuminate your living room much like the E3 setup for the system to work properly.

Sadly, I'm not joking.

It has an IR camera.



Past Avatar picture!!!

Don't forget your helmet there, Master Chief!

NJ5 said:

I think he's being overly positive considering the failure of the E3 demo:

The human tracking algorithms that the teams have developed are well ahead of the state of the art in computer vision in this domain. The sophistication and performance of the algorithms rival or exceed anything that I’ve seen in academic research, never mind a consumer product. At times, working on this project has felt like a miniature “Manhattan project” with developers and researchers from around the world coming together to make this happen.


We all saw how easy it was to get their algorithms to mix up the arms and legs. It's like they aren't even taking advantage of the limitations of human anatomy in their tracking.

What I mean is... a well-made model of the human skeleton would never consider the kind of movement we saw as a possibility.

 

It is a work in progress, dude.  Software can have glitches, especially software that isn't completely finished.  Imagine a world where software was perfect the first time around.



JaggedSac said:
NJ5 said:

I think he's being overly positive considering the failure of the E3 demo:

The human tracking algorithms that the teams have developed are well ahead of the state of the art in computer vision in this domain. The sophistication and performance of the algorithms rival or exceed anything that I’ve seen in academic research, never mind a consumer product. At times, working on this project has felt like a miniature “Manhattan project” with developers and researchers from around the world coming together to make this happen.


We all saw how easy it was to get their algorithms to mix up the arms and legs. It's like they aren't even taking advantage of the limitations of human anatomy in their tracking.

What I mean is... a well-made model of the human skeleton would never consider the kind of movement we saw as a possibility.

 

It is a work in progress, dude.  Software can have glitches, especially software that isn't completely finished.  Imagine a world where software was perfect the first time around.

 

No point correcting damage control.  The more open-minded thinkers would simply conclude that they haven't added error correction to the avatar's animation to prevent joints bending and twisting the wrong way, not that the hardware is incapable of doing the job.  Sadly, I fear we'll hear nothing but these negative posts until it's released.