Oh I see sorry mate you made a point literally.
Wow... and I thought MTV was a music TV thing; apparently they have turned into hardware experts now...
But OK, that thing is such utter bullshit, let me just mention a few things:
1. There is an MTV guy who uses a stopwatch to measure short time delays. The brain-to-finger delay (the time between your eyes registering something and your finger reacting by pressing the stopwatch) is anywhere between 80ms and 200ms (depending on how well trained and fast and old and ... you are). This nullifies any idea that the guy was able to accurately time "between 80ms and 120ms". It is simply, physiologically, not possible. The sad truth is that the guy simply adapted his clicking behaviour to what he saw on screen (difficult to explain the physiology involved, but you may get the point). (See the rough sketch after this list.)
2. In the video, there was only one short sequence where the person and the avatar were visible in the same shot. The lag was clearly visible, around 0.2-0.3s. This is no surprise, as the Natal system has to compare several subsequent frames to determine the direction of body movement. At 30Hz, the data acquisition time is already 66-99ms (2-3 frames). Add some overhead for data processing in Natal, sending data to the Xbox, and data processing in the Xbox, and you are over 100ms for pure hardware delay.
3. Where is the demo that shows several people playing a game, with speech recognition? Where is the demo that actually shows a real game? Throwing arms and legs at red balls may be amusing (for a while), but it is no surprise we've only seen this demo so far, as it allows them to "cover" the lag issues nicely...
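To make point 1 concrete, here is a minimal Monte Carlo sketch, assuming a true on-screen lag of ~100ms and an 80-200ms reaction delay on each stopwatch press (all numbers here are illustrative assumptions, not measurements from the video):

```python
# Rough Monte Carlo sketch, illustrative numbers only: if the true lag is
# ~100ms but each stopwatch press carries 80-200ms of human reaction delay,
# the spread of the readings swamps the quantity being measured.
import random

TRUE_LAG_MS = 100.0                              # assumed "real" on-screen lag
REACTION_MIN_MS, REACTION_MAX_MS = 80.0, 200.0   # assumed human reaction window

def one_stopwatch_reading() -> float:
    # The start press reacts to the player's own movement and the stop press
    # reacts to the avatar's movement; both carry independent reaction delays.
    start_delay = random.uniform(REACTION_MIN_MS, REACTION_MAX_MS)
    stop_delay = random.uniform(REACTION_MIN_MS, REACTION_MAX_MS)
    return TRUE_LAG_MS + stop_delay - start_delay

readings = [one_stopwatch_reading() for _ in range(10_000)]
mean_ms = sum(readings) / len(readings)
spread_ms = max(readings) - min(readings)
print(f"mean reading ~{mean_ms:.0f}ms, spread ~{spread_ms:.0f}ms (true lag {TRUE_LAG_MS:.0f}ms)")
```

Even with a perfectly steady true lag, individual hand-timed readings scatter over a range of roughly ±120ms, which is why a single stopwatch figure of "80-120ms" is not credible.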
| drkohler said: Wow.. and I thought MTV is a music TV thing, apparently they have turned into hardware experts now... But ok, that thing is so utter bullshit, let me just mention a few things: ... |
Why would they need to track motion? The avatar skeleton is directly mapped to the player skeleton. Are you saying the neural network needs a series of frames to determine a skeletal system? Collision detection is all that is necessary past that. Or am I missing something?
| drkohler said: 2. In the video, there was only one short sequence where the person and the avatar were in the same video pictures. The lag was clearly visible around 0.2-0.3s. This is no surprise as the Natal system has to compare several subsequent frames to determine direction of body movement. At 30Hz, the data acquisition time is already 66-99ms (2-3 frames). Add some overhead for data processing in Natal, sending data to the Box, and data processing in the Box, and you are over 100ms for pure hardware delay. |
Also, I wanted to get some hard numbers on the flow. Correct me if I am wrong on anything.
Image capture = 33.33ms (30Hz camera)
Any data processing in Natal? = ??ms
USB transfer to the Xbox = ~20ms, from what I have read on USB latency.
Xbox processing to get the skeletal system = ~10ms (http://www.newscientist.com/article/mg20527426.800-microsofts-bodysensing-buttonbusting-controller.html?DCMP=OTC-rss&nsref=tech)
Any other game code latency = ??ms
Latency from Xbox to display = ??ms
Display latency = ~20-80ms (0.02-0.08s) depending on display type.
There are a couple of questionable steps. At minimum, the system has around ~63.33ms of latency (this does not include display latency, game code latency, etc.). The article also mentions that the skeletal system can be generated from any single frame.
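Putting those rough per-stage guesses into a quick back-of-the-envelope sum (the ??ms stages are left as unknowns and simply excluded from the minimum):

```python
# Back-of-the-envelope sum of the pipeline stages listed above. The numbers
# are the rough guesses from this post, not measurements; stages set to None
# are the "??ms" unknowns and are excluded from the minimum.
stages_ms = {
    "image capture (one frame at 30Hz)": 1000.0 / 30.0,   # ~33.33ms
    "data processing in Natal": None,                      # ??ms
    "USB transfer to the Xbox": 20.0,                      # rough guess
    "Xbox processing to get the skeletal system": 10.0,    # per the article
    "other game code latency": None,                       # ??ms
    "Xbox to display": None,                                # ??ms
    "display latency": None,                                # excluded, as above
}

minimum_ms = sum(ms for ms in stages_ms.values() if ms is not None)
print(f"known stages alone sum to ~{minimum_ms:.2f}ms")    # ~63.33ms
```

Any of the unknown stages would only add to that ~63.33ms floor.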
| rccsetzer said: Impossible to play street fighter IV using Natal. |
I was hoping we would never see a fighting game for Natal; how are you supposed to do a shoryuken? Then I saw the Killer Instinct 3 thread....

| JaggedSac said: Why would they need to track motion? |
To determine if (parts of) the skeleton actually move(s).....?
Well, I think we can put the issue of lag to rest now; clearly it is not a problem, and all of these tests are still being done with the old build.
Good improvement from its debut. I hope to see this time reduced further through a proper analysis.
| drkohler said: To determine if (parts of) the skeleton actually move(s).....? |
They are not mapping gestures to model movement. The software is generating an X, Y, Z position for each point on the skeletal system. For each frame the camera outputs, the game just plops those positions onto the model and renders them to the screen. The only other thing necessary for this game is to determine the velocity of the parts of the body, so that the ball can react to softer and harder hits. In this case, only the previous positions for each point are needed: if the hand point moved this far since the previous frame, the ball should react this much to being hit. This has no bearing on latency. I guess you are thinking gestures are necessary here, but they are not. Everything in the demo is based on absolute positions.
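As a minimal sketch of that idea (hypothetical names throughout; Joint, update_avatar and FRAME_DT are placeholders, not the actual Natal/Xbox API), positions are copied straight onto the avatar each frame, and velocity is derived only from the previous frame's positions:

```python
# Illustrative sketch only (hypothetical names, not the actual Natal/Xbox API):
# each frame, the absolute joint positions are copied straight onto the avatar,
# and a per-joint velocity is derived from the previous frame's positions, e.g.
# to scale how hard the ball reacts to a hit. No gesture recognition or
# multi-frame motion tracking is needed.
from dataclasses import dataclass

FRAME_DT = 1.0 / 30.0  # camera delivers a new skeleton at ~30Hz

@dataclass
class Joint:
    x: float
    y: float
    z: float

def update_avatar(prev_joints: dict, curr_joints: dict) -> dict:
    """Use positions as-is for the avatar and compute per-joint velocities."""
    velocities = {}
    for name, curr in curr_joints.items():
        prev = prev_joints.get(name, curr)
        velocities[name] = (
            (curr.x - prev.x) / FRAME_DT,
            (curr.y - prev.y) / FRAME_DT,
            (curr.z - prev.z) / FRAME_DT,
        )
        # avatar.set_joint(name, curr)  # position mapped directly, no history needed
    return velocities

# Example: the magnitude of velocities["right_hand"] could scale the ball's bounce.
prev = {"right_hand": Joint(0.0, 1.0, 2.0)}
curr = {"right_hand": Joint(0.1, 1.0, 1.9)}
print(update_avatar(prev, curr))
```

Since the velocity only needs the immediately preceding frame, it adds no buffering delay beyond what the 30Hz capture already imposes.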