Solid_Snake4RD said:
They could have meant that Kinect is rumoured to be using (or not using) an extra processor. If Kinect doesn't have its own processor then it could be true.
AFAIK Kinect takes a 'raw' image from the depth camera, and processes it (locating bodies/skeletons) with the Xenon processor/system resources.
[The chip that PrimeSense uses for the post-processing only tracks 2 skeletons, (AFAICT) only produces 2D skeletons, and would add lag to the system - so for Kinect it was sensibly not used.]
So, yes, that would be directly comparable to doing it directly on the Cell processor.
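Just to make "the host processes the raw depth image" concrete - this is *not* Microsoft's actual skeletal-tracking algorithm (which reportedly does per-pixel classification and a lot more), just a toy sketch of the very first stage the console CPU would have to do itself when all it gets is a raw depth buffer. The range thresholds and blob size are made-up placeholders:

```python
import numpy as np
from scipy import ndimage

def find_body_blobs(depth_mm, near=800, far=3500, min_pixels=2000):
    """Rough first stage of host-side body finding on a raw depth frame.

    depth_mm: 2D uint16 array of per-pixel depth in millimetres
              (the kind of 'raw' buffer the sensor hands to the console).
    Returns (label, pixel_count, centroid) for each large foreground blob.
    """
    # Keep only pixels inside a playable range; 0 usually means 'no reading'.
    mask = (depth_mm > near) & (depth_mm < far)

    # Group connected foreground pixels into candidate bodies.
    labels, n = ndimage.label(mask)
    blobs = []
    for lbl in range(1, n + 1):
        count = int((labels == lbl).sum())
        if count >= min_pixels:                      # ignore small clutter
            cy, cx = ndimage.center_of_mass(labels == lbl)
            blobs.append((lbl, count, (cx, cy)))
    return blobs

# e.g. with a fake 640x480 frame:
# frame = np.zeros((480, 640), dtype=np.uint16); frame[100:400, 250:400] = 2000
# print(find_body_blobs(frame))
```

Everything after this (fitting joints, tracking over time) is where the real work - and the CPU/SPU time - goes.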
.....
However... if the device is (as stated) being used for 3d chat, then it likely has nothing (whatsoever) to do with Kinect.
Expected eyetoy3d: 2x RGB cameras.
Kinect: 1x RGB camera, 1x 'depth sensor'.
For 'skeletal detection':
- Kinect produces an accurate 3D depth map that gets post-processed into a fairly accurate skeleton.
- eyetoy3d gets 2 images that would need insane amounts of "black magic" (stereo correspondence - see the sketch after this list) to find any objects.
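The "black magic" being: before you can even start looking for a skeleton, you'd have to match blocks between the two images to estimate disparity and then convert that to depth. A minimal sketch, assuming made-up focal length and lens spacing for the hypothetical eyetoy3d (real products would also need calibration, rectification and heavy filtering on top):

```python
import cv2
import numpy as np

FOCAL_PX = 600.0      # assumed focal length in pixels (placeholder)
BASELINE_M = 0.06     # assumed distance between the two lenses, 6 cm (placeholder)

def stereo_depth(left_bgr, right_bgr):
    """Estimate a depth map from a stereo pair via plain block matching."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # OpenCV's basic block matcher; disparities come back as 16x fixed point.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    depth_m = np.zeros_like(disparity)
    valid = disparity > 0
    depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]  # Z = f*B/d
    return depth_m   # noisy, full of holes wherever matching fails
```

And even then the result is nowhere near the clean depth map Kinect hands you for free - you'd still need the whole skeleton stage on top of it.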
For 'video chat':
- Kinect would have to 'estimate/calculate' left/right eye views, along with guessing the bits it can't see. It should be able to produce *an* image, but not exactly 3D.
- eyetoy3d gets 2 images, sends those directly to the output TV - and immediately has 'perfect' 3D chat. (A rough sketch of both approaches follows below.)
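To show the asymmetry: with two real cameras you just pack the frames for a 3D TV, whereas with one RGB view plus depth you have to synthesise the second eye by shifting pixels by a depth-dependent amount, which leaves holes where the camera never saw anything. Both functions here are toy sketches with made-up constants, not anything from an actual SDK:

```python
import numpy as np

def pack_side_by_side(left_rgb, right_rgb):
    """eyetoy3d case: two real frames, packed half-width side-by-side
    (one of the common 3D TV input formats)."""
    half = lambda img: img[:, ::2]          # crude horizontal squeeze
    return np.hstack([half(left_rgb), half(right_rgb)])

def fake_right_eye(rgb, depth_mm, eye_shift_px=12.0, far_mm=4000.0):
    """Kinect-style case: only one RGB view + depth, so the 'right eye'
    is synthesised by a depth-dependent pixel shift (toy DIBR)."""
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    # Near pixels get more parallax than far pixels.
    shift = (eye_shift_px * (1.0 - np.clip(depth_mm, 1, far_mm) / far_mm)).astype(int)
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip(xs + shift[y], 0, w - 1)
        out[y, new_x] = rgb[y, xs]
    return out   # note the black holes left where the camera had no view
```

The first function is basically "free"; the second is the estimation/hole-guessing problem I mean above.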
If true, this looks more like Sony pushing harder into 3D TVs/"3DS-style tech" than Sony showing an interest in Kinect. (It should also be relatively cheap... ~2x the cost of a standard EyeToy, as it's literally just another camera plus increased bandwidth.)