KichiVerde said:
The big difference between the PSEye and Kinect is the depth camera. Without it the PSEye can only emulate what the Kinect does. Of course when you throw in the MOVE you get more versatility, but it still can't compare.
Depth sensors are no new thing, but Kinect is the first time the technology has been available for so cheap. Just look what people are doing with it.
http://kinecthacks.net/
The Wiimote and Move are limited in that their input is merely planes in a three dimensional space. Kinect uses its depth sensors to create actual three dimensional shapes. What people are doing with that input is unbelievable.
|
Both the wiimote and move can work in 3 dimensional space.
The wiimote is the most limited: its depth sensing only works while you keep the wiimote pointed at the sensor bar. The wiimote uses the distance it sees between the two infrared dots on the sensor bar to work out how far away you are. Not many games actually use this.
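For a feel of the math, here's a rough pinhole-camera sketch in Python. The LED spacing and focal length are assumed numbers for illustration, not anything Nintendo publishes; the point is just that distance falls out of the pixel gap between the two dots.

# Rough sketch of the wiimote's distance trick (assumed numbers, not Nintendo's code).
# The wiimote's IR camera reports pixel positions of the two sensor-bar LED clusters;
# with a pinhole-camera model, distance is inversely proportional to their separation.

SENSOR_BAR_WIDTH_CM = 20.0   # approximate real spacing between the two LED clusters
FOCAL_LENGTH_PX = 1300.0     # assumed focal length of the wiimote IR camera, in pixels

def wiimote_distance_cm(dot1, dot2):
    """dot1/dot2 are (x, y) pixel positions of the two IR dots."""
    pixel_separation = ((dot1[0] - dot2[0]) ** 2 + (dot1[1] - dot2[1]) ** 2) ** 0.5
    if pixel_separation == 0:
        return None          # dots overlap or are lost: no estimate
    return FOCAL_LENGTH_PX * SENSOR_BAR_WIDTH_CM / pixel_separation

print(wiimote_distance_cm((400, 380), (520, 384)))   # about 216 cm with these made-up readings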
The ps-eye calculates the distance to the move by looking at the size and brightness of the ball on top of the move: the closer you are, the bigger the ball looks to the camera. Ironically it doesn't work that well under bright lighting conditions. It has already warned me numerous times to close the curtains when the low winter sun shines directly into the window. Tumble makes excellent use of the 3d positioning and works best if you're about 4 ft away from the camera, so that the size difference of the ball is most noticeable when you move forward and backward.
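Same inverse-size idea for the move ball, sketched below; the ball diameter and camera focal length are assumptions, not Sony's calibration.

# The glowing ball has a known physical diameter, so its apparent size in the
# ps-eye image gives the distance. Numbers are illustrative assumptions only.

BALL_DIAMETER_CM = 4.5       # roughly the real diameter of the Move sphere
FOCAL_LENGTH_PX = 540.0      # assumed ps-eye focal length in pixels

def move_distance_cm(ball_radius_px):
    if ball_radius_px <= 0:
        return None
    return FOCAL_LENGTH_PX * BALL_DIAMETER_CM / (2.0 * ball_radius_px)

# At roughly 120 cm (about 4 ft) the ball is ~10 px in radius; step ~30 cm closer
# and it grows to ~13.5 px, which is why the size change is easiest to see there.
print(move_distance_cm(10.0), move_distance_cm(13.5))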
Kinect shines an infrared dot pattern at the player and works out distance from how that pattern is distorted, producing a depth image in which the brighter you look to the camera, the closer you are. Direct sunlight can interfere with this, as can very reflective and non-reflective materials.
The biggest advantage kinect has is that it is very easy to separate the player from the background with that infrared depth/brightness image (except when you're sunk into a comfy couch). A lot of developers only use this (hardware) background filtering and don't need the skeletal tracking (e.g. Your Shape).
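A minimal sketch of that background filtering, assuming you already have a raw Kinect depth frame as a 2D numpy array (grabbed through something like libfreenect); the near/far cutoffs here are arbitrary assumptions.

import numpy as np

def player_mask(depth, near=400, far=1500):
    """Return a 0/255 mask of pixels whose raw depth falls between near and far."""
    mask = np.zeros(depth.shape, dtype=np.uint8)
    mask[(depth > near) & (depth < far)] = 255
    return mask

# Typical use: apply the mask to the colour frame so only the player remains, e.g.
# player_only = cv2.bitwise_and(rgb, rgb, mask=player_mask(depth))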
The ps-eye can only separate the player from the background by detecting movement: it compares the current picture to the previous picture to find the differences. That works best under bright lighting conditions; any graininess in the image interferes with the movement detection and makes it less precise.
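Here's a minimal frame-differencing sketch with OpenCV and any webcam. The blur and threshold values are guesses; a grainy low-light image needs a higher threshold, which is exactly why it gets less precise in the dark.

import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    diff = cv2.absdiff(gray, prev)                        # per-pixel change since last frame
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow("motion", motion)
    prev = gray
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()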
As long as you move your whole body, the software can also use the size of the moving area to estimate the distance between the player and the camera. So for a kung fu style full-body game the ps-eye can do a pretty good job of simulating kinect, as long as there's plenty of light.
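A crude follow-on to the sketch above: the fraction of pixels that changed gives a rough "how close is the player" number, provided the whole body is moving. The example values are made up.

import cv2

def rough_player_scale(motion_mask):
    moving = cv2.countNonZero(motion_mask)
    total = motion_mask.shape[0] * motion_mask.shape[1]
    return moving / float(total)   # e.g. ~0.3 up close, ~0.05 at the back of the room

# if rough_player_scale(motion) > 0.25: the player has probably stepped forward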
That video you linked can be done with any webcam. As long as you have good lighting you only need to compare the current picture with the previous one to detect the movement, and thus your fingers (with some extra work to filter out your arm movements). Of course it's ten times easier with kinect: you only need to look at what's within a specific depth/brightness range to pick out your fingers.
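A sketch of that depth-range trick, again assuming a raw Kinect depth frame in numpy; the band values are assumptions, just a thin slice in front of the body where outstretched fingers would sit.

import numpy as np

def finger_mask(depth, hand_near=600, hand_far=750):
    """Keep only pixels in a thin depth band in front of the body."""
    mask = np.zeros(depth.shape, dtype=np.uint8)
    mask[(depth > hand_near) & (depth < hand_far)] = 255
    return mask   # run cv2.findContours on this to pick out the individual fingertips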