Adinnieken said:

The difference is that Kinect has always had voice command capabilities superior to anything PCs have ever had.

You open the mic, you define the phrases you want to listen for, you wait for the input, and you associate that input with a command. With Kinect 1, timing gestures is certainly a challenge, because you'll never get 1:1 tracking, so you don't use Kinect 1 for a 1:1 input method. Kinect 2, on the other hand, offers a far superior performance rate on input recognition, on the order of 50% better, and the perceptible difference between what you do and what happens on screen is below the threshold of human perception. With that in mind, the challenge becomes coordinating the animation with the gesture itself, rather than waiting for a button input and animating from that. I don't disagree that would take time, but again, once done, that's reusable code.
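To give a rough idea, that open-mic pattern looks something like this. This is a hypothetical sketch in Python, not actual Kinect SDK code (the real pipeline exposes speech through C# APIs); the callback, command names, and confidence threshold are all stand-ins:

```python
# Sketch of the grammar-based voice-command pattern described above:
# declare the exact phrases you listen for up front, then dispatch each
# recognized phrase to a command handler. Constraining the vocabulary
# like this is what makes recognition reliable.

def reload_weapon():
    print("reloading")

def open_map():
    print("map opened")

# The "grammar": the fixed set of phrases the recognizer listens for.
COMMANDS = {
    "reload": reload_weapon,
    "show map": open_map,
}

def on_phrase_recognized(phrase, confidence):
    """Recognizer callback: ignore low-confidence hits, then dispatch."""
    if confidence < 0.7:  # threshold is an illustrative choice
        return
    handler = COMMANDS.get(phrase)
    if handler is not None:
        handler()

# Simulated recognition events, standing in for the microphone stream.
for phrase, conf in [("show map", 0.92), ("reload", 0.55), ("reload", 0.81)]:
    on_phrase_recognized(phrase, conf)
```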

You and others make it sound as though developers need to reinvent the wheel every time they want to implement Kinect in a game; they don't. Not only do they not have to reinvent the wheel, but the more often they apply what they learned from doing it once, the more easily they'll recognize where it can be used, where it can't, and how to tune it for both input and presentation. That is exactly why Microsoft includes Kinect with the Xbox One.

Yes, that may mean that for a while some developers use only voice commands while others try gestures, but as they grow more comfortable with Kinect development, they'll explore different uses. That much I don't doubt.

By the way, as surprising as it sounds, gesture recognition with Kinect can require less code than voice recognition, depending on how complex the gestures are. A 2D gesture requires very simple code, even when it's a 3D gesture flattened to 2D. A true 3D gesture involves a little more work, but not much.
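To give a sense of how little code a 2D gesture needs, here's a hypothetical sketch: a right swipe detected as the hand's x position (relative to the shoulder, with the depth axis dropped, i.e. a 3D motion flattened to 2D) crossing a distance threshold within a frame window. The thresholds and the hand-written frame data are illustrative, not SDK values:

```python
# Sketch of 2D swipe detection: track the hand's x offset from the
# shoulder per frame and look for enough rightward travel within a
# short window. Depth (z) is simply ignored, flattening 3D to 2D.

SWIPE_DISTANCE = 0.4  # metres of lateral travel required (illustrative)
MAX_FRAMES = 15       # frames the motion must complete within (illustrative)

def detect_swipe_right(hand_x_history):
    """hand_x_history: hand x minus shoulder x, one value per frame.
    True if the hand moved SWIPE_DISTANCE to the right within the last
    MAX_FRAMES frames and ended at its rightmost point."""
    window = hand_x_history[-MAX_FRAMES:]
    if len(window) < 2:
        return False
    return (max(window) - window[0] >= SWIPE_DISTANCE
            and window[-1] == max(window))

# Hand drifting right across successive frames, relative to the shoulder:
frames = [0.00, 0.05, 0.12, 0.20, 0.29, 0.38, 0.45]
print(detect_swipe_right(frames))  # True: ~0.45 m of travel in 7 frames
```

A true 3D gesture would add the depth axis and some tolerance around the expected path, but the structure stays the same.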

My brother does UI development aimed at aiding the disabled. The software products he develops use Kinect as the sole input method.

I don't disagree with you that Kinect will become easier to implement down the line.
I am of the opinion that 3rd party developers won't put much effort into balancing or optimizing a game for multiple, and in this case very different, input methods. If they can't even be bothered to get dual analog and mouse+kb both working optimally, what are the chances they'll go out of their way for Kinect 2?

For example, take a new IP like Dishonored coming out on PC, XB1, PS3, and Wii U. Say they expect 30% of sales on XB1 and half of that user base to be excited about the Kinect features. How much extra effort will they put into adjusting the gameplay for that 15% of players? Gameplay and input method depend on each other: the more input methods you support, the fewer input-specific things you can do.

Exclusives might put more effort into balancing, but I would still expect there to be one preferred input method with which the game works best. So having Kinect in every box won't change much.