SvennoJ said:
Adinnieken said:
SvennoJ said:

Even with re-using code, it still takes time to figure out how to apply it to new games. The more complex the input method, the more testing is needed. For example: is it physically possible to make certain gestures one after another in the allotted time? Does the game need to be slowed down to allow for gestures to be completed or commands to be spoken in between the actions?
It's not just receiving the inputs from Kinect, it's also tailoring the game to an extra input method and balancing the gameplay between both. A lot of developers don't even bother anymore to optimize a game for mouse use on PC, even though those APIs are certainly mature enough.

Sure, some tacked-on Kinect use is easy to add to a game; I wouldn't expect much from it from 3rd parties though.

In some game types, yes.  Game types that require timed gestures.  Not all game types would require that, however.  Skyrim's voice commands, for example, allow you to speak at any time.  It's an open mic, so Kinect is waiting for you to say something.  You just have to say it, and when you do, the game interprets the input as a command.  It looks for a gesture movement, and if it sees it, it interprets that gesture as the command.  There's no timing involved, other than that the gesture itself must be executed appropriately.
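Something like this rough sketch is all that recognition flow amounts to. Everything here is made up for illustration: speech_api stands in for whatever speech wrapper the engine actually uses, and the phrases aren't from any real game.

```python
# Minimal sketch of an "open mic" command loop, assuming a hypothetical
# speech_api wrapper around whatever recognizer the engine exposes.
# Command phrases are registered up front; the recognizer runs continuously
# and the game simply reacts whenever a registered phrase is heard.

COMMANDS = {
    "unrelenting force": lambda game: game.cast_shout("unrelenting_force"),
    "open map":          lambda game: game.ui.open_map(),
    "quick save":        lambda game: game.save("quicksave"),
}

def on_phrase_recognized(game, phrase, confidence):
    # Reject low-confidence hits so background chatter isn't treated as input.
    if confidence < 0.7:
        return
    action = COMMANDS.get(phrase)
    if action:
        action(game)

def game_loop(game, speech_api):
    speech_api.load_grammar(list(COMMANDS.keys()))  # define what can be said
    speech_api.start_listening()                    # mic stays open
    while game.running:
        for phrase, confidence in speech_api.poll_results():
            on_phrase_recognized(game, phrase, confidence)
        game.update()
        game.render()
```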

Yes, you could have situations where the number and complexity of moves would require tight timing.  I'm not arguing that you can't have situations where you need to tune the input actions to the gameplay.  But that doesn't necessarily require significant resources to accomplish.  Coding is what requires significant resources, not tuning.  Coding requires knowledgeable developers, time, and money.  Tuning requires beta/user testing.   Reusing code significantly reduces coding time; if it didn't, everyone would write their own game engine from scratch every time they wrote a new game.  That doesn't happen, because it's absolutely preposterous. 

Trying to suggest that tweaking code to tune the gameplay requires a significant investment is a stretch.  The only instance where that would be true is with a game where the input method was solely Kinect and timed events were the rule, not the exception.  I have no doubt, for example, that Dance Central required a significant amount of time to tune initially.  However, once they had the timings down, that code got reused with Dance Central 2 and now with the new Disney Fantasia game.  The tuning required after that initial work would have been minimal, aside from what was necessary for the performance improvements in Kinect. 
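To illustrate that coding vs. tuning split, here's a rough sketch of how the tunable numbers can live in data while the recognizer code itself ships unchanged between games. All the names and values below are made up for illustration, not taken from any real title.

```python
# Sketch of keeping tuning separate from code, so the same gesture library
# can ship in several games and only this data changes per title.
# All gesture names and thresholds here are illustrative placeholders.

GESTURE_TUNING = {
    "swipe_left":  {"min_speed_m_s": 1.2, "max_duration_s": 0.5},
    "push":        {"min_depth_m": 0.25, "max_duration_s": 0.4},
    "raise_hand":  {"min_height_above_head_m": 0.1},
}

def load_tuning(overrides):
    """Merge per-game overrides (found through playtesting) over the defaults."""
    tuned = {k: dict(v) for k, v in GESTURE_TUNING.items()}
    for gesture, values in overrides.items():
        tuned.setdefault(gesture, {}).update(values)
    return tuned

# A sequel or new title reuses the recognizer code untouched and just
# ships different numbers after its own round of user testing.
dance_game_tuning = load_tuning({"swipe_left": {"max_duration_s": 0.35}})
```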

So once a developer builds the code and tunes it, it can easily be ported to other games.  Which is why I, respectfully, disagree with your premise.

And I have to respectfully disagree with your conclusion.

Having worked for years on user interfaces for touch screen devices, I can say the actual coding part was indeed very little once we re-used all our tools. That didn't do much for the workload though: 1 programmer to implement the UI, a UI designer, graphic artists, a group investigating use case scenarios, and a test team to try out usability and find inconsistencies and improvements.
Adding voice control sent the whole UI design through another giant loop: things that worked smoothly with only a touch screen didn't make much sense with voice input and vice versa. That required a big change in code.

It's not a problem for games made specifically for 1 input device, Kinect for example, but balancing vastly different types of input requires a lot of extra work. Whether that extra work will be done remains to be seen.
Take From Dust for example: certain levels were near impossible to complete with mouse+kb, while they were a breeze with a controller. (They were tuned for constant-speed movement and analog control over sand release.) You can't simply add another control method and put "better with Kinect" on the box.

As for Skyrim, its menus were horrible to begin with, so voice control was a valid option there. The game could really have benefited from a configurable quick select wheel though. And how does it not have an effect on gameplay when, in ME3, the game pauses while you select another attack, yet keeps going while you voice it and wait for a response? Either the game has to be tuned for that, or hard difficulty is extra hard with Kinect.

And there's the FPS example as well. FPS games changed radically when migrating from kb+mouse to dual analog. Having a library with Kinect inputs is the easy part.

I agree with one scenario: once a developer has done all the hard work for their first game, they can turn out sequel after sequel using the exact same input method. Sounds boring to me.

The difference is that Kinect has always had voice command capabilities superior to anything PCs have ever had.

You open the mic, you define what you want to be said, you wait for the input, and you associate that with a command.  With Kinect 1, timing gestures is certainly a challenge, because you'll never get 1:1.  So you don't use Kinect 1 for a 1:1 input method.  Kinect 2, on the other hand, offers far superior input recognition performance, around 50% better.  The perceptible difference between what you do and what happens on screen is below human recognition.  So with that in mind, the challenge would be that rather than waiting for an input and animating based on that button input, the animation coordinates with the gesture.  I don't disagree that would take time, but again, once done, that's reusable code.
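As a rough sketch of what "the animation coordinates with the gesture" could look like, assuming a hypothetical skeleton object with joint positions (none of this is real SDK code, just the idea):

```python
# Sketch of driving an animation from gesture progress instead of from a
# discrete button press. The joint attributes and the 0..1 progress value
# are placeholders for whatever the skeleton-tracking API actually reports.

def punch_progress(shoulder, hand, arm_length):
    """How far the hand has extended toward the camera, normalised to 0..1."""
    extension = shoulder.z_from_camera - hand.z_from_camera
    return max(0.0, min(1.0, extension / arm_length))

def update_player_animation(player, skeleton):
    progress = punch_progress(skeleton.right_shoulder,
                              skeleton.right_hand,
                              player.arm_length)
    # Instead of playing a canned animation on a button event, scrub the
    # punch animation to match where the player's arm actually is.
    player.animations["punch"].set_normalized_time(progress)
```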

You and others make it sound as though developers need to relearn the wheel every time they want to implement Kinect in a game; they don't.  Not only do they not have to relearn the wheel, but the more often they use the knowledge they've gained from doing it once, the more easily they will be able to recognize where it can be used, where it can't be used, and how to tune it both for input and presentation.  Which is exactly why Microsoft includes Kinect with the Xbox One.

Yes, that may mean that for a while some developers will only use voice commands and others will try gestures, but as developers become more comfortable with Kinect development, they'll explore different uses.  That much I don't doubt. 

By the way, as surprising as it sounds, gesture recognition with Kinect requires less code than voice recognition, depending upon how complex the gestures are.  A 2D gesture requires very simple code, even if it's a 3D gesture flattened to 2D.  A true 3D gesture involves a little more work, but not much.
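For instance, a horizontal swipe can be detected with a handful of lines like the sketch below; the thresholds are made-up placeholders that would come out of tuning, not real values from any SDK.

```python
# Rough sketch of why a 2D gesture can be very little code: a left swipe is
# just "the hand moved far enough, fast enough, without wandering vertically".
# Positions are metres in the sensor's coordinate space; timestamps in seconds.

def detect_swipe_left(hand_history, min_distance=0.4, max_time=0.5,
                      max_vertical_drift=0.15):
    """hand_history: list of (timestamp, x, y) samples, oldest first."""
    if len(hand_history) < 2:
        return False
    t0, x0, y0 = hand_history[0]
    t1, x1, y1 = hand_history[-1]
    moved_left   = (x0 - x1) >= min_distance
    fast_enough  = (t1 - t0) <= max_time
    stayed_level = all(abs(y - y0) <= max_vertical_drift
                       for _, _, y in hand_history)
    return moved_left and fast_enough and stayed_level
```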

My brother does UI development for the purposes of aiding the disabled.  The software products he develops use Kinect as the sole input method.