
Forums - Microsoft Discussion - Will putting Kinect in every box really change anything?

Kinect is mostly for casuals, and casuals never pay $499.



Tsubasa Ozora

No one can slow him down, no one can fool him. Always the right shot, always at the right time. Super football, fair football. He is our top scorer and hero.

Adinnieken said:
SvennoJ said:

Even with code re-use, it still takes time to figure out how to apply it to new games. The more complex the input method, the more testing is needed. For example: is it physically possible to make certain gestures one after another in the allotted time? Does the game need to be slowed down to allow gestures to be completed, or commands to be spoken, in between the actions?
It's not just receiving the inputs from Kinect; it's also tailoring the game to an extra input method and balancing the gameplay between both. A lot of developers don't even bother anymore to optimize a game for mouse use on PC, even though those APIs are certainly mature enough.

Sure, some tacked-on Kinect use is easy to add to a game, but I wouldn't expect much more than that from 3rd parties.

In some game types, yes.  Game types that require timed gestures.  Not all game types require that, however.  Skyrim's voice commands, for example, allow you to speak at any time.  It's an open mic, so Kinect is waiting for you to say something.  You just have to say it, and when you do, the game interprets the input as a command.  Likewise, it looks for a gesture movement, and if it sees one, it interprets that gesture as the command.  There's no timing involved, other than that the gesture itself must be executed appropriately.
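To make the open-mic pattern concrete, here is a minimal sketch (hypothetical names, not actual Skyrim or Kinect SDK code): a fixed phrase-to-action table, fed by recognition events that a real speech engine would supply.

```python
# Minimal sketch of an open-mic voice command dispatcher.
# Hypothetical: recognition events would come from the platform's
# speech engine; here they are simulated with direct calls.

COMMANDS = {
    "unrelenting force": lambda: print("Shout: Fus Ro Dah!"),
    "quick save":        lambda: print("Game saved."),
    "open map":          lambda: print("Map opened."),
}

CONFIDENCE_THRESHOLD = 0.7  # reject low-confidence recognitions

def on_speech_recognized(phrase: str, confidence: float) -> None:
    """Called whenever the engine recognizes a phrase. The mic is
    always open; phrases that match no command are simply ignored."""
    action = COMMANDS.get(phrase.lower())
    if action and confidence >= CONFIDENCE_THRESHOLD:
        action()

# Simulated recognition events:
on_speech_recognized("Unrelenting Force", 0.92)  # fires the shout
on_speech_recognized("hello there", 0.95)        # ignored: not a command
```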

Yes, you could have situations where the number and complexity of moves would require tight timing.  I'm not arguing that you can't have situations where you need to tune the input actions to the gameplay.  But that doesn't necessarily require significant resources to accomplish.  Coding is what requires significant resources, not tuning.  Coding requires knowledgeable developers, time, and money.  Tuning requires beta/user testing.  Reusing code significantly reduces coding time; if it didn't, everyone would write their own game engine from scratch every time they wrote a new game.  That doesn't happen, because it would be absolutely preposterous.
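To make the coding-versus-tuning distinction concrete, a minimal sketch (hypothetical values and names, not any real engine's API): the recognizer logic is written once and reused, while tuning is just the numbers that playtesting adjusts.

```python
from dataclasses import dataclass

@dataclass
class GestureTuning:
    """Values adjusted during playtesting; the recognizer code below never changes."""
    max_gesture_seconds: float = 1.2   # time allowed to complete one gesture
    chain_window_seconds: float = 0.8  # gap allowed between chained gestures
    min_swipe_distance: float = 0.25   # normalized hand travel to count as a swipe

# Reused recognizer code, parameterized by the tuning:
def gesture_completed(elapsed: float, distance: float, t: GestureTuning) -> bool:
    return elapsed <= t.max_gesture_seconds and distance >= t.min_swipe_distance

# "Tuning" is then just shipping different numbers per game:
dance_game = GestureTuning(max_gesture_seconds=0.9, chain_window_seconds=0.5)
rpg_menus  = GestureTuning(max_gesture_seconds=2.0, min_swipe_distance=0.15)

print(gesture_completed(elapsed=0.8, distance=0.4, t=dance_game))  # True
```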

Trying to suggest that tweaking code to tune the gameplay requires a significant investment is a stretch.  The only instance where that would be true is a game where the input method was solely Kinect and timed events were the rule, not the exception.  I have no doubt, for example, that Dance Central required a significant amount of time to tune initially.  However, once they had the timings down, that code got reused in Dance Central 2 and now in the new Disney Fantasia game.  After that initial pass, the tuning required would have been minimal, only as necessary for the performance improvements in Kinect.

So once a developer builds the code and tunes it, it can easily be ported to other games.  So I, respectfully, disagree with your premise.

And I have to respectfully disagree with your conclusion.

Having worked for years on user interfaces for touch screen devices: indeed, the actual coding part was very little thanks to re-use of all our tools. That didn't do much for the workload, though. One programmer to implement the UI, a UI designer, graphic artists, a group investigating use case scenarios, and a test team to try out usability and find inconsistencies and improvements.
Adding voice control sent the whole UI design through the review loop again; things that worked smoothly with only a touch screen didn't make much sense by voice input, and vice versa. That required a big change in code.

It's not a problem for games made specifically for one input device, Kinect for example, but balancing vastly different types of input requires a lot of extra work. Whether that extra work will be done remains to be seen.
Take From Dust, for example: certain levels were near impossible to complete with mouse+kb, while they were a breeze with a controller. (They were tuned for constant-speed movement and analog control over sand release.) You can't simply add another control method and put "better with Kinect" on the box.
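The From Dust mismatch comes down to the two devices producing differently shaped signals. A sketch (hypothetical numbers, not the game's code): the stick gives a proportional rate, the mouse button an all-or-nothing one, so a level tuned around fine-grained release rates punishes the binary device.

```python
def sand_release_rate_stick(stick_y: float) -> float:
    """Analog stick: release rate is proportional, clamped to 0.0..1.0."""
    return max(0.0, min(1.0, stick_y))

def sand_release_rate_mouse(button_down: bool) -> float:
    """Mouse button: only full-on or full-off."""
    return 1.0 if button_down else 0.0

# A level tuned to need a steady 30% release rate is easy on the stick
# (hold it a third of the way), but the mouse can only approximate 0.3
# by rapidly pulsing the button.
print(sand_release_rate_stick(0.3), sand_release_rate_mouse(True))  # 0.3 1.0
```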

As for Skyrim, its menus were horrible to begin with, so voice control was a valid option there. The game could really have benefited from a configurable quick-select wheel, though. And how does it not have an effect on gameplay when, in ME3, the game pauses while you select another attack with the controller, yet keeps going while you voice the command and wait for a response? Either the game has to be tuned for that, or hard difficulty is extra hard with Kinect.
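One way a game could be "tuned for that" (purely illustrative; no claim that ME3 actually does this) is to scale game speed while a voice command is pending, mirroring the pause the controller menu gets:

```python
def time_scale(voice_pending: bool, difficulty: str) -> float:
    """Game-speed multiplier while a voice command is being spoken.
    Hypothetical tuning: hard difficulty compensates less, so voice
    input stays riskier there, as the post suggests."""
    if not voice_pending:
        return 1.0
    return {"easy": 0.0, "normal": 0.3, "hard": 0.7}[difficulty]

print(time_scale(True, "normal"))  # 0.3: the world slows while you speak
```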

And there's the FPS example as well. FPS games changed radically migrating from kb+mouse to dual analog. Having a library of Kinect inputs is the easy part.

I agree with one scenario: once a developer has done all the hard work for their first game, they can turn out sequel after sequel using the exact same input method. Sounds boring to me.



It may change games, but not in a way I want it to. I would rather use a controller.



It probably won't change the core gamer's mind unless the non-motion controls are amazing, at least until the software is there. But from a developer's view it means a lot knowing that every X1 has one. Now if they decide to put Kinect features in, they know that every person that buys their game can use them.



kljesta64 said:

Kinect is mostly for casuals, and casuals never pay $499.


This statement could not be more true.




I hope it will be a success amongst couples and girls; I can't wait for tons of horny housewives' unaware amateur porn to leak on teh interwebs!!!



Stwike him, Centuwion. Stwike him vewy wuffly! (Pontius Pilate, "Life of Brian")
A fart without stink is like a sky without stars.
TGS, Third Grade Shooter: brand new genre invented by Kevin Butler exclusively for Natal WiiToo Kinect. PEW! PEW-PEW-PEW! 
 


SvennoJ said:

And I have to respectfully disagree with your conclusion.

Having worked for years on user interfaces for touch screen devices: indeed, the actual coding part was very little thanks to re-use of all our tools. That didn't do much for the workload, though. One programmer to implement the UI, a UI designer, graphic artists, a group investigating use case scenarios, and a test team to try out usability and find inconsistencies and improvements.
Adding voice control sent the whole UI design through the review loop again; things that worked smoothly with only a touch screen didn't make much sense by voice input, and vice versa. That required a big change in code.

It's not a problem for games made specifically for one input device, Kinect for example, but balancing vastly different types of input requires a lot of extra work. Whether that extra work will be done remains to be seen.
Take From Dust, for example: certain levels were near impossible to complete with mouse+kb, while they were a breeze with a controller. (They were tuned for constant-speed movement and analog control over sand release.) You can't simply add another control method and put "better with Kinect" on the box.

As for Skyrim, its menus were horrible to begin with, so voice control was a valid option there. The game could really have benefited from a configurable quick-select wheel, though. And how does it not have an effect on gameplay when, in ME3, the game pauses while you select another attack with the controller, yet keeps going while you voice the command and wait for a response? Either the game has to be tuned for that, or hard difficulty is extra hard with Kinect.

And there's the FPS example as well. FPS games changed radically migrating from kb+mouse to dual analog. Having a library of Kinect inputs is the easy part.

I agree with one scenario: once a developer has done all the hard work for their first game, they can turn out sequel after sequel using the exact same input method. Sounds boring to me.

The difference is that Kinect has always had voice command capabilities superior to what PCs have ever had.

You open the mic, you define what you want to be said, you wait for the input, and you associate that with a command.  With Kinect 1, certainly, timing gestures is a challenge, because you'll never get 1:1 tracking.  So you don't use Kinect 1 for a 1:1 input method.  Kinect 2, on the other hand, offers a far superior (roughly 50% better) performance rate on input recognition.  The perceptible difference between what you do and what happens on screen is below the threshold of human recognition.  So with that in mind, the challenge would be that rather than waiting for a button input and animating based on that input, the animation coordinates with the gesture.  I don't disagree that this would take time, but again, once done, that's reusable code.
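A way to picture "the animation coordinates with the gesture" (a general sketch, not Kinect SDK code): instead of a button press triggering a canned animation, each frame maps the tracked gesture's completion fraction onto the animation timeline.

```python
def gesture_progress(start_x: float, end_x: float, hand_x: float) -> float:
    """Fraction of a horizontal swipe completed, clamped to 0..1."""
    return max(0.0, min(1.0, (hand_x - start_x) / (end_x - start_x)))

def set_animation_time(clip: str, t: float) -> None:
    print(f"{clip} at {t:.0%}")  # stand-in for a real animation system

def update_animation(hand_x: float) -> None:
    # Button-driven input would fire-and-forget: play("sword_swing").
    # Gesture-driven input scrubs the clip to match the player's hand.
    t = gesture_progress(start_x=0.2, end_x=0.8, hand_x=hand_x)
    set_animation_time("sword_swing", t)

update_animation(0.5)  # sword_swing at 50%
```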

You and others make it sound as though developers need to reinvent the wheel every time they want to implement Kinect in a game; they don't.  Not only do they not have to reinvent the wheel, but the more often they use the knowledge they've gained from doing it once, the more easily they will be able to recognize where it can be used, where it can't, and how to tune it both for input and presentation.  Which is exactly why Microsoft includes Kinect with the Xbox One.

Yes, that may mean that for a while some developers only use voice commands and others may try gestures, but as developers become more comfortable with Kinect development, they'll explore different uses.  That much I don't doubt.

By the way, as surprising as it sounds, gesture recognition with Kinect requires less code than voice recognition, depending upon how complex the gestures are.  A 2D gesture requires very simple code, even if it is a 3D gesture flattened to 2D.  There is a little more work involved in a true 3D gesture, but not much.
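To give a feel for how little code a 2D gesture can take (a simplified sketch with made-up thresholds, not the Kinect SDK's recognizer), here is a swipe detector over a buffer of tracked hand positions. Flattening a 3D gesture just means dropping each point's z coordinate before this step.

```python
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in normalized screen space

def detect_swipe_right(path: List[Point],
                       min_dx: float = 0.3,
                       max_dy: float = 0.1) -> bool:
    """True if the hand traveled right by at least min_dx while staying
    inside a horizontal band no taller than max_dy."""
    if len(path) < 2:
        return False
    dx = path[-1][0] - path[0][0]
    dy = max(p[1] for p in path) - min(p[1] for p in path)
    return dx >= min_dx and dy <= max_dy

# Simulated hand-position samples from a tracked skeleton:
samples = [(0.30, 0.50), (0.42, 0.51), (0.55, 0.49), (0.68, 0.50)]
print(detect_swipe_right(samples))  # True: 0.38 rightward, 0.02 vertical drift
```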

My brother does UI development for the purposes of aiding the disabled.  The software products he develops use Kinect as the sole input method. 



ChrolloLucilfer said:
The touchpad on the PS4 has two major advantages over Kinect 2. Firstly, the touchpad is a control method which has been used for a lot longer than Kinect and has already been fully accepted. Laptops have been using touchpads as an alternative to the mouse since the late 1990s. So any game developed on the PC which uses the mouse/touchpad is now a lot easier to port over to the PS4 in terms of controls. Ports of existing tablet games are more of a possibility now for the PS4, and maybe even some of the Wii U's touch-based controls could be ported over as well.

The second and most important difference between the two is that the inclusion of the touchpad is not a massive cost expense compared to Kinect. So even if third parties ignore the touchpad, it's not that much of an issue for the PS4 in terms of direction or added console cost. Kinect 2, on the other hand, has forced Microsoft to go with a weaker system and a $100 higher price. Making Kinect 2 mandatory at the cost of power and price shows Microsoft are making a big deal over Kinect 2's features. So if third parties ignore Kinect, it would be a massive problem for Microsoft to justify Kinect 2 with only their own software to showcase it.


I have a touchpad on my laptop; you know what I use? A mouse. As for tablet games, the difference is you see what you're touching, rather than touching the screen, waiting for a hot spot to show, then moving. A good example is Fruit Ninja: perfect for Kinect, but it would be horrible on the touchpad. Kinect didn't force them to go with a weaker system; if Sony had stayed with 4GB the difference would have been negligible. MS's blunder was in the engineering department. And let's be honest, I've said this before: Sony has great devs that could push the Wii U into next gen. They didn't need the most powerful console, they wanted the most powerful. Even if Microsoft had the most power, would we see more games than Sony? Nope. So MS went in the direction they wanted to go in. It's easy for every developer to add Kinect, even if it's just voicing the navigational menu.

There are close to 30 million Kinects in the wild, and they introduced Kinect when the base was around 60 million, so roughly half of 360 owners bought one as an add-on. Now take the Xbox One without Kinect bundled in: when would it reach its first million new Kinect sales? 2-3 years?



Adinnieken said:

The difference is that Kinect has always had voice command capabilities superior to what PCs have ever had.

You open the mic, you define what you want to be said, you wait for the input, and you associate that with a command.  With Kinect 1, certainly, timing gestures is a challenge, because you'll never get 1:1 tracking.  So you don't use Kinect 1 for a 1:1 input method.  Kinect 2, on the other hand, offers a far superior (roughly 50% better) performance rate on input recognition.  The perceptible difference between what you do and what happens on screen is below the threshold of human recognition.  So with that in mind, the challenge would be that rather than waiting for a button input and animating based on that input, the animation coordinates with the gesture.  I don't disagree that this would take time, but again, once done, that's reusable code.

You and others make it sound as though developers need to reinvent the wheel every time they want to implement Kinect in a game; they don't.  Not only do they not have to reinvent the wheel, but the more often they use the knowledge they've gained from doing it once, the more easily they will be able to recognize where it can be used, where it can't, and how to tune it both for input and presentation.  Which is exactly why Microsoft includes Kinect with the Xbox One.

Yes, that may mean that for a while some developers only use voice commands and others may try gestures, but as developers become more comfortable with Kinect development, they'll explore different uses.  That much I don't doubt.

By the way, as surprising as it sounds, gesture recognition with Kinect requires less code than voice recognition, depending upon how complex the gestures are.  A 2D gesture requires very simple code, even if it is a 3D gesture flattened to 2D.  There is a little more work involved in a true 3D gesture, but not much.

My brother does UI development for the purposes of aiding the disabled.  The software products he develops use Kinect as the sole input method.

I do not disagree with you that Kinect use will be easy to implement down the line.
I am of the opinion that 3rd party developers won't put much effort into balancing or optimizing a game for multiple, and in this case very different, input methods. If they can't even be bothered to get dual analog and mouse+kb to both work optimally, what is the chance that they go out of their way for Kinect 2?

For example, take a new IP like Dishonored coming out on PC, XB1, PS3, and Wii U. Let's say they expect 30% of sales on XB1, with half of that userbase excited about the Kinect features. How much extra effort will they put into adjusting the gameplay for 15% of players? Gameplay and input method depend on each other; the more input methods, the fewer specific things you can do.

Exclusives might put more effort into balancing, but I would still expect there to be one preferred input method with which the game works best. So having Kinect in every box won't change much.



SvennoJ said:

I do not disagree with you that Kinect use will be easy to implement down the line.
I am of the opinion that 3rd party developers won't put much effort into balancing or optimizing a game for multiple, and in this case very different, input methods. If they can't even be bothered to get dual analog and mouse+kb to both work optimally, what is the chance that they go out of their way for Kinect 2?

For example, take a new IP like Dishonored coming out on PC, XB1, PS3, and Wii U. Let's say they expect 30% of sales on XB1, with half of that userbase excited about the Kinect features. How much extra effort will they put into adjusting the gameplay for 15% of players? Gameplay and input method depend on each other; the more input methods, the fewer specific things you can do.

Exclusives might put more effort into balancing, but I would still expect there to be one preferred input method with which the game works best. So having Kinect in every box won't change much.

My guess is, since the Xbox 360 controller is available for the PC, they might be trying to push gamers toward it.  The more they can move PC gamers away from the keyboard and mouse, the quicker we can have integrated gameplay between PC and console.

That said, I don't think developers are going to include Kinect functionality where it doesn't make sense or where it couldn't benefit the game.  Yes, I do believe they will implement it in instances where they may not be able to include the same functionality on another platform, but they won't be adding Kinect functionality just for the sake or hell of it.

As much as we like to profess that development studios do things for economies of scale, they do actually do their own R&D and explore new methods and technologies.  Small studios, perhaps not, but the larger developers will definitely be doing R&D to see how the technology works and how they can implement it.  Just like the cloud features.

If there's an advantage to any platform, they'll look into it and explore it; if it makes sense for what they want to do, they'll use it.  You'll likely hear about a lot of companies taking advantage of Microsoft's cloud offering, and there's a good reason behind it: they'll likely be hosting PS4 games on those cloud servers as well as the PC and Xbox One games.  The reason is Microsoft has priced their servers at such a competitive rate that it won't make sense to use other hosts.

I think by including Kinect, more developers will be utilizing it.  They may not completely change the game to focus on Kinect and make it substantially different from another platform's version, but I do believe they'll use some feature of Kinect because they know it's there.  I believe some of that will even spill over to the PS4, because it wouldn't make much sense to build something like facial recognition or voice commands into a game and not have those features available in the PS4 version as well.

I know people think that's crazy, but again, studios are taking the time right now to explore the hardware and see what they can do with it.  They are investing in that research and development now so it can hopefully pay off later.  So, yeah.  Not going to persuade me to think differently at the moment.

If Microsoft were bundling Kinect 2 with the Xbox One later in the lifespan of the console, then I might agree with you.  But with Kinect in every box and available for developers to utilize, I think you'll see developers taking advantage of it.  Even if it is a basic function.