JaggedSac said: Here is one idea of how a Natal UI could work. Instead of an interface with no graphical displays, where control is done through gestures alone, there could be onscreen buttons that are "clicked". With both hands you would make a sweeping upward gesture to bring up the interface, which would contain the buttons corresponding to whatever it is that needs interacting with. For example, a movie interface would have buttons for forward, pause, rewind, etc. Once the hands are swept up, the interface comes on screen.

Natal cannot tell where on the screen you are pointing, so there would be an onscreen effect (a swirl, or perhaps a hand) to represent where your hands are so you can move to the buttons. The effect would be placed on screen by determining the hand position relative to the shoulders and extrapolating that over the resolution of the display device. This way, the person is interacting with the screen relative to themselves and not with the screen directly. That is, if the button is on the bottom left of the screen, the person would move their hand down to the bottom left of their body, no matter where they are sitting. This could also be calculated for people sitting at rotations of less than 90 degrees from Natal. People could relate to the effect on screen and quickly interact with whatever it is that needs interacting with. This will of course make more sense to your brain if your body is at least mostly facing the screen.

The "click" could be performed by moving the hand towards the screen and back twice in succession, much like punching, or tapping the button. Because Natal can detect depth quite easily, the movement would not have to be very large, but in order to avoid false positives, a decent amount of movement would probably provide a better experience.

I am not saying a graphics-less UI would not work with Natal, just that there is more than one way to do this. This method could even work for several games. Imagine a mech-warrior-type game where you are in a cockpit and all the controls are represented on screen. You would maneuver by grabbing joysticks relative to your body, and push and pull levers to speed up or slow down. That would be feasible using this method, so you would have a 1:1 relationship with the virtual cockpit.
Two things I really like about your post:
A) That mech game sounds fun =P
B) You're actually trying to come up with a way Natal could do this, unlike most people, who just say 'this could work' or 'I don't want a nod of my head to turn off the TV'. I think, so long as there isn't this lag issue people are crying about, your idea could work well. As for whether it makes things more efficient, I don't know; it might need something more, or something else entirely.
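Just to make the shoulder-relative mapping and the "push towards the screen to click" idea a bit more concrete, here's a rough sketch of how I imagine the math working. To be clear, none of this is a real Natal API; the Joint class, the reach constant, and the push thresholds are all numbers I made up. It just illustrates normalizing the hand's offset from the shoulder, extrapolating that over the display resolution, and counting two quick pushes toward the sensor as a click.

```python
# Hypothetical sketch only: Natal's real skeleton API is not public.
from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # metres in camera space (positive = user's right, assumed)
    y: float  # metres in camera space (positive = up)
    z: float  # metres of distance from the sensor

SCREEN_W, SCREEN_H = 1920, 1080   # resolution of the display device
REACH = 0.6                        # assumed comfortable arm reach, metres

def hand_to_screen(hand: Joint, shoulder: Joint) -> tuple[int, int]:
    """Map the hand's offset from the shoulder onto screen pixels.

    The user points relative to their own body, so the mapping is the
    same wherever they sit, as long as they mostly face the screen.
    """
    # Normalise the offset to [-1, 1] over the assumed reach.
    nx = max(-1.0, min(1.0, (hand.x - shoulder.x) / REACH))
    ny = max(-1.0, min(1.0, (hand.y - shoulder.y) / REACH))
    # Extrapolate over the display resolution (screen y grows downward).
    px = int((nx + 1.0) / 2.0 * SCREEN_W)
    py = int((1.0 - (ny + 1.0) / 2.0) * SCREEN_H)
    return px, py

class PushClickDetector:
    """Fire a 'click' when the hand is pushed toward the screen and
    pulled back twice within a short window, like tapping a button."""

    PUSH_DEPTH = 0.10   # metres the hand must travel toward the sensor
    WINDOW = 1.0        # seconds allowed for the two taps

    def __init__(self):
        self.rest_z = None     # hand depth when at rest
        self.pushed = False    # currently past the push threshold?
        self.taps = []         # timestamps of completed pushes

    def update(self, hand: Joint, t: float) -> bool:
        if self.rest_z is None:
            self.rest_z = hand.z
        moved = self.rest_z - hand.z
        if not self.pushed and moved > self.PUSH_DEPTH:
            self.pushed = True            # hand went toward the screen
            self.taps.append(t)
        elif self.pushed and moved < self.PUSH_DEPTH / 2:
            self.pushed = False           # hand came back
        self.taps = [p for p in self.taps if t - p <= self.WINDOW]
        if len(self.taps) >= 2:
            self.taps.clear()
            return True
        return False
```

The nice thing about anchoring everything to the shoulder is that moving to a different spot on the couch doesn't change the mapping, which is exactly the "relative to themselves, not the screen" point from your post.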
I wouldn't be surprised if you could program certain motions as shortcuts, and you should be able to set the sensitivity too.
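The way I picture that shortcut idea is a little registry mapping gesture names to actions, with a sensitivity knob that just raises or lowers how confident the recogniser has to be. Completely made-up names here, purely to illustrate:

```python
# Hypothetical sketch: the gesture names and recogniser confidence values
# are invented, not part of any real Natal SDK.
from typing import Callable

class ShortcutRegistry:
    def __init__(self, sensitivity: float = 0.5):
        # 0.0 = only very deliberate motions trigger, 1.0 = hair trigger
        self.sensitivity = sensitivity
        self.shortcuts: dict[str, Callable[[], None]] = {}

    def bind(self, gesture_name: str, action: Callable[[], None]) -> None:
        self.shortcuts[gesture_name] = action

    def on_gesture(self, gesture_name: str, confidence: float) -> None:
        # Require higher recogniser confidence when sensitivity is low.
        threshold = 1.0 - self.sensitivity
        if confidence >= threshold and gesture_name in self.shortcuts:
            self.shortcuts[gesture_name]()

# Example: sweep both hands up to open the on-screen button interface.
registry = ShortcutRegistry(sensitivity=0.4)
registry.bind("both_hands_sweep_up", lambda: print("show movie controls"))
registry.on_gesture("both_hands_sweep_up", confidence=0.8)
```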
An idea I had: instead of motions, voice commands. Think of those movies and TV shows: "Computer, make me coffee." You could start with something people don't normally say (like Natal, or whatever the final product name is) and then say a command. It's not like you would say Natal 20 times in a normal conversation, so it shouldn't interfere with talking. You could set the sensitivity for that as well, so maybe you want Natal to pick up your voice even if you're fairly quiet, or you want to have to speak louder to input commands. Some of it is kind of silly, but it definitely gives a futuristic feel. The ONLY way this works, though, is if the voice recognition is FAR superior to what's on the market today. "Call Becka" ... phone says/shows on screen "Did you mean Jonathon?" Haha.
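Here's roughly how I picture the "wake word, then command" flow, assuming some upstream speech recogniser hands you the transcribed text plus a loudness estimate. There's no public Natal speech API, so the names, the wake word, and the threshold are all made up; the point is only the flow.

```python
# Hypothetical sketch of wake-word-gated voice commands.
WAKE_WORD = "natal"          # something people don't say in normal conversation
volume_sensitivity = 0.3     # lower = picks up quieter speech

COMMANDS = {
    "pause": lambda: print("pausing movie"),
    "turn off the tv": lambda: print("powering off"),
}

def on_utterance(text: str, loudness: float) -> None:
    """Handle one recognised utterance, e.g. 'Natal, pause'."""
    if loudness < volume_sensitivity:
        return                              # too quiet, ignore
    words = text.lower().strip()
    if not words.startswith(WAKE_WORD):
        return                              # ordinary conversation, ignore
    command = words[len(WAKE_WORD):].strip(" ,")
    action = COMMANDS.get(command)
    if action:
        action()

on_utterance("Natal, pause", loudness=0.7)   # -> "pausing movie"
```

Requiring the wake word first is what keeps normal conversation from triggering anything, and the loudness threshold is the "sensitivity" knob from above.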
Or a mix of voice and gestures. Depth/motion sensing, voice commands, there are plenty of possibilities. My one worry about whether it could really work or not has to do with lag. I know most people are saying it wouldn't be over 100 ms of lag, but if it's over even 50 ms, it will be very annoying. We'll have to wait and see, but I am excited about the potential.