yeah we get it, the WiiU is not going to be a powerhouse, we already know that, can we please move on?
Soleron said:
I don't think he understands technology at all. -- Rendering to two screens is as hard as rendering to one screen with twice the resolution or framerate, and the tablet screen is optional much like the second screen on the DS was. Number of cores/GPUs doesn't come into it; game devs can just have the tablet screen blank if they want to make the TV screen look better. |
Wait, what? The number of cores and the GPU play a huge role in its capability to allow this.
If it was properly tested, they wouldn't have slowdowns from using two controllers. It spells terrible R&D for the console, really. Just another reason as of late not to get it. I am actually surprised that this is a real problem. If true, then the WiiU is a lot slower than I first anticipated. What kind of crappy GPU are they using? Not really a question, just rambling here on the WiiU's potential downfalls in tech. It just sounds like these are problems they ran into and didn't care to resolve, so it could stay a bargain console and have a shot at competing with the 360 and PS3. It doesn't even sound like they fixed the faults in tech that the PS3/360 have, which were supposed to be resolved in the next generation. And on top of this you have underpowered tech running the console with two sub-controllers. The sub-controllers' screens are probably like 480i or something. The WiiU will probably only run games at 720p, as that is the norm. Ugh... I don't get it.
What the F- Nintendo... I am lost with their direction.
Galaki said:
I was going to post this but couldn't be bothered to look for who Takahashi is. Sounds like I was right not to post, since he is unlikely to know how to code "Hello World!" in HTML. I can't defend that. |
:(
Well, maybe tomorrow at the next doomed Wii U thread! :D


Soleron said:
You're not getting it. I know how a GPU works and what all the parts do, and the effect on performance. All that is nothing to my actual point: a "single processor" operating on a 2x larger screen is still 2x the work if a completely different image of equal complexity is on the other half (notice I said it's NOT 2x the work for a blank screen, and infer that there is a gradient of work between those two if some is shared), while the OP article claims it's insufficient, as if 2 GPUs would be better when they are not, due to overhead. Also, there is no such thing as a fragment shader or vertex shader any more. All the computational units can perform both tasks. |
All I will say is, the OP article is wrong: a single GPU can run these fine without issues. 2 GPUs "better"? Better how? That could mean anything. He blows my mind. I will do the same now: 3 GPUs is even better! One GPU is plenty to handle this application. Still, the news about the lack of power makes me worried about the final specs of the WiiU. Seems underpowered.
Mr Khan said:
I always got Takahashi confused with Microsoft's Shane Kim, but that's beside the point. |
Maybe Dean Takahashi actually is the guy I'm thinking of.
Did Dean Takahashi write for Kotaku or a similar sounding website?
Speaking of Shane Kim, he seems to have disappeared.
drkohler said:
For somebody who doesn't seem to have the slightest clue about the technologies involved - judging from your "explanation" - you do sound mighty convinced of yourself. There is a significant bottleneck involved when rendering multiple screens on a single CPU <-> single GPU path, but that complexity is too broad to discuss here. Unfortunately, this time the analyst is completely correct in his assumptions and you are not. |
I really do love how you assert that I don't "have the slightest clue". Wonderful argument there. Not at all fallacious.
Rendering to a TV and a Upad is exactly the same as rendering to two TV outs. It generates the pixels in the memory of the GPU, and then sends it to the display. The overhead is the same irrespective of where the pixels are being sent, and depends only on the generation of the pixels themselves.
The computational load involved for four-player splitscreen on a 1080p TV is exactly the same as one would have with three Upads plus the TV screen if each one had a resolution of 540p (meaning, duplicating the pixels to take up double the size on the TV) - note that I'm not mentioning AA, etc, on the TV. The only difference is the direction the data is sent in once the GPU has generated it. The only way the CPU would have to do more work is if the pixel data gets routed through the CPU on the way to the wireless interface - I'm assuming that they haven't set it up that way.
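To sanity-check that splitscreen comparison with actual pixel counts (a toy illustration in Python; the resolutions are just the standard 1080p/540p figures, nothing WiiU-specific):

```python
# Four quarter-resolution (960x540) views add up to exactly the
# pixel count of a single 1080p frame, so the fill cost of the
# two setups is the same to a first approximation.
tv_1080p = 1920 * 1080            # one full-screen frame
view_540p = 960 * 540             # one splitscreen quadrant / Upad view

print(tv_1080p)                   # 2073600
print(4 * view_540p)              # 2073600
print(4 * view_540p == tv_1080p)  # True
```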
What matters isn't the displays, but what is being shown on them. In the context of 3D graphics, it is the number of "cameras" being used in the scene(s), because each camera needs to loop over the polygons in the scene(s).
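That point can be sketched as a toy cost model (hypothetical Python with made-up numbers; the only idea is that per-frame work scales with cameras and scene complexity, not with how many physical displays receive the output):

```python
def frame_cost(cameras, polygons_per_camera):
    # Toy model: each camera traverses the scene's polygons once,
    # so per-frame work is cameras x polygons. Which display the
    # finished pixels are sent to doesn't appear in the formula.
    return cameras * polygons_per_camera

scene = 100_000  # made-up polygon count

mirrored = frame_cost(1, scene)   # one view cloned to TV and Upad
two_views = frame_cost(2, scene)  # TV and Upad show different views

print(two_views == 2 * mirrored)  # True
```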
And as I pointed out, adding a CPU and/or GPU to the Upad would only increase the latency as the system has to send the extra data to the Upad, and then the Upad needs to generate the pixels. By having the Upad as purely a display (not counting the inputs), rather than a separate computer, they can minimise latency.
Now, one might argue that having multiple GPUs in the console itself would help to reduce the problem - by putting the extra GPUs in the console, the data can be accessed directly rather than having to be sent across, and each "camera" would be able to be generated in hardware parallel. A CPU for each GPU would then work reasonably well together, in theory. And this would also eliminate the problem of extra Upads costing more money (although the cost would be shifted to the console itself, which might be just as bad).
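Extending the same toy model (again, made-up numbers, purely illustrative): splitting the camera passes across multiple GPUs can cut frame time, but each extra GPU carries a fixed synchronisation cost, which is the overhead trade-off being argued about here.

```python
import math

def frame_time_ms(cameras, per_camera_ms, gpus, sync_ms):
    # Toy model: camera passes are distributed across GPUs and run
    # in parallel; each extra GPU adds a fixed sync/transfer cost.
    passes = math.ceil(cameras / gpus)
    return passes * per_camera_ms + (gpus - 1) * sync_ms

print(frame_time_ms(4, 4.0, 1, 2.0))  # 16.0 (single GPU, four passes)
print(frame_time_ms(4, 4.0, 2, 2.0))  # 10.0 (dual GPU helps here)
print(frame_time_ms(1, 4.0, 2, 2.0))  # 6.0  (worse than 4.0 on one GPU)
```

With only one camera the second GPU is pure overhead, which is roughly the objection to "just add GPUs".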
But then, the system has a multicore CPU as it is, and a sufficiently strong GPU should be able to handle the extra output data... rumour has it that the GPU is quite advanced, relatively speaking (relative to current console GPUs).
Given that I've described things in detail, and your best response is "you're wrong, that gaming media guy is right", I'd suggest that you are the one who doesn't seem to actually know what he's talking about. You haven't pointed out a single flaw in my argument.
This guy really doesn't seem like he knows what he is talking about...
Sounds to me like the guy gets paid to talk.. and talk he did.
Didn't seem to know much about games or gaming technology. He is the type of tool that prevents innovative games or new IP projects from being undertaken.
Why are most of you thinking black and white?
There is SOME point to what he says. Rendering the graphics of a single TV screen of course requires less computing than rendering the same TV screen graphics PLUS the graphics of two additional screens.
It is possible to think of scenarios where this could be a bottleneck, but in practice, this is probably seldom going to be much of a problem. As others have already pointed out, the tablet screens will often only be used to display menus, maps, inventory etc., simple 2D graphics that require hardly any computation. Furthermore, most games will probably support only one WiiU-Controller anyway.
Personally, I am sceptical about the WiiU. But I believe that in a few years, when we discuss the problems that stopped the WiiU from being a bigger success, we will be talking more about the somewhat limited appeal of asymmetrical gaming than about this.