drkohler said:
Aielyn said:
And who is this "Takahashi" from "VentureBeat"? ...

In short, don't pay any attention. This guy is even lower on the hierarchy of relevant opinions than analysts...

For somebody who doesn't seem to have the slightest clue about the technologies involved - judging from your "explanation" - you do sound mighty convinced of yourself.

There is a significant bottleneck involved when rendering multiple screens on a single CPU <-> single GPU path, but that complexity is too broad to discuss here. Unfortunately, this time the analyst is completely correct in his assumptions and you are not.

I really do love how you assert that I don't "have the slightest clue". Wonderful argument there. Not at all fallacious.

Rendering to a TV and a Upad is exactly the same as rendering to two TV outputs. The GPU generates the pixels in its own memory and then sends them to the display. The overhead is the same irrespective of where the pixels end up; it depends only on generating the pixels themselves.
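
To sketch what I mean in rough pseudocode (a toy model of how I'm assuming the pipeline is wired - the function names here are made up, not any actual API):

```python
# Sketch: the GPU renders a frame into its own memory either way;
# only the routing of the finished pixels differs. All names are hypothetical.

def render_frame(camera, scene):
    """Stand-in for the GPU generating a frame's pixels - the expensive part."""
    return f"pixels({camera}, {scene})"

def send_to_hdmi(frame):
    """Scan the finished frame out to the TV."""
    pass

def send_to_wireless(frame):
    """Compress/stream the finished frame to the Upad."""
    pass

tv_frame = render_frame("tv_camera", "scene")
pad_frame = render_frame("pad_camera", "scene")

send_to_hdmi(tv_frame)       # same generation cost...
send_to_wireless(pad_frame)  # ...different destination for the finished pixels
```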

The computational load for four-player splitscreen on a 1080p screen is exactly the same as for three Upads plus the TV if each view is rendered at 540p (with the TV view's pixels simply doubled to fill the whole screen) - and note that I'm not even counting AA, etc., on the TV. The only difference is where the data is sent once the GPU has generated it. The only way the CPU has to do more work is if the pixel data gets routed through the CPU on the way to the wireless interface - I'm assuming they haven't set it up that way.
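
To put rough numbers on that (purely illustrative, assuming a 1920x1080 TV and 960x540 per view):

```python
# Back-of-envelope pixel counts (assumed resolutions: 1080p TV, 540p per view).

TV_W, TV_H = 1920, 1080
VIEW_W, VIEW_H = 960, 540   # one quadrant of the 1080p screen, i.e. "540p"

# Four-player splitscreen: four quadrants fill the whole 1080p frame.
splitscreen_pixels = 4 * (VIEW_W * VIEW_H)

# One TV view (540p pixel-doubled to fill the screen) plus three 540p Upads:
# the GPU still only generates four 540p images' worth of unique pixels.
pads_plus_tv_pixels = 4 * (VIEW_W * VIEW_H)

print(splitscreen_pixels, pads_plus_tv_pixels, TV_W * TV_H)
# 2073600 2073600 2073600 - identical fill workload either way
```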

What matters isn't the displays, but what is being shown on them. In the context of 3D graphics, it is the number of "cameras" being used in the scene(s), because each camera needs to loop over the polygons in the scene(s).
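
A crude way to picture that (a toy cost model with a made-up scene size - real engines cull, batch, instance, and so on):

```python
# Toy model: per-frame geometry work scales with the number of cameras,
# because each camera has to traverse/transform the scene's polygons.

def frame_geometry_cost(num_cameras: int, polygons_in_scene: int) -> int:
    """Rough unit count of polygon-processing work per frame."""
    return num_cameras * polygons_in_scene

scene_polys = 500_000                        # invented scene size
print(frame_geometry_cost(4, scene_polys))   # four splitscreen cameras, one TV
print(frame_geometry_cost(4, scene_polys))   # one TV camera + three Upad cameras
# Same number either way: the displays don't matter, the cameras do.
```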

And as I pointed out, adding a CPU and/or GPU to the Upad would only increase latency, because the console would have to send the extra data to the Upad, and the Upad would then have to generate the pixels itself. By keeping the Upad purely a display (inputs aside), rather than a separate computer, they can minimise latency.
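
Here's a rough latency budget to illustrate the point (every number below is invented purely for illustration):

```python
# Toy latency comparison; all figures are assumptions, not measurements.

# Upad as a pure display: console renders the view, then streams finished pixels.
render_on_console = 8    # ms to render the pad's view on the console GPU
encode_and_stream = 5    # ms to compress and wirelessly send the finished frame
display_only_path = render_on_console + encode_and_stream

# Upad with its own CPU/GPU: console ships the extra scene/state data across,
# and the pad then has to render it on smaller, battery-powered hardware.
send_scene_data = 6      # ms to transmit the extra data to the pad
render_on_pad = 12       # ms to render on the pad's own (weaker) GPU
smart_pad_path = send_scene_data + render_on_pad

print(display_only_path, smart_pad_path)   # 13 vs 18 - the extra hop adds up
```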

Now, one might argue that putting multiple GPUs in the console itself would help - with the extra GPUs in the console, the data can be accessed directly rather than having to be sent across, and each "camera" could be rendered in parallel in hardware. A CPU for each GPU would then, in theory, work reasonably well. This would also eliminate the problem of extra Upads costing more money (although the cost would just shift to the console itself, which might be just as bad).

But then, the system has a multicore CPU as it is, and a sufficiently strong GPU should be able to handle the extra output data... and rumour has it that the GPU is quite advanced relative to current console GPUs.

Given that I've described things in detail, and your best response is "you're wrong, that gaming media guy is right", I'd suggest that you are the one who doesn't seem to actually know what he's talking about. You haven't pointed out a single flaw in my argument.