Pemalite said:
Soundwave said:
Assuming the home docked version of the NX can run at full clocks/cores and punch at 625 Nvidia GFLOPS with 50 GB/s memory bandwidth and all the bandwidth-saving techniques Nvidia has ...
Do you think PS4/XB1 ports would be possible at 1280x720? That would be 921,600 pixels to render instead of 2,073,600 pixels of 1080p.
Let's say, even on the CPU side, that Nintendo is using something close/equal to the Jaguar core setup of the current consoles.
Just theoretically. It's possible Nintendo wouldn't even use that but I'm pretty curious what a system under these conditions could pump out.
50 GB/s of memory bandwidth is quite a bit if you save 30-60% due to various Nvidia techniques and you're only rendering a bit under half the pixels, no? The PS4's effective bandwidth, from what I've heard, is only around 140 GB/s too (rather than the 176 GB/s on paper). If Nvidia can save that much on the bandwidth side, it could have been a huge reason why Nintendo chose Nvidia.
|
I think PlayStation 4 and Xbox One ports would be entirely possible at 720p.
But there will still be quality reductions even compared to the Xbox One versions; Tegra simply doesn't have the pixel or texture fillrate, or even the geometry performance, to hit the same level of fidelity. Maybe in a couple of years, when there is a push to 10nm chips.
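To put rough numbers on that scenario (the 625 GFLOPS, 50 GB/s and 30-60% savings figures are the thread's assumptions, not confirmed specs), here's a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope numbers based on the figures in this thread
# (625 GFLOPS, 50 GB/s and 30-60% bandwidth savings are assumptions, not confirmed specs).

pixels_1080p = 1920 * 1080   # 2,073,600
pixels_720p  = 1280 * 720    #   921,600
pixel_ratio  = pixels_720p / pixels_1080p
print(f"720p renders {pixel_ratio:.0%} of the pixels of 1080p")  # ~44%, a bit under half

raw_bandwidth_gbs = 50.0
for savings in (0.30, 0.60):
    # If compression/tiling removes X% of the memory traffic, 50 GB/s behaves
    # roughly like 50 / (1 - X) GB/s of "uncompressed" bandwidth.
    effective = raw_bandwidth_gbs / (1.0 - savings)
    print(f"{savings:.0%} savings -> ~{effective:.0f} GB/s effective")
# ~71 GB/s at 30% savings, ~125 GB/s at 60% -- still short of the PS4's ~140 GB/s
# real-world figure, but it only has to feed less than half the pixels.
```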
TheLastStarFighter said:
It could be a mid-tier, 2 TFLOP card or so. It could function like the setup for cars referenced above, or like an Alienware laptop PC that shuts down the mobile card when connected to the SCD at home.
|
nVidia usually charges several hundred AUD for a mid-tier GPU. Not to mention Nintendo would still need to include memory, MOSFETs/VRM/general power delivery, cooling and a case for that GPU.
And then Nintendo needs to provide the (what could be a fairly expensive) Tegra SoC, several gigabytes of memory for that, its own VRM/MOSFETs/power delivery, the screen and supporting chips, a battery and the case... Consoles are cost-sensitive devices; you simply can't have everything.
We need to remember that nVidia's multi-GPU technology simply isn't the most flexible multi-GPU technology on Earth... The memory pools don't get combined; they get duplicated and need to stay separate. The faster GPU will often run at the same speed as the slowest GPU. And that ignores the issue that SLI requires the use of the same chip to begin with, unless you are going to dedicate one GPU purely to physics calculations or some other compute task. (Perhaps megatexturing?)
You also lose the ability to do vsync and triple buffering in Alternate Frame Rendering mode, and you can get microstutter and frame latency issues.
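As a loose illustration of those AFR points (a toy model, not Nvidia's actual driver behaviour):

```python
# Toy model of Alternate Frame Rendering (AFR) -- just an illustration of the two
# points above: resources are duplicated per GPU, and frames alternate between GPUs.

class GPU:
    def __init__(self, name):
        self.name = name
        self.vram = {}          # each GPU keeps its own copy of every resource

    def upload(self, resource, data):
        self.vram[resource] = data   # duplicated, not pooled: 2x 4GB is still only 4GB usable

    def render(self, frame):
        return f"{self.name} rendered frame {frame}"

gpus = [GPU("GPU0"), GPU("GPU1")]

# Every texture/buffer has to live in BOTH memory pools.
for gpu in gpus:
    gpu.upload("megatexture", b"...")

# Frames are dealt out round-robin; frame N+1 starts before frame N is displayed,
# which is where the microstutter and vsync/triple-buffering restrictions come from.
for frame in range(6):
    print(gpus[frame % len(gpus)].render(frame))
```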
It would make more sense for nVidia to combine two Tegra SoCs, to be honest.
And having the mobile chip "switch off" when docked is just a waste of potential... Especially if you are a user who would never take the device out of the dock.
|
What do you think two Tegra X2s working in unison could accomplish (base NX + a hypothetical Supplemental Compute Device with a second SoC)?
I'm kinda just curious to see how far a company could take these little chips; it's sorta fascinating.
I think the SCD would simply be a second Tegra X2, or maybe the same chip with more CUDA cores or something. Nintendo won't want to pay for an entirely separate semi-custom design, and by putting the same GPU in the SCD it could lower its costs by increasing mass production of the same chip.
Actually, it is kinda interesting that the giant Drive PX 2 board already utilizes two Parker-based Tegra SoCs in tandem. I wonder if that automotive work has forced Nvidia to get more comfortable with multi-processor setups, and maybe that's also where Nintendo's idea for the Supplemental Compute Device comes from.
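Just to anchor that question in rough numbers, reusing the 625 GFLOPS figure from earlier in the thread (an assumption, not a confirmed spec) and some guessed scaling factors:

```python
# Rough ceiling for "two Tegra X2-class chips in tandem", reusing the 625 GFLOPS
# figure from earlier in the thread (an assumption, not a confirmed spec).

single_chip_gflops = 625.0
# How much of the second chip's throughput actually shows up is anyone's guess;
# these scaling factors are illustrative only.
scaling = {"perfect (never happens)": 1.0, "good AFR scaling": 0.8, "poor scaling": 0.5}

for label, factor in scaling.items():
    combined = single_chip_gflops * (1.0 + factor)
    print(f"{label:>24}: ~{combined:.0f} GFLOPS")
# Somewhere between ~940 and ~1250 GFLOPS -- still well under the PS4's ~1840 GFLOPS,
# and that's before counting the duplicated memory and the cost of a second SoC.
```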