Pemalite said:
JRPGfan said: Does anyone worry about the 50 GB/s memory bandwidth of this new Tegra X2? It just seems low compared to the PS4's 176 GB/s. |
It will be well suited to 720P.
But you need to keep in mind that bandwidth numbers cannot be directly compared, as Tegra employs tile-based rendering and has a slew of compression tricks to make better use of that 50GB/s.
But this is also a chip that isn't going to compete with the Playstation 4 or Xbox One anyway.
Miyamotoo said:
Not at all, Xbox One has 68 GB/s while Wii U has only 12.8 GB/s, so basically Tegra X2 has roughly 4x the memory bandwidth of the Wii U.
This is one more reason why most likely NX will have Tegra X2 and not X1; X1 has only 25 GB/s memory bandwidth. Actually, X2's memory bandwidth will probably be the biggest gain for the NX in comparison to the X1.
|
But we need to put things into perspective; it's useless grabbing various numbers and comparing the chips on that alone, it's not that black and white.
For example... With Maxwell (Tegra X1), nVidia introduced Delta Colour Compression and was able to get a 17-29% increase in effective bandwidth. With Pascal, nVidia further optimised that technology and managed to gain another ~20%.
So their usable bandwidth would likely be (I'll use 20% for both) roughly 30GB/s for the Maxwell-based Tegra X1 and 72GB/s for the Pascal-based Tegra X2, since Pascal's gain stacks on top of Maxwell's. Of course there is variance there and it can be higher or lower.
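To make the arithmetic explicit, here's a quick sketch of that estimate in Python. The ~20% per-generation compression gains are my rough assumption (nVidia's own figures vary by workload), so treat the outputs as ballpark numbers, not specs:

```python
# Rough effective-bandwidth estimate: raw bandwidth multiplied by each
# generation's compression gain. The ~20% figures are assumptions.

def effective_bandwidth(raw_gbps, compression_gains):
    """Apply each compression gain factor on top of the raw bandwidth."""
    effective = raw_gbps
    for gain in compression_gains:
        effective *= (1.0 + gain)
    return effective

tegra_x1 = effective_bandwidth(25.0, [0.20])        # Maxwell DCC only
tegra_x2 = effective_bandwidth(50.0, [0.20, 0.20])  # Maxwell DCC + Pascal's improvement

print(f"Tegra X1 (Maxwell): ~{tegra_x1:.0f} GB/s effective")  # ~30 GB/s
print(f"Tegra X2 (Pascal):  ~{tegra_x2:.0f} GB/s effective")  # ~72 GB/s
```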
Also, nVidia can make better use of its bandwidth thanks to its tile-based approach; there is simply less waste.
Obviously it still can't hold a candle to the Playstation 4... And the Xbox One still has a big edge thanks to the eSRAM, so it's still not black and white.
JEMC said: The Tegra Parker/X2 isn't much of an improvement over the X1 in terms of graphics power, and most of the gains in compute (GFlops) are likely due to the much stronger CPUs. In my opinion, if Nintendo goes with Nvidia, the X1 would be the better choice. That said, if Nintendo is serious about that Supplemental Compute Device, the beefier CPU of the X2 could come in handy to avoid some bottlenecks... but at that point memory bandwidth may be their biggest problem. |
Or GPU clocks; we know Pascal is a clocking monster due to nVidia reworking the chip to achieve higher clock speeds. Plus FinFET has some favourable power characteristics to push that home. :)
TheLastStarFighter said:
Looking at this link again, and seeing the setup of the Tegra X2 in the chart, it has me thinking that the NX portable could use a Tegra, while the SCD could potentially use one of the "unknown" Pascal cards for additional graphics processing. Essentially, 50% of the setup above. When going solo, the NX could operate with 650 (or less) GFLOPs that the Tegra could provide. But when docked, the system would have an additional non-mobile graphics card. No reason it would have to be a second Tegra. A different card would make much more sense, and could boost a docked NX to 3 or 4 TFLOPs.
|
I can see costs blowing out. nVidia isn't exactly known for being cheap.
Not only that, but nVidia's Multi-GPU technology has never been that flexible as far as I know.
Tenebrae77 said:
"So basically based on nothing"
Like everything you say.
"Shitty home performance given all things but at least it's a reasonable upgrade on the Wii U and not the GameCube to Wii all over again."
HAHAHAHAHAHAHA
750 GFLOPS is LESS of a jump from Wii U than Wii was from GC. Go home, you're drunk. It's so pathetic that you actually think Nintendo's next home console will not be a lot more powerful than PS4, or moderately more powerful than PS4, or equal to PS4, or weaker than PS4, or weaker than X1, or only a little stronger than Wii U (625-650 GFLOPS).
|
Based on what? GFLOPS? Please. There is more to GPUs and graphics than single-precision compute.
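For context on where figures like 650 or 750 GFLOPS even come from: it's just napkin math, shader cores × clock × 2 (one fused multiply-add per core per cycle), which is exactly why the number says so little on its own; it ignores bandwidth, architecture efficiency, fixed-function hardware and everything else. A tiny sketch, using commonly cited core counts and clocks as assumptions:

```python
# Napkin math for single-precision compute: GFLOPS = cores * clock (GHz) * 2 (FMA).
# The core counts and clocks below are commonly cited figures, not confirmed NX specs.

def fp32_gflops(cuda_cores, clock_ghz):
    return cuda_cores * clock_ghz * 2  # 2 FLOPs per core per cycle from fused multiply-add

print(fp32_gflops(256, 1.0))    # Tegra X1 (Maxwell): ~512 GFLOPS
print(fp32_gflops(256, 1.465))  # Tegra X2 (Pascal):  ~750 GFLOPS
```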
Miyamotoo said:
Well, to be fair, the Wii U also has eDRAM, which is similar to the eSRAM in the XB1.
|
The Wii U didn't have enough of the stuff. Pretty sure latency might have been higher than the Xbox One's eSRAM as well.
Besides, the Wii U's main limiters were its GPU and CPU.
It's like a water pipe... Think of memory bandwidth as the size of the pipe and the CPU and GPU as the amount of water flowing through it. If the CPU and GPU aren't big enough and can't fully utilise the size of the pipe, then the bigger pipe is wasted, isn't it?
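To put that analogy into toy numbers (everything here is invented purely to illustrate the point): whichever is smaller, what the bus can feed or what the CPU/GPU can actually consume, sets the ceiling.

```python
# Toy "water pipe" model: throughput is capped by the narrower of the two,
# the memory bus or the compute side's appetite. All numbers are made up.

def achievable_gbps(bus_gbps, compute_demand_gbps):
    return min(bus_gbps, compute_demand_gbps)

# Weak CPU/GPU on a wide bus: most of the pipe is wasted.
print(achievable_gbps(bus_gbps=176.0, compute_demand_gbps=40.0))  # -> 40.0
# Strong CPU/GPU on a narrow bus: the pipe starves it.
print(achievable_gbps(bus_gbps=12.8, compute_demand_gbps=40.0))   # -> 12.8
```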