
Nintendo - NX power

Pemalite said:

Of course. But that is because that is Intel's bread-and-butter.
However, if you look at the Pentium 3's evolution from Katmai to Coppermine (on-die L2 cache, faster FSB, etc.) to Tualatin (larger L2 cache, faster FSB), the steps look like only minor improvements on paper, but there was a lot of reworking to make the most of each new process node.

Gecko and Broadway are based on the same core, yes, but they are different.

For example Gecko is based upon the PowerPC 750CXe whilst Broadway is based upon PowerPC 750CL.
The differences at a high level are insignificant, just like the difference between Katmai and Coppermine, or Barton and Thorton...
However, Broadway did gain higher core clocks, a faster system bus, improved prefetching and newer instructions for graphics-related tasks; those instructions are actually similar to the ones found in Gecko, which had them added on top of the stock 750CXe.

Now the jump between Broadway and Espresso is significantly larger. It has to be to enable a multi-core design, but it still shares the same bed as the other chips.

Broadway got new instructions over Gecko?! I knew that Gecko got some extra instructions over the 750CXe at Nintendo's request, but I didn't see any new instructions for Broadway in those documents ...

Comparing Appendix A in the Gecko document against Appendix A in the PPC 750CL document, I didn't find ANY difference when it came to instruction sets. Anything not listed in Appendix A of one document was most likely just in Appendix B, or in a different order. Unless IBM or I made a huge oversight, Broadway isn't an improvement ISA-wise ...

I haven't taken a look at the instruction latencies; Broadway might have had an improvement over Gecko there, such as better division logic to execute divides faster, but if I had to place my money right now I don't think that's likely ...
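The Appendix A comparison described above can be mechanized as a simple set difference over mnemonics. A minimal sketch — the short lists here are illustrative stand-ins, NOT the actual tables from the Gecko and 750CL manuals:

```python
# Compare two instruction-set listings by mnemonic (set difference).
# NOTE: these lists are hypothetical placeholders; a real comparison
# would paste in the full Appendix A tables from each manual.
gecko_isa = {"add", "mulli", "fdiv", "ps_add", "ps_madd", "psq_l", "psq_st"}
broadway_isa = {"add", "mulli", "fdiv", "ps_add", "ps_madd", "psq_l", "psq_st"}

only_in_broadway = broadway_isa - gecko_isa
only_in_gecko = gecko_isa - broadway_isa

print(sorted(only_in_broadway))  # [] -> no ISA additions found
print(sorted(only_in_gecko))     # []
```

With identical listings both differences come back empty, which is exactly the "no new instructions" result described above.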

Pemalite said:

Wii U's Espresso is 15 GFLOPS double precision, if I remember correctly. The GPU is only 11x faster in floats.

The Xbox One, however... has a 35 GFLOPS double-precision CPU, which means the GPU is roughly 34x faster.
It will do 110 GFLOPS single precision on the CPU, which means its GPU is roughly 11x faster.

However, where Jaguar kicks it into another gear is with SIMD/AVX, and where its various technologies come into play, like branch prediction, which gives Jaguar a massive efficiency edge.

Yep, in heavy-branching scenarios, Espresso should punch well above its weight thanks to that stupidly short pipeline.

The Espresso is 14.8 GFLOPS for single-precision floats and 17.4 GFLOPS for double precision. The GPU is 11.9x faster in single precision, and it most likely does not support double precision at all, going by AMD's past tradition of disabling that capability on lower-end TeraScale parts ...

The X1's cat cores are capable of 112 GFLOPS single precision and 66 GFLOPS double precision. Its GPU has 1.3 TFLOPS of single-precision performance, so it's 11.6x faster than the CPU in that respect, and 81 GFLOPS when it comes to double precision ...
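The peak-FLOPS figures being traded here all come from the same formula: cores × clock × FLOPs issued per cycle. A sketch using commonly cited (but here assumed) clocks and per-cycle throughputs:

```python
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak = cores * clock (GHz) * FLOPs issued per cycle per core."""
    return cores * clock_ghz * flops_per_cycle

# Espresso: 3 cores @ ~1.24 GHz, paired singles -> 4 SP FLOPs/cycle (assumed)
espresso_sp = peak_gflops(3, 1.24, 4)   # ~14.9 GFLOPS
# Xbox One Jaguar: 8 cores @ 1.75 GHz, 128-bit FP -> 8 SP FLOPs/cycle (assumed)
jaguar_sp = peak_gflops(8, 1.75, 8)     # 112 GFLOPS

# GPU-to-CPU ratios quoted in the post
print(round(176 / espresso_sp, 1))   # Latte ~176 GFLOPS -> ~11.8x
print(round(1310 / jaguar_sp, 1))    # XB1 GPU ~1.31 TFLOPS -> ~11.7x
```

The ratios land within a tenth of the 11.9x and 11.6x figures above; small differences come down to which exact clocks you plug in.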

Pemalite said:

Geometrically, games are still simple and would still benefit from even a simple tessellator such as "TruForm".

Between the GeForce FX and GeForce 200 series (2003 and 2008 respectively, a 5-year gap), hardware geometry performance only increased by about 3x, whereas shader performance increased by 150x.
Then the GeForce 400 series (released in 2010, 2 years later) boosted geometry by 8x, and it has continued to increase from there.
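Taking those figures at face value, the annualized growth rates make the gap explicit:

```python
# Annualized growth factor: total_factor ** (1 / years),
# using the 3x / 150x / 8x figures quoted in the post.
geometry_03_08 = 3 ** (1 / 5)    # ~1.25x per year, GeForce FX -> 200
shader_03_08 = 150 ** (1 / 5)    # ~2.72x per year over the same span
geometry_08_10 = 8 ** (1 / 2)    # ~2.83x per year once GF400 added tessellation

print(round(geometry_03_08, 2), round(shader_03_08, 2), round(geometry_08_10, 2))
```

In other words, geometry throughput grew at roughly a tenth of the shader rate for five years, then abruptly caught up to shader-like growth once dedicated tessellation hardware arrived.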

Kinda' puts things in perspective.

I wouldn't say that games are geometrically simple anymore, with games approaching 16 pixels per quad, and the new game Dreams by Media Molecule shows that MICROPOLYGON rendering is viable on current-gen consoles!

The bottlenecks of last-generation games do not apply to this generation. Shader programs these days aren't trivial compared to the 6th generation's. When you're shading twice the amount of geometry you will most likely need double the shading power, and Latte is already starved for shading power as it is, so how exactly would the tessellator go about alleviating that?
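One concrete reason small primitives get expensive: GPUs shade pixels in 2x2 quads, so a tiny primitive lights up helper lanes that do no useful work. A toy model — axis-aligned pixel rectangles standing in for triangles, which keeps the quad math exact:

```python
import math

def quad_shading_lanes(x, y, w, h):
    """Lanes invoked when a w*h pixel rectangle at (x, y) is shaded in 2x2 quads."""
    quads_x = math.ceil((x + w) / 2) - math.floor(x / 2)
    quads_y = math.ceil((y + h) / 2) - math.floor(y / 2)
    return quads_x * quads_y * 4  # 4 shading lanes per quad

pixels = 4 * 4  # a 16-pixel primitive, roughly the size discussed above
aligned = quad_shading_lanes(0, 0, 4, 4)  # 16 lanes -> 100% useful work
offset = quad_shading_lanes(1, 1, 4, 4)   # 36 lanes -> ~44% useful work
print(pixels / aligned, round(pixels / offset, 2))
```

Even this best-vs-worst-case toy shows a 16-pixel primitive wasting over half its shading lanes when it straddles quad boundaries, which is why micropolygon-scale geometry multiplies shading cost rather than just adding vertex work.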

It's not just about the tessellator! You should consider EVERYTHING in the system ... 

Pemalite said:

I actually ran some benchmarks on a Radeon 6450 and 6570 on VGChartz at one point, comparing geometry and general image quality between those cards. Of course you couldn't have tessellation dialed up at 1440p with everything maxed at 60 fps, but there was still a marked increase in image quality if you kept things on medium at 720p and 30 fps.

I do agree that the jump isn't massive over the HD twins, but with the Wii U you can at least see newer and more effects in a few games, whereas you almost needed a magnifying glass to tell the difference between the PlayStation 3 and Xbox 360.

An HD 6450 in Unigine Heaven pulled off 8 fps at 1200x800 on low settings. There's no way that turd can hold 30 fps on medium at 720p ...

What can the Wii U do that the HD twins couldn't at nearly the same performance?