| fatslob-:O said: Intel easily does way more to improve it's x86 architecture than IBM ever does for Nintendo's PowerPC 750 derivative and the modern core architecture is NOTHING like the P6 ... The Gecko and Broadway are literally identical, core for core aside from clocks and cache. If Marcan's words are to be believed, Espresso is just a higher clocked version of Broadway. That's a 10 year old CPU architecture without ANY sort of extensions ... |
Of course. But that's because x86 is Intel's bread and butter.
However, if you look at the Pentium III's evolution from Katmai to Coppermine (on-die L2 cache, faster FSB, etc.) to Tualatin (larger L2 cache, faster FSB), the steps look like only minor improvements on paper, yet there was a lot of reworking under the hood to make the most of each new process node.
Gecko and Broadway are based on the same core, yes, but they are different.
For example, Gecko is based upon the PowerPC 750CXe whilst Broadway is based upon the PowerPC 750CL.
The differences at a high level are insignificant, just like the difference between Katmai and Coppermine, or Barton and Thorton...
However, Broadway did gain higher core and bus clocks, improved prefetching and newer instructions for graphics-related tasks; those instructions are actually similar to the paired-single ones found in Gecko, which were added on top of the stock 750CXe.
Now, the jump between Broadway and Espresso is significantly larger. It has to be, to enable a multi-core design, but it still shares the same DNA as the other chips.
| fatslob-:O said: I think it is the X1 that has the best CPU to GPU performance ratio. In terms of floating point performance, it is the X1 that has the least skewed ratio when considering the Latte has 176 GFlops for a fair estimate. In terms of integer performance, it is also the X1 since Nintendo's PPC 750 derivative is weaker in this aspect than it's floating point performance. For branching, I think this is where Espresso may have an advantage but AMD's VLIW5 architecture was notorious for been poor in that aspect. With GCN you can actually write highly performant uber-shader code just like other modern GPU architectures and it even supports indirect branching too which further puts VLIW5 to shame. You can very much get more CPU performance on the HD twins in other ways like programming a GPU like GCN as if it were a CPU! Afterall the only thing special to a GPU are it's fixed function units ... |
Wii U's Espresso is around 15 GFLOPs, double precision if I remember correctly, so the GPU is only about 11x faster in floats.
The Xbox One however... has a 35 GFLOP double-precision CPU, which means its GPU is roughly 34x faster.
It will do 110 GFLOPs single precision on the CPU, which means its GPU is roughly 11x faster there.
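Just to put rough numbers on those ratios, here's a quick back-of-the-envelope sketch in Python. The Xbox One GPU figure is my own assumption of the commonly quoted ~1.31 TFLOP spec rather than something from this thread, so the exact ratios shift a little depending on which GPU number you prefer:

# GPU:CPU floating point ratios from the figures above.
wiiu_cpu_gflops = 15    # Espresso, as quoted above
wiiu_gpu_gflops = 176   # Latte estimate from the quote
xb1_cpu_dp      = 35    # Jaguar double precision, as quoted above
xb1_cpu_sp      = 110   # Jaguar single precision, as quoted above
xb1_gpu_gflops  = 1310  # assumption: the commonly quoted ~1.31 TFLOP spec

print(f"Wii U GPU:CPU    ~{wiiu_gpu_gflops / wiiu_cpu_gflops:.1f}x")
print(f"XB1 GPU:CPU (DP) ~{xb1_gpu_gflops / xb1_cpu_dp:.1f}x")
print(f"XB1 GPU:CPU (SP) ~{xb1_gpu_gflops / xb1_cpu_sp:.1f}x")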
However, where Jaguar kicks it into another gear is with SIMD/AVX, and that's where its various technologies, like its much beefier branch prediction, come into play and give Jaguar a massive efficiency edge.
Yep, in heavy-branching scenarios, Espresso should punch well above its weight thanks to that stupidly short pipeline.
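For anyone wondering where those ballpark CPU figures tend to come from, the usual napkin math is cores x clock x FLOPs per cycle. A hedged sketch; the per-cycle figures are my assumptions about each core's FP width, not official numbers:

# Peak FLOPs = cores * clock (GHz) * FLOPs per cycle per core.
# Assumptions: Jaguar's 128-bit FP unit issues a 4-wide SP add plus a 4-wide
# SP multiply each cycle (8 SP FLOPs/cycle); Espresso's paired-single unit
# does a 2-wide fused multiply-add each cycle (4 FLOPs/cycle, single precision).
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

print(f"XB1 Jaguar (SP): ~{peak_gflops(8, 1.75, 8):.0f} GFLOPs")   # ~112, close to the 110 above
print(f"Espresso       : ~{peak_gflops(3, 1.243, 4):.0f} GFLOPs")  # ~15, matching the figure above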
| fatslob-:O said: For the majority of last gen it was FXAA, SMAA only started getting interesting on current gen ... |
FXAA is a variation of morphological AA. :P It's basically nVidia's marketing terminology for their take on it, and SMAA is in turn a refinement of MLAA/FXAA.
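They all share the same basic idea: run over the finished image, find high-contrast edges via luma, and blend along them. A toy Python/NumPy sketch of that shared core, purely to illustrate the principle rather than any of the actual shaders:

import numpy as np

# Toy morphological-style AA: detect luma edges and blend across them.
# This only illustrates the idea shared by MLAA/FXAA/SMAA, not the real algorithms.
def toy_post_aa(rgb, threshold=0.1):
    luma = rgb @ np.array([0.299, 0.587, 0.114])  # perceptual luma per pixel
    out = rgb.copy()
    for y in range(1, rgb.shape[0] - 1):
        for x in range(1, rgb.shape[1] - 1):
            neighbours = [luma[y-1, x], luma[y+1, x], luma[y, x-1], luma[y, x+1]]
            if max(neighbours) - min(neighbours) > threshold:  # edge found
                # Crude blend of the pixel with its neighbours to soften the edge.
                out[y, x] = 0.5 * rgb[y, x] + 0.125 * (
                    rgb[y-1, x] + rgb[y+1, x] + rgb[y, x-1] + rgb[y, x+1])
    return out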
| fatslob-:O said: It definitely has some differences with Terascale but Xenos is most certainly VLIW ... |
I know, I did confirm that. But it doesn't say much when the Radeon 9700 Pro, released back in 2002, is also VLIW.
| fatslob-:O said: Games as well as hardware used to be very different back then. Most of the meshes in that time were only made up of hundreds of vertices and the truform was only built for small amounts of data expansion so that devs wouldn't go around abusing it too much. Games today are compute limited and increasing the amount of fragments will have some large impacts on shading and rasterization performance ... |
Geometrically, games are still simple and would still benefit from a simple tessellator such as TruForm.
Between the GeForce FX and GeForce 200 series (2003 and 2008 respectively, a 5-year gap), geometry performance only increased by about 3x, whereas shader performance increased by around 150x.
Then the GeForce 400 series (released in 2010, 2 years later) boosted geometry performance by 8x, and it has continued to increase from there.
Kinda puts things in perspective.
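Breaking those rough figures down into average yearly growth makes the gap even starker (a quick sketch using only the numbers above):

# Average yearly growth factor implied by the rough figures above.
def yearly_growth(total_factor, years):
    return total_factor ** (1 / years)

print(f"Geometry, FX -> GTX 200 : ~{yearly_growth(3, 5):.2f}x per year")    # ~1.25x
print(f"Shaders,  FX -> GTX 200 : ~{yearly_growth(150, 5):.2f}x per year")  # ~2.72x
print(f"Geometry, GTX 200 -> 400: ~{yearly_growth(8, 2):.2f}x per year")    # ~2.83x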
| fatslob-:O said: We did wait for more software and you'd be hard-pressed to find anyone arguing that the WII U has a definitive or absolute edge over the sub-HD twins in performance ... An HD 6450 is pathetic even in Unigine Heaven ... |
I actually did run some benchmarks on a Radeon 6450 and 6570 on vgchartz at one point, comparing geometry and general image quality between those cards. Of course you couldn't have tessellation dialled up at 1440P with everything maxed at 60fps, but there was still a marked increase in image quality if you kept things at medium settings, 720P and 30fps.
I do agree that the jump over the HD twins isn't massive, but on the Wii U you can at least see newer and more effects in a few games, whereas you almost needed a magnifying glass to tell the difference between the PlayStation 3 and Xbox 360.

www.youtube.com/@Pemalite