bonzobanana said:


There are benchmarks for the CPU, and the Wii U comes out very badly compared to the 360 and PS3. As for the GPU, you have written much text on what the Wii U GPU has, so surely you know the exact GFLOPS figure to put us out of our misery. All I can see online is that it's a Radeon, but its full feature set and performance are unknown; it's on a poor fabrication process with many other parts integrated on the same silicon (including the Wii GPU), and it draws an absolutely tiny amount of power despite that being a power-hungry, cheap fabrication process. The best arguments I've seen have put the GPU at 176 GFLOPS when everything is taken into account, and allowing for a few generations of improvement over the 360 GPU, that is exactly how it is performing. However, I think we all agree this is still a big improvement on the 360 and PS3, but you are still left with the huge issue that the Wii U has limited CPU processing and poor main memory bandwidth.

 

"The best arguments I've seen have put the gpu at 176 gflops"

Amazing how many times you repeat this hot garbage.




The hardware of the portable like I said is more important than the console. I think people are looking at it all backwards in reality.

Even if the console matches/is slightly better than the PS4, it's not going to wow developers, they've seen this already with the Wii U (console several years late with the other two systems having a massive userbase lead).

The same thing will predictably happen with the NX console ... it will have a zero userbase next to the PS4, which will be at 45 million or more by then, and the XB1, which will be past 25-30 million, and the NX version of Madden NFL or Call of Duty will end up selling 1/5th of the other versions, maybe even less. Then the excuses from third parties will start about why they can't put as much effort into the NX version, or why next year's version won't be coming out for NX, or how they were "waiting to see" how other games did on NX and, since they didn't do great, they won't be supporting NX for now.

The strength of the portable is the key to the NX, because the PS4/XB1 are not portable and the portable side is by far Nintendo's strongest hardware side.

If the portable is very powerful they will get at minimum decent Japanese support even better than what the 3DS gets now. The 3DS misses out on so many games because even despite being by far the Japanese market leader, it can't run anything but pathetic PS2 level graphics. Developers have to make basically a completely different version for the 3DS, but if Nintendo could close that gap, then they are in business for some good support that way at least.

Western devs will treat Nintendo like shit no matter what. Too little, too late; they would need a machine as powerful as the PS5, and that would be expensive and wouldn't scale well at all with any portable tech, so a unified platform would be basically unworkable for shared games. Not to mention the development nightmare for Nintendo teams of having to jump past an entire generation when they're already having problems releasing Wii U games on time as is. The PORTABLE is the key to everything; the chipset should first be designed around the needs of the portable, and after that you can scale it up without thermal/power restrictions for the console. That's not as big of a deal.



The more I think about it, the more it seems like it will be a 2017 launch. I assume Nintendo will go with AMD and that the "NX" is a unified ecosystem of hardware and software between a handheld and a home console. With that said, I expect Nintendo to possibly go with the Zen architecture for the home console and the K12 architecture for the portable. CPU power is up in the air compared to the PS4/XOne since the number of cores, core-to-core improvements, and clock speed can vary. But GPU-wise, I'd imagine it can be around XOne level. I imagine it will retail around $299 or less if it turns out to lack a disc drive, separate gamepad, and maybe even a hard drive.




Solid-Stark said:

The more I think about it, the more it seems like it will be a 2017 launch. I assume Nintendo will go with AMD and that the "NX" is a unified ecosystem of hardware and software between a handheld and a home console. With that said, I expect Nintendo to possibly go with the Zen architecture for the home console and the K12 architecture for the portable. CPU power is up in the air compared to the PS4/XOne since the number of cores, core-to-core improvements, and clock speed can vary. But GPU-wise, I'd imagine it can be around XOne level. I imagine it will retail around $299 or less if it turns out to lack a disc drive, separate gamepad, and maybe even a hard drive.


I doubt it. I think they will go with a mobile-centric design, having studied what modern mobile chips are doing. AMD also has experience from the Mullins/Beema tech; they just haven't followed through with it because PowerVR/Tegra/Samsung pretty much own the mobile market, but I'm sure if Nintendo asked they could take their R&D on that end and expand it.

Even on the Wii U, Nintendo did not really use an off-the-shelf AMD GPU. 

The console isn't really important, to be honest, because it can be hooked up to wall power. Anyone can give Nintendo roughly the same amount of power; PS4-level power is nothing special these days, and even a company like PowerVR could take a mobile-centric processor and scale it up to be in the same neighborhood.

It's all about the portable, IMO. That's far trickier to engineer: how far can you push that envelope in a battery-powered, restricted (likely fan-less) form factor?

I think a 3:1 or 4:1 ratio for the portable to home version is ideal too; any more than that and it becomes an issue where the developer is basically going to have to start rebuilding assets or rework an engine from scratch, which nullifies the whole point of a unified platform.

That's going to be the engineering trick with the NX ... anyone can give you an NX-class GPU, whether it's Nvidia, AMD, PowerVR, Qualcomm, whoever. That's easy peasy. Can they get a portable that can realistically function in a unified platform setup where developers aren't forced to dramatically downscale their game engines to run on the portable? *That* is going to be the real million dollar question.



Soundwave said:

The hardware of the portable like I said is more important than the console. I think people are looking at it all backwards in reality.

Even if the console matches/is slightly better than the PS4, it's not going to wow developers, they've seen this already with the Wii U (console several years late with the other two systems having a massive userbase lead).

The same thing will predictably happen with the NX console ... it will have a zero userbase next to the PS4, which will be at 45 million or more by then, and the XB1, which will be past 25-30 million, and the NX version of Madden NFL or Call of Duty will end up selling 1/5th of the other versions, maybe even less. Then the excuses from third parties will start about why they can't put as much effort into the NX version, or why next year's version won't be coming out for NX, or how they were "waiting to see" how other games did on NX and, since they didn't do great, they won't be supporting NX for now.

The strength of the portable is the key to the NX, because the PS4/XB1 are not portable and the portable side is by far Nintendo's strongest hardware side.

If the portable is very powerful they will get at minimum decent Japanese support even better than what the 3DS gets now. The 3DS misses out on so many games because even despite being by far the Japanese market leader, it can't run anything but pathetic PS2 level graphics. Developers have to make basically a completely different version for the 3DS, but if Nintendo could close that gap, then they are in business for some good support that way at least.

Western devs will treat Nintendo like shit no matter what. Too little, too late; they would need a machine as powerful as the PS5, and that would be expensive and wouldn't scale well at all with any portable tech, so a unified platform would be basically unworkable for shared games. Not to mention the development nightmare for Nintendo teams of having to jump past an entire generation when they're already having problems releasing Wii U games on time as is. The PORTABLE is the key to everything; the chipset should first be designed around the needs of the portable, and after that you can scale it up without thermal/power restrictions for the console. That's not as big of a deal.

Ya, I completely agree. I have yet to hear a compelling argument for how releasing a PS4+ actually helps Nintendo. Simply getting ports of games on PS/XB won't cause owners of those consoles to move over to Nintendo; there would have to be some sort of amazing incentive for them to jump ship.




zorg1000 said:
Soundwave said:

The hardware of the portable like I said is more important than the console. I think people are looking at it all backwards in reality.

Even if the console matches/is slightly better than the PS4, it's not going to wow developers, they've seen this already with the Wii U (console several years late with the other two systems having a massive userbase lead).

The same thing will predictably happen with the NX console ... it will have a zero userbase next to the PS4, which will be at 45 million or more by then, and the XB1, which will be past 25-30 million, and the NX version of Madden NFL or Call of Duty will end up selling 1/5th of the other versions, maybe even less. Then the excuses from third parties will start about why they can't put as much effort into the NX version, or why next year's version won't be coming out for NX, or how they were "waiting to see" how other games did on NX and, since they didn't do great, they won't be supporting NX for now.

The strength of the portable is the key to the NX, because the PS4/XB1 are not portable and the portable side is by far Nintendo's strongest hardware side.

If the portable is very powerful they will get at minimum decent Japanese support even better than what the 3DS gets now. The 3DS misses out on so many games because even despite being by far the Japanese market leader, it can't run anything but pathetic PS2 level graphics. Developers have to make basically a completely different version for the 3DS, but if Nintendo could close that gap, then they are in business for some good support that way at least.

Western devs will treat Nintendo like shit no matter what. Too little, too late; they would need a machine as powerful as the PS5, and that would be expensive and wouldn't scale well at all with any portable tech, so a unified platform would be basically unworkable for shared games. Not to mention the development nightmare for Nintendo teams of having to jump past an entire generation when they're already having problems releasing Wii U games on time as is. The PORTABLE is the key to everything; the chipset should first be designed around the needs of the portable, and after that you can scale it up without thermal/power restrictions for the console. That's not as big of a deal.

Ya, I completely agree. I have yet to hear a compelling argument for how releasing a PS4+ actually helps Nintendo. Simply getting ports of games on PS/XB won't cause owners of those consoles to move over to Nintendo; there would have to be some sort of amazing incentive for them to jump ship.

Well I'm kind of pragmatic about this. 

The Nintendo fans who think that Nintendo simply having equal/slightly better hardware is suddenly enough for the clock to turn back to 1991 and for Nintendo to regain their console dominance with all the third parties behind them are likely not being realistic. That's never going to happen.

On the other hand ... I am becoming convinced that a portable that could approximate PS4/XB1 games at, say, 1/4 of the resolution, and thus require things like 1/4 of the memory bandwidth ... is possible.
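(As a rough back-of-the-envelope illustration of the "1/4 of the resolution" point, here is a minimal sketch assuming a 1080p docked target and a quarter-pixel-count 960x540 handheld target; the resolutions are illustrative assumptions, not anything confirmed.)

```python
# Quarter-resolution arithmetic: pixel count drives most of the per-frame
# fill/shading work and, loosely, the framebuffer bandwidth needed.
def pixels(width, height):
    return width * height

docked   = pixels(1920, 1080)   # 1080p image on the TV
handheld = pixels(960, 540)     # assumed quarter-resolution handheld target

print(docked / handheld)        # 4.0 -> roughly 1/4 the pixel work per frame,
                                # which is where the ~1/4 bandwidth figure comes from
```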

If you can have a portable that powerful, maybe it is almost like a new class of hardware device (seems like that would be right up Nintendo's alley too, a new type of hardware).

Then the "console" brother/dock/whatever could run the same games just at 1080P with all effects turned up to max if the developer chooses to do that. 

That I think would work OK. Right now the 3DS should be getting more support than it does and a big reason why it's missing a lot of games IMO is because the hardware is too far behind what developers are working on today. It should have ALL the major Japanese 3rd party titles but that's impossible right now.

Kingdom Hearts III should be on NX. So should Metal Gear Solid V. So should Final Fantasy XV. So should Yakuza. So should Resident Evil 7. If you work hard enough to be the market leader in Japan you should at least be reaping the rewards of that, but Nintendo doesn't because they don't have adequate portable hardware (third parties, including the Japanese, don't give a crap about the Nintendo home consoles).



fatslob-:O said:

Intel easily does way more to improve its x86 architecture than IBM ever does for Nintendo's PowerPC 750 derivative, and the modern Core architecture is NOTHING like the P6 ...

The Gecko and Broadway are literally identical, core for core aside from clocks and cache. If Marcan's words are to be believed, Espresso is just a higher clocked version of Broadway. That's a 10 year old CPU architecture without ANY sort of extensions ... 



Of course. But that is because that is Intel's bread and butter.
However, if you look at the Pentium 3's evolution from Katmai to Coppermine (on-die L2 cache, faster FSB, etc.) to Tualatin (larger L2 cache, faster FSB), they look like only minor improvements on paper, but there was a lot of reworking to make the most of each new process node.

Gecko and Broadway are based on the same core, yes, but they are different.

For example, Gecko is based upon the PowerPC 750CXe whilst Broadway is based upon the PowerPC 750CL.
The differences at a high level are insignificant, just like the difference between Katmai and Coppermine or Barton and Thorton...
However, Broadway did gain higher core clocks, a faster system bus, improved prefetching and newer instructions for graphics-related tasks; those instructions are actually similar to the ones found in Gecko, which had them added in over the stock 750CXe.

Now the jump between Broadway and Espresso is significantly larger. It has to be to enable a multi-core design, but it does share the same bed as the other chips.


fatslob-:O said:
I think it is the X1 that has the best CPU to GPU performance ratio. In terms of floating point performance, it is the X1 that has the least skewed ratio when taking 176 GFlops as a fair estimate for the Latte. In terms of integer performance, it is also the X1, since Nintendo's PPC 750 derivative is weaker in that aspect than in floating point. For branching, I think this is where Espresso may have an advantage, but AMD's VLIW5 architecture was notorious for being poor in that aspect. With GCN you can actually write highly performant uber-shader code just like on other modern GPU architectures, and it even supports indirect branching too, which further puts VLIW5 to shame. You can very much get more CPU performance on the HD twins in other ways, like programming a GPU like GCN as if it were a CPU! After all, the only thing special about a GPU is its fixed function units ...


Wii U's Espresso is 15 GFLOPS for double(?) precision if I remember correctly. The GPU is only 11x faster in floats.

The Xbox One, however... has a 35 GFLOPS double-precision CPU, which means the GPU is roughly 34x faster.
It will do 110 GFLOPS single precision on the CPU, which means its GPU is roughly 11x faster.

However, where Jaguar kicks it into another gear is with SIMD/AVX and where its various technologies come into play, like branch prediction, which gives Jaguar a massive efficiency edge.

Yep, with heavy-branching scenarios, Espresso should punch well above its weight thanks to that stupidly short pipeline.

fatslob-:O said:
For the majority of last gen it was FXAA, SMAA only started getting interesting on current gen ...

FXAA is a variation of morphological AA. :P It's Nvidia's marketing terminology for it. SMAA is a variation of MLAA/FXAA.

fatslob-:O said:
It definitely has some differences with Terascale but Xenos is most certainly VLIW ...


I know, I did confirm that. But it doesn't say much when the Radeon 9700 Pro, released in 2002, is also VLIW.

fatslob-:O said:
Games as well as hardware used to be very different back then. Most of the meshes in that time were only made up of hundreds of vertices and the truform was only built for small amounts of data expansion so that devs wouldn't go around abusing it too much. Games today are compute limited and increasing the amount of fragments will have some large impacts on shading and rasterization performance ...

Geometrically games are still simple and would still benefit from a simple tessellator such as "Truform".

Between the GeForce FX and GeForce 200 series (2003 and 2008 respectively, a 5 year gap), hardware only increased in geometry performance by about 3x, whereas shader performance increased by about 150x.
Then the GeForce 400 series (released in 2010, two years later) boosted geometry performance by 8x, and it has continued to increase from there.

Kinda puts things in perspective.

fatslob-:O said:

We did wait for more software and you'd be hard-pressed to find anyone arguing that the WII U has a definitive or absolute edge over the sub-HD twins in performance ...

An HD 6450 is pathetic even in Unigine Heaven ...

 

I actually did run some benchmarks on a Radeon 6450 and 6570 on VGChartz at one point, comparing the geometry and general image quality of those cards. Of course you couldn't have tessellation dialled up at 1440p with everything maxed at 60 fps, but there was still a decent, marked increase in image quality if you kept things on medium at 720p and 30 fps.

I do agree that the jump isn't massive over the HD twins, but with the Wii U you can see newer and more effects in a few games, whereas you almost needed a magnifying glass to tell the difference between the PlayStation 3 and Xbox 360.






Pemalite said:

Of course. But that is because that is Intel's bread and butter.
However, if you look at the Pentium 3's evolution from Katmai to Coppermine (on-die L2 cache, faster FSB, etc.) to Tualatin (larger L2 cache, faster FSB), they look like only minor improvements on paper, but there was a lot of reworking to make the most of each new process node.

Gecko and Broadway are based on the same core, yes, but they are different.

For example, Gecko is based upon the PowerPC 750CXe whilst Broadway is based upon the PowerPC 750CL.
The differences at a high level are insignificant, just like the difference between Katmai and Coppermine or Barton and Thorton...
However, Broadway did gain higher core clocks, a faster system bus, improved prefetching and newer instructions for graphics-related tasks; those instructions are actually similar to the ones found in Gecko, which had them added in over the stock 750CXe.

Now the jump between Broadway and Espresso is significantly larger. It has to be to enable a multi-core design, but it does share the same bed as the other chips.

Broadway got new instructions over Gecko?! I knew that Gecko got some extra instructions over the 750CXe as per Nintendo's request, but I didn't see any new instructions for Broadway in those documents ...

Starting from Appendix A in the Gecko document and Appendix A in the PPC 750CL document, I didn't find ANY difference when it came to instruction sets. If something wasn't listed in Appendix A of the PPC 750 document it was most likely in Appendix B or in a different order. Unless IBM or I made a huge oversight, Broadway isn't an improvement ISA-wise ...

I haven't taken a look at whether Broadway improved any instruction latencies over Gecko, such as better division logic to execute the division operation faster, but if I had to place my money right now I don't think that's likely ...

Pemalite said:

Wii U's Espresso is 15 GFLOPS for double(?) precision if I remember correctly. The GPU is only 11x faster in floats.

The Xbox One, however... has a 35 GFLOPS double-precision CPU, which means the GPU is roughly 34x faster.
It will do 110 GFLOPS single precision on the CPU, which means its GPU is roughly 11x faster.

However, where Jaguar kicks it into another gear is with SIMD/AVX and where its various technologies come into play, like branch prediction, which gives Jaguar a massive efficiency edge.

Yep, with heavy-branching scenarios, Espresso should punch well above its weight thanks to that stupidly short pipeline.

The Espresso is 14.8 GFlops for single precision floats and 17.4 GFlops for double precision. The GPU is 11.9x faster in single precision, and it most likely does not support double precision, going by AMD's past tradition of disabling that capability on lower end Terascale parts ...

X1's cat cores are capable of 112 GFlops for single precision and 66 GFlops for double precision. Its GPU has 1.3 TFlops of single precision performance, so it's 11.6x faster than the CPU in that aspect, and 81 GFlops when it comes to double precision ...
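(For anyone wanting to sanity-check those Espresso/Latte and Xbox One figures, here is a minimal peak-FLOPS sketch. The clocks and per-cycle widths below are the commonly cited ones and should be treated as assumptions rather than confirmed specs; it lands close to the 176 GFLOPS Latte estimate and the ~11-12x CPU-to-GPU ratios mentioned above.)

```python
# Back-of-the-envelope peak FLOPS: units x clock (GHz) x FLOPs per cycle per unit.
def peak_gflops(units, clock_ghz, flops_per_cycle):
    return units * clock_ghz * flops_per_cycle

# Wii U: 3-core Espresso @ ~1.24 GHz with paired singles (4 SP FLOPs/cycle/core);
#        Latte assumed to be 160 ALUs @ 550 MHz (2 FLOPs/cycle/ALU).
espresso = peak_gflops(3, 1.243, 4)      # ~14.9 GFLOPS single precision
latte    = peak_gflops(160, 0.550, 2)    # 176 GFLOPS single precision

# Xbox One: 8 Jaguar cores @ 1.75 GHz (8 SP FLOPs/cycle/core via 128-bit mul + add);
#           GPU with 768 ALUs @ 853 MHz.
jaguar   = peak_gflops(8, 1.75, 8)       # 112 GFLOPS single precision
xb1_gpu  = peak_gflops(768, 0.853, 2)    # ~1310 GFLOPS (~1.3 TFLOPS)

print(latte / espresso)    # ~11.8x GPU over CPU on the Wii U
print(xb1_gpu / jaguar)    # ~11.7x GPU over CPU on the Xbox One
```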

Pemalite said:

Geometrically games are still simple and would still benefit from a simple tessellator such as "Truform".

Between the GeForce FX and GeForce 200 series (2003 and 2008 respectively, a 5 year gap), hardware only increased in geometry performance by about 3x, whereas shader performance increased by about 150x.
Then the GeForce 400 series (released in 2010, two years later) boosted geometry performance by 8x, and it has continued to increase from there.

Kinda puts things in perspective.

I wouldn't say that games are geometrically simple anymore, with games approaching 16 pixels per quad, and the new game Dreams by Media Molecule shows that MICROPOLYGON rendering is viable on current gen consoles!

The bottlenecks of games from last generation do not apply to this generation. Shader programs these days aren't trivial when compared to the 6th generation. When you're shading twice the amount of geometry you will most likely need double the shading power, and the Latte is already starved for shading power as it is, so how exactly would the tessellator go about alleviating that?

It's not just about the tessellator! You should consider EVERYTHING in the system ... 

Pemalite said:

I actually did run some benchmarks on a Radeon 6450 and 6570 on VGChartz at one point, comparing the geometry and general image quality of those cards. Of course you couldn't have tessellation dialled up at 1440p with everything maxed at 60 fps, but there was still a decent, marked increase in image quality if you kept things on medium at 720p and 30 fps.

I do agree that the jump isn't massive over the HD twins, but with the Wii U you can see newer and more effects in a few games, whereas you almost needed a magnifying glass to tell the difference between the PlayStation 3 and Xbox 360.

An HD 6450 in Unigine Heaven pulled off 8 fps at 1200x800 on low settings. There's no way that turd can keep 30 fps on medium at 720p ...

What can the Wii U do that the HD twins couldn't at near the same performance?



Voice-of-truth said:
bonzobanana said:


There are benchmarks for the CPU, and the Wii U comes out very badly compared to the 360 and PS3. As for the GPU, you have written much text on what the Wii U GPU has, so surely you know the exact GFLOPS figure to put us out of our misery. All I can see online is that it's a Radeon, but its full feature set and performance are unknown; it's on a poor fabrication process with many other parts integrated on the same silicon (including the Wii GPU), and it draws an absolutely tiny amount of power despite that being a power-hungry, cheap fabrication process. The best arguments I've seen have put the GPU at 176 GFLOPS when everything is taken into account, and allowing for a few generations of improvement over the 360 GPU, that is exactly how it is performing. However, I think we all agree this is still a big improvement on the 360 and PS3, but you are still left with the huge issue that the Wii U has limited CPU processing and poor main memory bandwidth.

 

"The best arguments I've seen have put the gpu at 176 gflops"

Amazing how many times you repeat this hot garbage.

It's amazing how people like you are still pretending that somehow the Wii U is competitive in performance when it fails so badly even against the outgoing 360 and PS3.

I was only playing Mario Kart on the Wii U yesterday and thinking how on earth anyone can think those graphics are cutting edge. Basic cartoon graphics and horrible aliasing everywhere on heavily repeated textures. Still artistically a beautiful game, but clearly running on weak hardware, and that is one of the Wii U's best games visually.

When someone finally gives a GFLOPS figure for the Wii U GPU, people like you are going to look even more stupid than you do today. However, I doubt you'll ever realise it yourself, because unbelievably people like you are still pretending the Wii U isn't incredibly weak in CPU performance.

All you need is eyes to see how the Wii U performs. I'm sick to death of reading all the pathetic excuses for why almost every Wii U game is technically weak. The conspiracy theories, lazy programmers and other nonsense.

The Wii U is what it is: a cheap design using cheap components, only capable of modest performance. There is nothing complicated to understand.



sc94597 said:

1. "Removing bottlenecks" - bottlenecks are usually the type of things that if you don't have enough power they limit your system, but the difference to "just enough" could be quite marginal cost vs. a large benefit you gain. The performance gains, when bottlenecks are in the system are not linear/scale evenly by percentage. I am sure you know all of this.

An example in PC gaming:

Notice that marginal differences in CPU frequency and/or thread count lead to a significant improvement in frame-time stability in The Witcher 3; beyond a certain point the curve is flat, and before that it slopes somewhat linearly. If you have a 1.3 GHz 4-threaded CPU you will see gains by increasing that to 1.5 GHz (+15%), but after that you see no gains. That is what I meant by 10-15% being enough to resolve CPU bottlenecks in certain applications.

2. Most advances in CPU IPC from generation to generation (at the same clock speed) today are quite minuscule. Look at Haswell Refresh -> Skylake. That is what I meant by 10-15% being quite significant.

3. Possibly, since they are finally going to ditch the old GameCube architecture, they will make wiser decisions this time around. They could increase performance even if they go with AMD. There are low-profile GPU chips that would let them match XBO/PS4 performance at a $300 price point without exceeding their power consumption requirements, and I am sure there will be more options by the time they manufacture such a console (which probably won't come until the end of 2016/early 2017).

The Witcher 3 is a fairly CPU-agnostic game maximized to run on four threads. What I don't get is why you think it would be particularly relevant to console gaming. Do you think frame time and frame rate are particularly gimped by the Bulldozer architecture when most games are running at 30 fps, 60 fps tops? I haven't seen many benchmarks, but it can't be that relevant next to, say, bottlenecks on the GPU. Otherwise the engineers at Sony and Microsoft would have opted for faster CPUs and slower GPUs, if they were working strictly within the limits of a given TDP.
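(A toy model of the bottleneck point being argued back and forth here: if the per-frame CPU and GPU work largely overlap, frame time is set by whichever side is slower, so a modest CPU clock bump only helps until the GPU becomes the limit. The millisecond figures below are invented purely for illustration.)

```python
# Toy frame-time model: frame time ~= max(CPU time, GPU time) when CPU and GPU
# work overlap. All numbers are made up to show the shape of the curve.
GPU_MS   = 30.0   # fixed GPU cost per frame, in milliseconds
CPU_WORK = 45.0   # CPU cost per frame at 1.0 GHz, scaling roughly with 1/clock

for clock_ghz in (1.0, 1.3, 1.5, 1.7, 2.0):
    cpu_ms   = CPU_WORK / clock_ghz
    frame_ms = max(cpu_ms, GPU_MS)
    print(f"{clock_ghz:.1f} GHz -> {1000 / frame_ms:4.1f} fps")

# Frame rate climbs only while the CPU is the slower side (up to ~1.5 GHz here);
# past that the GPU caps it, which is the "gains, then a flat line" pattern
# described in the quoted post.
```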

Yes, it's quite a relevant jump. But again, not very relevant for gaming. I run an FX 8350 with a GTX 970. Run an i7 6700K (>50% IPC difference) with a GTX 960 and you'll still be way behind on frame rate on almost every single game. The lower the framerate, the larger the gap in my favour.

As for matching XBO/PS4 performance without increasing their power consumption requirements... eh. The Wii U/Wii/GameCube ran on 20-35 watts. That's extremely low for gaming. Even in 2016 the 15-35W Carrizo APUs won't come close to X1/PS4, and they'll still be 28nm. And that is assuming Nintendo will opt for very fresh hardware this time around, for the first time since the N64.