bonzobanana said:
1. You seem to have a strong bias in favour of the Switch rather than sitting on the fence and looking at the evidence of how the Switch performs. Your bias seems to be negating all evidence that differs from what you want to believe.
2. Those MIPS/DMIPS figures are admittedly only a rough guide to comparing chips, but you seem to be taking that as an opportunity to believe it's the Switch that is under-represented, when it's more likely the other way around, for the reasons I've given previously. You would obviously understand that the PowerPC figures are pretty much in line with many other PowerPC CPUs that are variants of the same processors in the PS3 and 360. Just correlate them with other PowerPC chips in that list of performance figures. There are no wild claims.
3. Dhrystone is just an integer performance check, and those figures are mainly DMIPS anyway.
4. It's not like those PPC figures seem strange: they state 2 MIPS per MHz per core on the 360, whereas the Wii was 2.3 MIPS per MHz, which was out-of-order execution like the PS3. So even with the dual threads on each core they are stating a low figure; it's only because there are three cores running at 3.2 GHz that the final figure is so good. Basically 6 × 3200, which is approximately 19,200. The 360's CPU seems to be designed around generating high floating-point performance.
5. You look at L.A. Noire on Switch and you can clearly see a system with a CPU bottleneck in its design, and this is hardly surprising with Nintendo.
6. I believe both the specification and the real-world performance support my view, but I'm happy to change my opinion if any evidence comes along to change it.
7. I was an early adopter of the Wii U, but it was clear when I started playing it that there was a CPU issue, and that was confirmed when the actual specs were leaked. I just don't think Nintendo is too bothered about CPU performance. Surely they could have run the Tegra at full CPU speed if they wanted to, but they didn't.
|
1. You might have a point if I hadn't provided evidence in the form of (a) Dhrystone benchmark results and (b) the developer testimonial about the relative real-world IPC of ARM/Jaguar vs. PPC. Furthermore, you made certain assumptions, like developers being able to take equal advantage of all of the Xenon's threads for games, which deserve to be critiqued. Compare it to PC gaming: in what games does one gain a huge benefit from using six threads over four? I can only think of simulation-heavy games like Cities: Skylines and Total War, with Crysis 3 being the exception for an action game.
2. Real-world benchmarks are usually a good predictor of relative performance (as long as one contextualizes them in one's assessment), but theoretical predictions are pretty useless without considering the architectural differences. That is why we've rejected this MIPS prediction (which is usually based on the best-performing instruction set's numbers) as being anything useful. Again, as permalite said,
"MIPS is only relevant if the CPU's being compared are of an identical architecture.
Nor is it representative of a complete processors capabilities anyway.
I mean... There are soundcards with 10,000+ MIPS, you aren't running Crysis on them."
3. Well yeah, when you have a dedicated and specialized thing called a GPU, you're going to be mainly using it to compute flops where you can. Hence the usefulness of Dhrystone when it comes to measuring the viability of general-purpose processing; see the sketch after this list for how a Dhrystone-style score turns into a DMIPS number. The people in the NeoGAF link I provided performed a more general benchmark which includes FP operations anyway, and the conclusion was pretty much the same.
4. But again, you can't just add up the MIPS among the various cores and say: see, it is more powerful because it has more MIPS. Most computations are going to run on two cores, with each extra core giving diminishing returns; the Amdahl's-law sketch after this list shows why.
5. I look at L.A. Noire and see an unpolished game. There is no reason why frame rate should affect gameplay speed if the game were polished. The number of high-quality Switch titles that reach a solid 60 fps relative to PS360 titles tells us just as much about relative CPU performance, unless one thinks the GPU is offloading a lot of the CPU's tasks, which could very well be a possibility.
6. If so, then provide the evidence rather than speculation.
7. Who knows; they could always free up the fourth core once they optimize their OS further. But generally, what you're seeing is not just a trend with Nintendo. The other platform-makers knew that by dividing the processing between the CPU and GPU more efficiently they could achieve better results. It is why GPGPU was such a big catchword on gaming forums when the PS4/XBO released. When you have vector operations to compute, it makes so much more sense to use a processor that has hundreds of weak cores versus one that has three or four strong ones; the last sketch below shows the kind of loop that fits that model. In Nintendo's case I suspect the Tegra was under-clocked for TDP/heat considerations.
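
To make point 3 concrete, here is a minimal C sketch of how a Dhrystone-style score becomes a DMIPS figure. The loop is a toy stand-in for the real Dhrystone kernel, not the actual benchmark (just the same flavour: integer ALU work and branches, no floating point); the only hard number in it is the standard convention that 1 DMIPS = 1757 loop iterations per second, the score of the reference VAX 11/780.

#include <stdio.h>
#include <time.h>

static volatile long sink; /* keeps the compiler from deleting the loop */

int main(void) {
    const long iters = 100000000L; /* 100 million toy "dhrystones" */
    long acc = 0;
    clock_t t0 = clock();
    for (long i = 0; i < iters; i++) {
        acc += (i ^ (acc >> 3)) + 7;   /* integer ALU work */
        if (acc & 1) acc -= i & 15;    /* data-dependent branch */
    }
    sink = acc;
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    /* Dhrystone convention: 1 DMIPS = 1757 iterations/second (VAX 11/780) */
    printf("%.0f loops/s  ~  %.0f \"DMIPS\"\n",
           iters / secs, iters / secs / 1757.0);
    return 0;
}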
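And for point 4, a sketch of Amdahl's law, which is why the "6 × 3200 ≈ 19,200" style of adding per-core MIPS overstates what games actually get. The parallel fraction p = 0.6 is an assumed figure for a typical game workload, not a measurement; with it, going from four cores to six buys only about 10% extra speed.

#include <stdio.h>

/* Amdahl's law: speedup on n cores when only a fraction p of the
   work parallelizes. p = 0.6 is an assumption, not a measured value. */
static double speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    const double p = 0.6;
    for (int n = 1; n <= 6; n++)
        printf("%d cores: %.2fx\n", n, speedup(p, n));
    return 0;
}

Raise p and extra cores help more, lower it and they help less, but no realistic value makes the naive per-core MIPS sum achievable.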
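Finally, for point 7, the kind of work GPGPU moves off the CPU. SAXPY here is a generic illustration, not any particular engine's code: every element is independent, so the loop can be split across hundreds of weak GPU cores, whereas a branchy game-logic step with serial dependencies cannot, and stays on the CPU.

#include <stddef.h>

/* SAXPY: y = a*x + y. Each iteration touches only its own element,
   so the whole loop is trivially data-parallel. */
void saxpy(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}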
Edit: By the way, here is some more evidence about the DMIPS figures.
http://lowendmac.com/musings/05/0513.html
"Update: Under Linux, the Xbox 360 has Dhrystone benchmark scores roughly comparable to a 1.25 GHz G4 Mac mini (also running Linux). That's a lot less power than we ever would have expected from a triple-core 3.2 GHz PowerPC machine."
Last edited by sc94597 - on 28 January 2018