freedquaker said: "You know what, I have no objection to any of that. What you are missing is that:
a) Those games are not taking advantage of close-to-the-metal programming, and are hampered by the high-level access of DirectX and OpenGL.
b) Consoles can utilize CPUs much more efficiently, with much faster CPU calls etc. This doesn't mean they'll magically have more CPU muscle, but it means the CPU is less of a bottleneck and is needed far less.
c) Consoles are designed to be parallel this time around, and that will be taken advantage of, so single-threaded performance is not the issue here anymore. There is a reason eight cores are in there.
d) Weak CPUs have always been the case in modern consoles, and their makers would have to be real idiots to put them there otherwise. Again, this doesn't mean they wouldn't benefit from a faster CPU on occasion, but obviously the added performance is not worth it and is better spent elsewhere.
e) For years, I have hardly ever heard developers complaining about a lack of CPU performance (with the exception of the Wii). The main complaint has always been the amount of memory, which is now handled handsomely.
It's time to surface and face the realities of actual life, rather than diving into unrealistic technicalities which hardly make any practical difference."
Well, if you are happy with games that have severely limited simulation complexity, then sure, a round of anemic CPUs is fine. Developers will always design around the bottlenecks of a system, and having a weak CPU means designing games which only offer superficial levels of simulation. Basically you will get games that are designed around a weak CPU: open-world games where nothing is simulated except the things you are looking at, and everything else just pops out of existence; linear cinematic experiences; and simple arena shooters, because that is what devs can do with a weak CPU and a reasonable GPU. Personally I would have liked more games where the game worlds were more complex and persistent. Having lots of RAM is nice, but data is useless if you can't process it. Don't get me wrong, I like high-res textures as much as the next guy, but if that is all devs can use the memory for, because they have little CPU power and plenty of GPU power, then that is disappointing. And before you say it, I know GPGPU will let the GPU take over some of the simpler simulation work like particle/cloth physics and maybe some pathfinding and hit detection, but it has its limitations.
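To make that last point concrete, here is a minimal C++ sketch (all the types and names are hypothetical, not from any real engine) of why particle-style work offloads to GPGPU easily while something like pathfinding tends to stay on the CPU:

```cpp
// Illustrative sketch: GPU-friendly vs CPU-bound simulation work.
#include <queue>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

// GPU-friendly: every particle gets the same independent arithmetic with no
// data-dependent branching, so this loop maps directly onto one compute
// thread per particle.
void update_particles(std::vector<Particle>& ps, float dt, float gravity) {
    for (Particle& p : ps) {
        p.vy -= gravity * dt;
        p.px += p.vx * dt;
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
    }
}

struct NavNode { std::vector<int> neighbours; };

// CPU-bound: breadth-first traversal of a nav graph is index chasing with
// data-dependent branches and a shared frontier, which maps poorly to GPGPU,
// so this kind of work stays on the CPU and is capped by it.
std::vector<int> bfs_visit_order(const std::vector<NavNode>& graph, int start) {
    std::vector<int> visited(graph.size(), 0), order;
    std::queue<int> frontier;
    frontier.push(start);
    visited[start] = 1;
    while (!frontier.empty()) {
        int n = frontier.front();
        frontier.pop();
        order.push_back(n);
        for (int nb : graph[n].neighbours)
            if (!visited[nb]) { visited[nb] = 1; frontier.push(nb); }
    }
    return order;
}

int main() {
    std::vector<Particle> ps(1000, Particle{0, 10, 0, 1, 0, 0});
    update_particles(ps, 1.0f / 30.0f, 9.81f);

    std::vector<NavNode> graph(3);
    graph[0].neighbours = {1};
    graph[1].neighbours = {2};
    bfs_visit_order(graph, 0);
}
```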
A few other notes: while the PS360's CPUs were very limited in terms of IPC compared to the PC CPUs of the time (honestly, in terms of IPC they were closer to NetBurst than anything else, IIRC), they actually held their own in terms of FLOPS. Especially the Cell, which held its own even against top-end CPUs of the time, which is why PS3s were used as cheap supercomputers and why the console contributed so much to Folding@home back in the day. No one is doing to-the-metal programming for the CPU in games in this day and age: engines are written in C++ for the most part on all platforms, and actual game code is mostly done in high-level scripting languages. As for API overhead, sure, that does have an impact, but games can still be CPU-heavy. For example:

The non-rendered Civ V AI benchmark should have next to zero API overhead, and the A10 still gets absolutely crushed.
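Going back to the FLOPS point for a second, here is some back-of-the-envelope arithmetic. The figures are commonly cited approximations, and the Core 2 Quad is just a representative desktop part of the era, not a claim about any specific benchmark:

```cpp
// Rough peak single-precision FLOPS, assuming commonly cited figures.
#include <cstdio>

int main() {
    // PS3 Cell: 6 SPEs available to games, 3.2 GHz, ~8 SP flops/cycle each
    // (4-wide SIMD fused multiply-add), not counting the PPE.
    double cell_spe_gflops = 6 * 3.2 * 8;     // ~153.6 GFLOPS
    // 2.4 GHz Core 2 Quad: 4 cores, 4-wide SSE add + 4-wide SSE mul per cycle.
    double core2_quad_gflops = 4 * 2.4 * 8;   // ~76.8 GFLOPS
    std::printf("Cell SPEs ~%.1f GFLOPS, desktop quad ~%.1f GFLOPS\n",
                cell_spe_gflops, core2_quad_gflops);
    // Peak FLOPS says nothing about IPC on branchy game code: hitting these
    // numbers takes hand-tuned SIMD over streaming data, which is exactly what
    // the SPEs (and Folding@home) were good at.
    return 0;
}
```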
And this is why you haven't heard complaints about the CPUs in the PS360:
"
The first point relates to all of the things that are usually handled by the CPU and the second point relates to things that are traditionally processed by the GPU. Over the successive platform generations the underlying technology has changed, with each generation throwing up its own unique blend of issues:
- Gen1: The original PlayStation had an underpowered CPU and could draw a small number of simple shaded objects.
- Gen2: PlayStation 2 had a relatively underpowered CPU but could fill the standard-definition screen with tens of thousands of transparent triangles.
- Gen3: Xbox 360 and PlayStation 3 had the move to high definition to contend with, but while the CPUs (especially the SPUs) were fast, the GPUs were underpowered in terms of supporting HD resolutions with the kind of effects we wanted to produce.
In all of these generations it was difficult to maintain a steady frame-rate as the amount happening on-screen would cause either the CPU or GPU to be a bottleneck and the game would drop frames. The way that most developers addressed these issues was to alter the way that games appeared, or played, to compensate for the lack of power in one area or another and maintain the all-important frame-rate."
On the PS4/Xbox One:
"Removing these "bubbles" in the CPU pipeline combined with removing some nasty previous-gen issues like load-hit stores means that the CPU's Instructions Per Cycle (IPC) count will be much higher. A higher IPC number means that the CPU is effectively doing more work for a given clock cycle, so it doesn't need to run as fast to do the same amount of work as a previous generation CPU. But let's not kid ourselves here - both of the new consoles are effectively matching low-power CPUs with desktop-class graphics cores.
So how will all of this impact the first games for the new consoles? Well, I think that the first round of games will likely be trying to be graphically impressive (it is "next-gen" after all) but in some cases, this might be at the expense of game complexity. The initial difficulty is going to be using the CPU power effectively to prevent simulation frame drops and until studios actually work out how best to use these new machines, the games won't excel. They will need to start finding that sweet spot where they have a balanced game engine that can support the required game complexity across all target consoles. This applies equally to both Xbox One and PlayStation 4, though the balance points will be different, just as they are with 360 and PS3."
@TheVoxelman on twitter
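For anyone wondering what the load-hit-store issue mentioned above actually looks like, here is a rough C++ illustration. It is simplified, the function names are made up, and on the old consoles the stalls showed up in the generated PowerPC code rather than in the source:

```cpp
// Illustrative sketch of the "load-hit-store" pattern: a store followed
// shortly by a load from the same address. On the in-order PowerPC cores in
// the 360/PS3 this could stall the pipeline for dozens of cycles; the
// out-of-order cores in the new consoles largely hide it.
#include <cstdint>

// Float-to-int conversions often went through memory on those CPUs: the
// compiler stored the converted value to the stack and immediately reloaded
// it into an integer register. 'volatile' is used here only to force a
// similar store/reload round trip.
int truncate_via_memory(float f) {
    volatile int32_t tmp = static_cast<int32_t>(f); // store to memory
    return tmp;                                     // dependent load right behind it
}

// Accumulating through a pointer is another classic shape: every iteration
// loads the value stored on the previous one. Summing into a local and
// writing it back once at the end avoids the stall.
void accumulate(int* total, const int* values, int n) {
    for (int i = 0; i < n; ++i)
        *total += values[i];
}

int main() {
    int total = 0;
    int values[4] = {1, 2, 3, 4};
    accumulate(&total, values, 4);
    return truncate_via_memory(3.7f) + total; // 3 + 10
}
```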