Biggerboat1 said:
I'm basing my thoughts on the rumoured leaks of Lockhart & Anaconda sharing the same CPU, having 12GB vs 16GB of RAM &, of course, the 4 vs 12 TFLOP GPUs. So I'm not sure why you're alluding to a weaker CPU - have there been new leaks?
I'm admittedly a novice in tech specs, but isn't resolution pretty much a GPU-related matter? I.e., the geometry, AI, etc. are the same amount of work for the CPU regardless of whether the GPU is rendering in 1080p or 4K? Frame-rate is different & affects both.
The idea of balancing your CPU and GPU concerns bottlenecking, or when one component is preventing another component from performing to its full potential. For example, if you have an incredibly high-powered graphics card and a mediocre CPU, the graphics card could be finishing its work faster than the CPU can accept that work and issue the GPU more. At this point, even if you install a better graphics card, your computer's performance isn't going to improve, because your CPU is already at the limit of how fast it can feed work to a graphics card. The same applies to having an incredibly high-powered CPU that issues tasks to the GPU faster than the graphics card can handle them.
Isn't that essentially what the Switch is doing - it has a down-clocked GPU in handheld mode, so it reduces resolution but otherwise everything else is the same?
I accept I could be wrong here though and will happily stand to be corrected.
Bottlenecking refers to a limitation of some sort, especially one caused by hardware. When you're playing a game, there are two common bottlenecks (or limitations) on your framerate: the CPU or the GPU. With the GPU so often described as the component that matters most to gamers, of course you don't want it to be held back, right?
Rendering and displaying an image on your screen takes many steps, and the GPU does much of that work. But first, it needs to be told what to do, and it needs the required data to do its job in the first place.
At the CPU, API calls are executed. Control passes to the OS and then to the GPU drivers, which translate the API calls into commands. These commands are sent to the GPU, where they sit in a command buffer (of which there may be several in modern graphics APIs) until they are read and executed (in other words, carried out). Even before this can happen, there's more work: the CPU has to run the logic that determines what needs to be rendered on screen in the first place, based on user input and the game's internal rules. On top of sending the GPU commands containing instructions, data, and state changes, the CPU also handles things like user input, AI, physics, and the environment in games. Meanwhile, the GPU is tasked with, as GamersNexus puts it concisely, "drawing the triangles and geometry, textures, rendering lighting, post-processing effects, and dispatching the packaged frame to the display."
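To make that pipeline concrete, here's a minimal sketch in Python. Every name in it is invented for illustration; nothing here corresponds to a real graphics API.

```python
from collections import deque

# Heavily simplified model of the CPU -> command buffer -> GPU flow described
# above. All names are invented for illustration; this is not a real API.
command_buffer = deque()

def cpu_frame():
    # Game logic first: decide what needs rendering based on user input
    # and the game's internal rules.
    visible_objects = ["terrain", "player", "enemies"]  # placeholder scene
    # The driver then translates API calls into commands and queues them.
    for obj in visible_objects:
        command_buffer.append(f"draw {obj}")
    command_buffer.append("present frame")

def gpu_drain():
    # The GPU reads commands out of the buffer and executes them.
    # If the buffer runs empty while the GPU is ready for more,
    # the GPU sits idle -- that's the CPU-limited case below.
    while command_buffer:
        print("GPU executing:", command_buffer.popleft())

cpu_frame()
gpu_drain()
```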
Now, here's where bottlenecking comes in. If the CPU can't send commands as fast as the GPU pulls them out of the command buffer and executes them, the buffer will spend time sitting empty with the GPU waiting for input, and you're considered CPU-limited. If the GPU isn't executing the commands fast enough, then you're GPU-limited, and the CPU will spend time waiting on the GPU.[source] When you're CPU-limited (also called CPU-bound or CPU-bottlenecked), GPU utilization (the share of time it spends not being idle) decreases as the bottleneck becomes more severe; likewise, when you're GPU-limited (also called GPU-bound or GPU-bottlenecked), CPU utilization will drop to an extent as the bottleneck becomes more severe.
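You can see why utilization drops with a quick back-of-the-envelope calculation (the millisecond figures are made up; real frame timing is messier): the frame lasts as long as the slower side, so the faster side idles for the difference.

```python
# Toy numbers: the frame takes as long as the slower component, and the
# faster one idles for the rest. All figures are invented for illustration.
def gpu_utilization(cpu_ms, gpu_ms):
    frame_ms = max(cpu_ms, gpu_ms)   # the slower side sets the frame time
    return 100 * gpu_ms / frame_ms   # share of the frame the GPU is busy

for cpu_ms, gpu_ms in [(10, 10), (15, 10), (20, 10)]:
    print(f"CPU {cpu_ms} ms, GPU {gpu_ms} ms -> "
          f"GPU busy {gpu_utilization(cpu_ms, gpu_ms):.0f}% of the frame")
```

As the CPU cost per frame grows from 10 ms to 20 ms, the GPU's share of the frame spent working falls from 100% to 50% — exactly the falling-utilization symptom described above.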
In an ideal world, there would be no bottlenecks. That would require the CPU, the PCI-e bus, and every stage of the GPU's pipeline to be equally loaded, or every component to be infinitely fast. But this is not an ideal world: something, somewhere, always holds back performance. And that doesn't just go for the CPU and GPU, either.
https://pcpartpicker.com/forums/topic/214851-on-cpugpu-bottlenecking-in-games
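The same toy model shows why upgrading one component stops paying off past the balance point (again, invented numbers, just a sketch of the idea rather than a real benchmark):

```python
# A frame can't finish faster than the slower of the two components.
# Millisecond costs are made up purely for illustration.
def fps(cpu_ms, gpu_ms):
    frame_ms = max(cpu_ms, gpu_ms)  # the slower component sets the pace
    return 1000 / frame_ms

print(fps(cpu_ms=10, gpu_ms=20))  # 50.0  -> GPU-limited
print(fps(cpu_ms=10, gpu_ms=10))  # 100.0 -> balanced
print(fps(cpu_ms=10, gpu_ms=5))   # 100.0 -> CPU-limited: the faster GPU bought nothing
```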
MS knows one of the mistakes they made with the XB1 was falling below the PS4 on performance, so it seems to me that it would be imperative for them not to hobble their top SKU by creating a poorly thought-out entry SKU that would meaningfully limit the technical scope of games.
I guess my question would be: are you saying that creating 1080p & 4K SKUs is technically impossible without hobbling the latter? I think that, if executed badly, it could, but I don't see why it's technically impossible... or even necessarily that difficult, as long as they don't cheap out on the other components (which isn't the case, based on these rumoured specs).