Nate4Drake said:
Biggerboat1 said:

I'm basing my thoughts on the rumoured leaks of Lockhart & Anaconda sharing the same CPU, having 12GB vs 16GB of ram & of course, the 4 vs 12 TFLOP GPUs. So I'm not sure why you're alluding to a weaker CPU - have there been new leaks?

I'm admittedly a novice in tech specs but isn't resolution pretty much a GPU related matter? I.e. the geometry, ai, etc. are the same amount of work for the CPU, regardless of whether the GPU is rendering in 1080 or 4K? Frame-rate is different & effects both.

The idea of balancing your CPU and GPU concerns bottlenecking, or when one component prevents another from performing to its full potential. For example, if you pair an incredibly high-powered graphics card with a mediocre CPU, the graphics card can finish its work faster than the CPU can accept it and issue the GPU more. At that point, even installing a better graphics card won't improve your computer's performance, because your CPU is already at the limit of how fast it can feed a graphics card. The same applies in reverse: an incredibly high-powered CPU can issue tasks faster than a modest graphics card can handle them.
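
To make that concrete, here's a toy model (my own illustrative numbers, not real measurements from any hardware): the CPU prepares frames and the GPU renders them in a pipeline, so throughput is set by whichever stage is slower.

```python
# Toy model of CPU/GPU bottlenecking. The per-frame costs in
# milliseconds are made up for illustration.

def effective_fps(cpu_ms_per_frame, gpu_ms_per_frame):
    """The slower of the two pipeline stages paces the whole frame."""
    bottleneck_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)
    return 1000.0 / bottleneck_ms

# Mediocre CPU (12 ms/frame) paired with a fast GPU (5 ms/frame):
print(effective_fps(12, 5))   # CPU-limited, about 83 FPS
# Upgrading the GPU further (5 -> 3 ms/frame) changes nothing:
print(effective_fps(12, 3))   # still about 83 FPS
```

Notice that the second call returns the same frame rate despite the "GPU upgrade", which is exactly the situation described above.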

Isn't that essentially what the Switch is doing - it has a down-clocked GPU in handheld mode so reduces resolution but otherwise everything else is the same?

I accept I could be wrong here though and will happily stand to be corrected.

Bottlenecking refers to a limitation of some sort, especially one caused by hardware. When you're playing a game, there are two common bottlenecks (or limitations) on your framerate — the CPU or the GPU. Since the GPU is so often called the component that matters most to gamers, of course you don't want it to be held back, right?

Rendering and displaying an image on your screen takes many steps, and the GPU does much of that work. But first it needs to be told what to do, and it needs the data required to do its job in the first place.

At the CPU, API calls are executed. Control passes to the OS and then to the GPU drivers, which translate the API calls into commands. These commands are sent to the GPU, where they sit in a command buffer (modern graphics APIs may use several) until they are read and executed (in other words, carried out). Even before this can happen, there's more work — the CPU also has to run the logic that decides what needs to be rendered on screen, based on user input and the game's internal rules. On top of sending the GPU commands, data, and state changes, the CPU also handles things like user input, AI, physics, and the environment in games. Meanwhile, the GPU is tasked with, as GamersNexus puts it concisely, "drawing the triangles and geometry, textures, rendering lighting, post-processing effects, and dispatching the packaged frame to the display."
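
The command-buffer handoff can be sketched as a producer/consumer loop (a toy simulation with made-up rates, not how a real driver works): the CPU pushes commands into a buffer each "tick" and the GPU drains it; if the buffer runs empty, the GPU starves.

```python
from collections import deque

# Toy sketch of a CPU filling a command buffer that the GPU drains.
# cpu_rate / gpu_rate are hypothetical commands per tick.

def simulate(cpu_rate, gpu_rate, ticks=100, capacity=64):
    buf = deque()
    gpu_idle = 0
    for _ in range(ticks):
        for _ in range(cpu_rate):
            if len(buf) < capacity:
                buf.append("draw")      # command recorded into the buffer
        for _ in range(gpu_rate):
            if buf:
                buf.popleft()           # GPU executes a command
            else:
                gpu_idle += 1           # buffer empty: GPU waits on the CPU
    return gpu_idle

print(simulate(cpu_rate=2, gpu_rate=4))  # slow CPU: GPU idles often
print(simulate(cpu_rate=4, gpu_rate=2))  # fast CPU: GPU never starves
```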

Now, here's where bottlenecking comes in. If the CPU isn't sending commands as fast as the GPU can pull them out of the command buffer and execute them, the buffer will sit empty while the GPU waits for input, and you're considered CPU limited. If the GPU isn't executing the commands fast enough, you're GPU limited, and the CPU will spend time waiting on the GPU.[source] When you're CPU-limited (also called CPU-bound or CPU-bottlenecked), GPU utilization (the share of time it spends not idle) drops as the bottleneck becomes more severe; when you're GPU-limited (AKA GPU-bound or GPU-bottlenecked), your CPU utilization will go down to an extent as the bottleneck becomes more severe.
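
That utilization effect falls straight out of the pipeline picture. A toy calculation (illustrative millisecond costs, not real profiler data): the slower stage is busy 100% of each frame interval, and the faster stage idles for the rest of it.

```python
# Toy utilization model: whichever stage is slower runs at 100%,
# and the other stage idles for the remainder of the frame time.

def utilization(cpu_ms, gpu_ms):
    frame_ms = max(cpu_ms, gpu_ms)   # frame paced by the slower stage
    return {"cpu": cpu_ms / frame_ms, "gpu": gpu_ms / frame_ms}

# CPU-limited: the GPU is busy only half of each frame interval.
print(utilization(cpu_ms=10, gpu_ms=5))   # {'cpu': 1.0, 'gpu': 0.5}
# GPU-limited: now it's the CPU that spends most of its time waiting.
print(utilization(cpu_ms=4, gpu_ms=16))   # {'cpu': 0.25, 'gpu': 1.0}
```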

In an ideal world, there would be no bottlenecks. That would require the CPU, the PCI-e bus, and every stage of the GPU's pipeline to be equally loaded, or every component to be infinitely fast. But this is not an ideal world; something, somewhere, always holds back performance. And this doesn't just go for the CPU and GPU, either.

https://pcpartpicker.com/forums/topic/214851-on-cpugpu-bottlenecking-in-games

MS knows one of the mistakes they made with the XB1 was falling below the PS4 on performance, so it seems to me it would be imperative for them not to hobble their top SKU by creating a poorly thought-out entry SKU that would meaningfully limit the technical scope of games.

I guess my question would be - are you saying that creating a 1080p & 4K SKU is technically impossible without hobbling the latter SKU? I think that if executed badly it could, but I don't see why it's technically impossible... or even necessarily that difficult, as long as they don't cheap out on the other components (which isn't the case, based on these rumoured specs).

It's technically possible to squeeze both SKUs, but it would require much more extra work, and devs shouldn't have to develop and conceive the game with the lowest hardware in mind. The result could theoretically be, in some cases, depending on a developer's specific vision for a particular game, a more advanced game in areas such as physics, AI, and animations, on top of the better graphics and performance on the Elite SKU. Is it feasible? Is it fair for the majority of gamers, who will buy the cheapest SKU?

Now, I'm not a tech guru either, and this is just according to my knowledge, but I've always wondered how scalability can work in areas such as physics, animations, collision systems, interactions with the environment, AI and gameplay mechanics. How much more complex is "scalability" in those areas? Can developers take this into account - is it feasible, or too complex and costly for the majority of developers? This also depends on how devs decide to allocate the extra power of the more powerful CPU.

You also forgot RAM, whose size and bandwidth are necessary to keep both components fed.

There is no reason for a system whose GPU is 4x stronger than the other's to have all its other components the same. If you want both architectures balanced, the CPU, RAM size, and bandwidth will scale alongside the GPU. So there is no reason to say a system with a 4x stronger GPU isn't a system "overall about 4x as strong".

On the bottleneck: the ideal is that the whole system struggles at the same time, so there is no excess and no shortfall in any specific component. Theoretically that is what Sony did with the PS4 Pro - when they roughly doubled the GPU with only minimal improvement to the CPU and RAM, they couldn't really make full use of the GPU, with most of the extra power going to higher resolution or minimal performance gains. They couldn't increase RAM and CPU much because it also wouldn't have helped much (besides cost and the limits of keeping the architecture fully compatible).
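
Plugging illustrative numbers (my own, not real console profiles) into the same slower-stage-paces-the-frame idea shows why doubling only the GPU mostly buys resolution, not frame rate:

```python
# Illustrative per-frame costs in ms; not measurements from any console.
# Doubling GPU throughput with the CPU unchanged gives little frame-rate
# gain once the CPU becomes the new bottleneck, so the spare GPU time
# tends to go to higher resolution instead.

def fps(cpu_ms, gpu_ms):
    # The slower of the two stages paces the whole frame.
    return 1000.0 / max(cpu_ms, gpu_ms)

base = fps(cpu_ms=14.0, gpu_ms=16.0)     # GPU-limited: 62.5 FPS
upgraded = fps(cpu_ms=14.0, gpu_ms=8.0)  # CPU-limited: ~71.4 FPS, far from 2x
print(base, upgraded)
```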


