vivster said:
Bofferbrauer2 said:

Would be true if you could upgrade the GPU. But if the RTX 2060 is the limit for PCIe 3 x8, then a 2080 Ti is the limit for x16. Ampere will need PCIe 4 to deliver its full performance, something NVidia has already hinted at.

So if you can't upgrade the GPU to make use of the extra power, then you might as well settle for a 10600K. Or go AMD to get PCIe 4 and thus full performance from the next-gen GPUs.

I think the jury is still out on that. From what I've read, a 2080 Ti barely saturates x8. I will certainly wait for Ampere benchmarks before I make a decision on that.

Here is a nice test

https://www.igorslab.de/en/pcie-4-0-and-pcie-3-0-different-between-x8-andx16-with-the-fast-fastest-cards-where-the-bottle-neck-begins/

Overall there is no noticeable difference between PCIe 3 x8 and x16 on the highest-end GPUs under heavy loads. Also, the required bandwidth seems to go down at higher resolutions, probably because of the lower FPS.

If you think the high resolutions are the heavy loads, think again. All they do is push you into a more GPU-limited scenario, which lowers the frame rate, hence why the gap shrinks.
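
To make that bandwidth argument concrete, here's a rough back-of-the-envelope sketch in Python. The per-frame transfer size and FPS figures are made-up placeholders, not measurements from the linked test; only the approximate usable PCIe 3.0 link rates (~7.9 GB/s for x8, ~15.8 GB/s for x16) are real, and protocol overhead is ignored.

# Rough sketch: PCIe traffic scales with frame rate, so a GPU-limited 4K run
# needs less bus bandwidth than a high-FPS 720p run on the same game.
# The per-frame transfer sizes and FPS values below are made-up illustrations.

PCIE3_X8_GBPS = 7.88    # usable PCIe 3.0 x8 bandwidth, approx.
PCIE3_X16_GBPS = 15.75  # usable PCIe 3.0 x16 bandwidth, approx.

scenarios = [
    # (label, assumed MB transferred per frame, assumed FPS)
    ("720p, not GPU-limited", 40, 250),
    ("4K, GPU-limited",       40, 70),
]

for label, mb_per_frame, fps in scenarios:
    needed_gbps = mb_per_frame * fps / 1000.0  # GB/s of bus traffic
    print(f"{label}: ~{needed_gbps:.1f} GB/s needed "
          f"(x8 limit {PCIE3_X8_GBPS} GB/s, x16 limit {PCIE3_X16_GBPS} GB/s)")

With those placeholder numbers, the 720p case would already spill past the x8 budget while the 4K case fits comfortably in both, which is the shape of the effect the test shows.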

The 720p results are thus more representative of what's to come, and you can see that x8 already lags by 6.3% in average FPS and 7.4% in minimum FPS.

But this is not just about FPS: a chip as huge as a 3080, 3090 or Big Navi (with 80 CUs) will also have to be fed far more draw calls, which increases the PCIe overhead a lot. And if you look at the frametimes, you'll see that x8 shows more and higher spikes than x16, resulting in microstutter that the raw FPS numbers won't show. Future games will make better use of more cores, which will only accentuate the problem further.
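
As a small illustration of why average FPS hides that kind of microstutter, here's a sketch with two synthetic frametime traces I made up (they are not data from the test, the "x8-like"/"x16-like" labels just mirror the observation above): both average the same FPS, but one has occasional spikes that only show up in the frametime percentiles.

# Synthetic illustration: two made-up frametime traces with the same mean,
# one smooth and one with occasional spikes (microstutter).
import statistics

smooth = [10.0] * 100              # steady 10 ms frames (~100 FPS)
spiky  = [9.0] * 90 + [19.0] * 10  # same 10 ms mean, but 10% spiky frames

for name, frametimes_ms in (("smooth (x16-like)", smooth), ("spiky (x8-like)", spiky)):
    avg_fps = 1000.0 / statistics.mean(frametimes_ms)
    p99 = sorted(frametimes_ms)[int(0.99 * len(frametimes_ms)) - 1]
    print(f"{name}: avg {avg_fps:.0f} FPS, ~99th-percentile frametime {p99:.1f} ms")

Both traces report 100 FPS on average, but the spiky one nearly doubles its worst frametimes, which is exactly the kind of difference you only see in a frametime plot.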