Bofferbrauer2 said:
vivster said:

I think the jury is still out on that. From what I've read, a 2080 Ti barely saturates x8. I will certainly wait for Ampere benchmarks before I make a decision on that.

Here is a nice test

https://www.igorslab.de/en/pcie-4-0-and-pcie-3-0-different-between-x8-andx16-with-the-fast-fastest-cards-where-the-bottle-neck-begins/

Overall there is no noticeable difference between PCIe 3.0 x8 and x16 on the highest-end GPUs under heavy loads. The required bandwidth also seems to go down at higher resolutions, probably because of the lower fps.

If you think the high resolutions are the heavy loads, think again. All it does is bring you into a more CPU-limited scenario, which is why the gap drops.

The 720p results are thus more representative of what's to come, and you can see that x8 already lags by 6.3% in average and 7.4% in minimum FPS.

But this is not just about FPS: a chip as huge as a 3080, 3090 or Big Navi (with 80 CUs) will also issue tons of draw calls, which increases the overhead a lot. And if you look at the frametimes, you'll see that x8 has more and higher spikes than x16, resulting in microstutter that the raw FPS numbers won't show. Future games will make better use of more cores, which will only accentuate the problem further.

I get what you are saying, but how do heavy loads at high resolution bring you closer to a CPU limit when they are clearly GPU-limited? Surely the smaller gap is because of the lower framerate, resulting in fewer calls per second.
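Quick back-of-the-envelope to illustrate what I mean (the per-frame upload figure is a made-up placeholder, not anything measured in the article):

```python
# Rough sketch: PCIe traffic per second = data the CPU uploads per frame * frames per second.
# The 30 MB per-frame figure below is hypothetical, just to show how the scaling works.

PCIE3_X8_GBPS = 7.9    # approx. usable PCIe 3.0 x8 bandwidth in GB/s
PCIE3_X16_GBPS = 15.8  # approx. usable PCIe 3.0 x16 bandwidth in GB/s

def bus_traffic_gbps(mb_per_frame: float, fps: float) -> float:
    """GB/s pushed over the bus for a given per-frame upload and frame rate."""
    return mb_per_frame * fps / 1000.0

# Same hypothetical 30 MB of draw call data / dynamic buffers / streaming per frame,
# only the frame rate changes between a 720p-like and a 4K-like scenario:
for fps, label in [(300, "720p-ish"), (90, "4K-ish")]:
    gbps = bus_traffic_gbps(30, fps)
    print(f"{label}: {gbps:.1f} GB/s needed (x8 ~{PCIE3_X8_GBPS} GB/s, x16 ~{PCIE3_X16_GBPS} GB/s)")
```

Same per-frame traffic, but the high-fps case is the one that can actually outgrow x8.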

The point about the frametime spikes is a good call, though; I'm very sensitive to microstutter.
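On that note, here's a minimal sketch of why average FPS can hide exactly that kind of spiking (made-up frame times, not data from Igor's test):

```python
# Two synthetic runs with the same average frame time (10 ms -> 100 fps average),
# but one has periodic 29 ms spikes that the FPS average completely hides.
frames_smooth = [10.0] * 100
frames_spiky = [9.0] * 95 + [29.0] * 5   # avg = (9*95 + 29*5) / 100 = 10.0 ms

def avg_fps(frametimes_ms):
    return 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))

def p99_frametime_ms(frametimes_ms):
    # crude 99th percentile: the frame time only the worst ~1% of frames exceed
    return sorted(frametimes_ms)[int(0.99 * len(frametimes_ms))]

print(avg_fps(frames_smooth), avg_fps(frames_spiky))                    # 100.0 vs 100.0 fps
print(p99_frametime_ms(frames_smooth), p99_frametime_ms(frames_spiky))  # 10.0 ms vs 29.0 ms
```

Identical average FPS, completely different feel, and only the frametime plot or a percentile metric catches it.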


