caffeinade said:
JEMC said:
You're just lucky.
|
vivster said:
Having seen the presentation I'm pretty sure these graphs are with RT enabled. Which means abysmal framerates for the 1080 and 50% better than abysmal for the new card. The new cards have about 20% more CUDA cores and about the same clock. Where do you expect the 50% to come from? I'm expecting 20% on average.
|
Um, no. It's highly likely that these graphs aren't with RT enabled. Pascal can't keep pace with Turing in that kind of workload, and even Volta can't keep up: if it takes four V100s to still lag behind a single Turing card, a single Pascal card won't be close. https://s22.q4cdn.com/364334381/files/doc_presentations/2018/08/JHH_Gamescom_FINAL_PRESENTED.PDF Go to page fourteen. Pascal wishes it could keep up. When Turing is getting around 30 FPS in Tomb Raider with RT on, Pascal is getting something in the single digits. Not even the most hardcore console players would consider that playable.
The new cards get a decent bandwidth increase too; let's not forget about that. It's going to make a real difference when rendering at high resolutions.
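Rough back-of-envelope, going by the announced specs (treat the exact figures as my assumption): the GTX 1080's 10 Gbps GDDR5X on a 256-bit bus gives 10 × 256 / 8 = 320 GB/s, while the RTX 2080's 14 Gbps GDDR6 on the same 256-bit bus gives 14 × 256 / 8 = 448 GB/s, so roughly 40% more.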
Turing seems to be quite a departure from Pascal. Nvidia has apparently reworked their SMs, so core for core Turing should come out ahead. They seemed quite proud of the fact that their GPUs can now execute 32-bit integer and floating-point instructions concurrently, so hopefully that translates into a decent performance boost over Pascal cards with the same FLOPS.
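To illustrate what that concurrency means, here's a minimal CUDA sketch (my own hypothetical kernel, not anything from Nvidia's material). The index and address calculations are INT32 work and the scaling is FP32 work; an SM with separate INT32 and FP32 datapaths can issue the two streams at the same time instead of taking turns on shared cores.

// Hypothetical kernel, just to show the mix of instruction types.
__global__ void scale_image(const float* in, float* out,
                            int width, int height, float k)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;  // INT32 index math
    int y = blockIdx.y * blockDim.y + threadIdx.y;  // INT32 index math
    if (x < width && y < height) {
        int idx = y * width + x;                    // INT32 address math
        out[idx] = in[idx] * k;                     // FP32 arithmetic
    }
}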
Whilst I'm not sure if we'll get support for it in the consumer cards, Turing does support something like AMD's Rapid Packed Math, where you do two FP16 operations in the time it takes to do one FP32. Intel GPUs support this, the Switch supports it, the PS4 Pro supports it, and Vega supports it. Assuming Turing supports it on the consumer side, these GPUs should be able to take advantage of the optimisations already done for those other devices. Games such as Wolfenstein II and Far Cry 5 already use it, so maybe they'll see an extra percent or five of performance.
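For the curious, packed FP16 looks something like this in CUDA (a minimal sketch I wrote, assuming Turing exposes double-rate FP16 the way Vega does): a __half2 carries two FP16 values, and an intrinsic like __hfma2 does a fused multiply-add on both lanes in one instruction, which is where "two FP16 ops for the price of one FP32" comes from.

#include <cuda_fp16.h>

// Hypothetical kernel: each __half2 packs two FP16 values, so one
// __hfma2 performs two multiply-adds. On double-rate FP16 hardware
// this runs at twice the FP32 rate.
__global__ void fma_packed(const __half2* a, const __half2* b,
                           __half2* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = __hfma2(a[i], b[i], c[i]);  // two FP16 FMAs in one op
    }
}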
There's other changes too, but Turing seems to be more than Pascal with some RT and Tensor cores strapped to it.
|
My comment was a joke, obviously, but given that you've taken it so seriously, can I just point out that even in those Nvidia-made benchmarks, not even half of the games see a 50% increase in performance?
Only the Infiltrator demo, Wolfenstein II and Shadow of War are above the 1.5x mark, with FF XV, PUBG, ARK, Hitman 2 and Andromeda coming close but still falling short of it.
We'll see how the cards actually perform when benchmarked by independent reviewers.
Please excuse my bad English.
Former gaming PC: i5-4670k@stock (for now), 16GB RAM at 1600 MHz and a GTX 1070
Current gaming PC: R5-7600, 32GB RAM 6000MT/s (CL30) and a RX 9060XT 16GB
Steam / Live / NNID : jonxiquet Add me if you want, but I'm a single player gamer.