shikamaru317 said:
Peh said:

You shouldn't use FLOPS as a performance measurement.

For example: the RTX 2080 Ti and the Radeon VII both do 13.4 TFLOPS at FP32 (float). Yet the 2080 Ti is at minimum 30% faster in gaming performance.

The 5700 XT, which uses the current architecture (RDNA) that will probably be used in these consoles, does even worse than the Radeon VII according to benchmarks.

I predict that the new consoles will be around 50% slower on average than what a current RTX 2080 Ti can achieve.

Radeon 7 is last-gen tech, GCN (Graphics Core Next). The 5700 XT and the next-gen consoles are using RDNA, not GCN. The 5700 XT was not designed as a replacement for the Radeon 7; it was designed to sit one performance bracket below it, and it is only 10 tflop to the Radeon 7's 13 tflop. That is the only reason it performs under the Radeon 7, and even then it still comes very close to the Radeon 7 in some games like The Division 2 and Forza Horizon 4, which you can see in the pics I just linked. You can safely assume that when AMD releases a 13 tflop RDNA card, likely later this year, it will beat the Radeon 7 by a decent margin, because RDNA is definitely a step up from GCN in terms of real-world performance per flop.

50% weaker than a 2080 Ti? I hardly think so. 12 tflop of RDNA should fall around 2080 tier in performance, and that is on PC. Console games get extra optimization compared to PC games. Taking that optimization into account, multiplat games on XSX should perform about the same as they do on a PC running a 2080 Ti. Even the rumored 9.2 tflop PS5 would likely come within 25% of a 2080 Ti PC once console optimization is factored in.

The initial point I was making is that FLOPS is a bad way of measuring performance, and you actually confirmed that by providing an example where the Radeon VII and the 5700 XT are nearly identical in gaming performance even though the difference between the two is around 3-4 TFLOPS at FP32.
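
Just to make those numbers concrete, here is a minimal sketch of where the peak FP32 figures come from (shader count × 2 FLOPs per clock × boost clock). The boost clocks are the ones listed on the TechPowerUp pages I link below; sustained clocks in real games differ, which is exactly why the paper numbers mislead:

```python
# Peak FP32 throughput = shaders x 2 FLOPs per clock (FMA) x boost clock.
# Specs as listed on TechPowerUp (links below); real sustained clocks and
# real game performance differ, which is the whole point of this thread.

def peak_fp32_tflops(shaders, boost_clock_mhz):
    return shaders * 2 * boost_clock_mhz / 1_000_000  # MFLOPS -> TFLOPS

gpus = {
    "RTX 2080 Ti (Turing)": (4352, 1545),
    "Radeon VII (GCN)":     (3840, 1750),
    "RX 5700 XT (RDNA)":    (2560, 1905),
}

for name, (shaders, clock) in gpus.items():
    print(f"{name}: {peak_fp32_tflops(shaders, clock):.2f} TFLOPS")

# RTX 2080 Ti (Turing): 13.45 TFLOPS
# Radeon VII (GCN):     13.44 TFLOPS
# RX 5700 XT (RDNA):     9.75 TFLOPS
```

Nearly identical on paper for the first two, yet roughly 30% apart in actual games.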

Let me show you what my prediction is based on:

First one: 

https://www.techpowerup.com/gpu-specs/radeon-rx-5700-xt.c3339

https://www.techpowerup.com/gpu-specs/geforce-rtx-2080-ti.c3305

The relative performance difference provided by this site is about 34%.

Second: 

https://gpu.userbenchmark.com/Compare/Nvidia-RTX-2080-Ti-vs-AMD-RX-5700-XT/4027vs4045

A benchmark comparing these two. The average user benchmark shows a difference of about 72%.
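
Since "X% faster" and "Y% slower" are not the same number, here is the quick arithmetic converting both quoted figures into "how much slower is the 5700 XT" terms. I'm reading both percentages as "the 2080 Ti is that much faster", which is my assumption about how the two sites present their numbers:

```python
# Convert "the 2080 Ti is X% faster than the 5700 XT"
# into "the 5700 XT is Y% slower than the 2080 Ti".
# Only the percentage arithmetic is shown; the sites' methodologies are not.

def percent_slower(percent_faster):
    ratio = 1 + percent_faster / 100   # 2080 Ti performance relative to 5700 XT
    return (1 - 1 / ratio) * 100       # 5700 XT shortfall relative to 2080 Ti

print(f"TechPowerUp   (+34%): 5700 XT is ~{percent_slower(34):.0f}% slower")
print(f"UserBenchmark (+72%): 5700 XT is ~{percent_slower(72):.0f}% slower")

# TechPowerUp   (+34%): 5700 XT is ~25% slower
# UserBenchmark (+72%): 5700 XT is ~42% slower
```

So even before any console-related cuts, the 9.75 TFLOPS RDNA card already sits roughly 25-42% below the 2080 Ti, depending on which source you trust.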

The Xbox Series X will be using an RDNA-based architecture. RDNA 2.0 is scheduled to release this year (2020). Will the RDNA 2.0 architecture find its way into the Xbox Series X? I don't know. That one also provides hardware-based ray tracing. If it goes with RDNA 2.0, then the difference could be smaller, but there is currently no data available that I could use.

I read an article saying that they will enhance RDNA 1.0 in some areas for the Xbox Series X. Sadly, I can't find that article anymore.

Now, that's just GPU talk. We have to put that GPU into a gaming console and combine it with other costly components. I assume that the Xbox Series X will cost around 599€ at launch. To achieve that price you have to make certain sacrifices: one is selling the console at a loss, the other is cutting back on certain features or on performance.

Thus, based on what is known to me and what I assume, I predict that the average performance of that console will be around 50% lower than that of an RTX 2080 Ti.

I am comparing a graphics card against a full gaming console, where the card alone costs around 1000€ and the console around 600€.

Add to that the fact that the console also comes with a controller, which is not free to produce.
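
To put the price argument into one toy calculation, using the numbers assumed in this post (a ~1000€ card, a ~599€ console, and my own ~50% prediction; assumptions, not measurements):

```python
# Toy performance-per-euro comparison using the assumptions from this post:
# a ~1000 EUR RTX 2080 Ti (card only) vs a ~599 EUR console that I predict
# to land around 50% slower on average. Illustrative only, not measured data.

systems = {
    "RTX 2080 Ti (card only)":             {"price_eur": 1000, "relative_perf": 1.00},
    "Next-gen console (box + controller)":  {"price_eur": 599,  "relative_perf": 0.50},
}

for name, s in systems.items():
    perf_per_100_eur = s["relative_perf"] / s["price_eur"] * 100
    print(f"{name}: {perf_per_100_eur:.3f} relative performance per 100 EUR")

# RTX 2080 Ti (card only):              0.100 relative performance per 100 EUR
# Next-gen console (box + controller):  0.083 relative performance per 100 EUR
```

Even under my prediction the console ends up in the same price-to-performance ballpark as the card alone, while its 599€ also has to cover a CPU, memory, storage, power supply, cooling and the controller, which is exactly why I don't expect 2080 Ti class performance out of that box.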

Last edited by Peh - on 02 January 2020

Intel Core i7 8700K | 32 GB DDR4 PC 3200 | ROG STRIX Z370-F Gaming | RTX 3090 FE | Crappy Monitor | HTC Vive Pro :3