
A 16Tflops Navi GPU, an 8-core Ryzen CPU, and a 1TB SSD in a $399-449 console in 2020? Some of you are saying this isn't good enough? Not sure if serious or....

For starters, Vega 10 is rumored to clock in at ~1525 MHz, delivering 12.5Tflops in a 225-250W TDP power envelope. Vega 10 will be AMD's flagship card in 2017, which implies a $549-649 MSRP. The GPU inside the OG PS4 was a cut-down HD 7850/7870, which by November 2013 cost less than $179 USD:
http://www.anandtech.com/show/7503/the-amd-radeon-r9-270x-270-review-feat-asus-his
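For context, those Tflops numbers fall straight out of the standard peak-FP32 formula (shaders × 2 FLOPs per clock × clock speed). A quick sketch, assuming the rumored 4096-shader count for Vega 10 alongside the known 1152-shader/800 MHz OG PS4 GPU:

```python
def peak_tflops(shaders: int, clock_mhz: float) -> float:
    # Each shader retires 2 FLOPs per clock (one fused multiply-add).
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(round(peak_tflops(4096, 1525), 1))  # rumored Vega 10 -> 12.5
print(round(peak_tflops(1152, 800), 2))   # OG PS4 GPU -> 1.84
```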

Vega 20 is supposed to be a 7nm die shrink of Vega 10 within a 150W TDP. Navi might hit 18Tflops by 2019, but it will cost $550-650 and carry a 225-250W TDP on 7nm. Based on power usage alone, the idea that the PS5 will have a 16Tflops Navi on 7nm is a nice daydream.

Then we get to the cost. Even if everything goes smoothly, a $650 2019 Navi would still cost $350 at retail by 2020. The GPU inside the PS4 Pro is barely at the level of a $170 RX 470 8GB and is slower than a $200 RX 480 8GB. For both the OG PS4 and the PS4 Pro, Sony used a GPU that cost roughly $169-179 at retail. So how in the world would a 16-18Tflops Navi with 16GB of HBM2 cost just $170-180 by 2020?

We haven't even gotten to the price of an 8-core Ryzen CPU or an M.2 SSD.

Fact is, even an 8-10Tflops PS5 would be a gigantic leap at the $399 price level in 2020. Until MS and Sony drop the PS4/XB1 as the base consoles, the 4.2-6Tflops specs of the Pro and Scorpio are nice marketing gimmicks. To truly see how much potential an 8-10Tflops PS5 has, games would need to be made from the ground up for the PS5 or for more powerful 2019-2020 PC hardware.

Not to mention how misleading the Tflops metric itself is. The 6.5Tflops GTX 1070 is faster than the 8.6Tflops Fury X (Tflops measures only a GPU's arithmetic throughput, but GPUs are often bottlenecked by other parts of the pipeline), while games like Uncharted 4: Lost Legacy and The Last of Us 2 look better than 90% of PC games. What difference does it make that the Titan XP had almost 12Tflops of compute power in 2016 when Uncharted 4: Lost Legacy and BF1 look excellent on a 1.84Tflops PS4? The extra power of modern hardware goes toward higher resolutions and frame rates, not so much toward next-generation graphical fidelity. So how are some of you judging that 8-10Tflops isn't a big enough leap when there is NOT one game in the world that was made from the ground up to fully take advantage of an 8-10Tflops GPU?
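To put numbers on the Fury X vs. GTX 1070 comparison: plugging the published shader counts and reference clocks into the peak-FP32 formula (shaders × 2 FLOPs per clock × clock speed) hands the Fury X the bigger number, even though the 1070 wins most game benchmarks. A rough sketch:

```python
def peak_tflops(shaders, clock_mhz):
    # 2 FLOPs per clock per shader (fused multiply-add)
    return shaders * 2 * clock_mhz * 1e6 / 1e12

fury_x = peak_tflops(4096, 1050)   # ~8.6 Tflops
gtx1070 = peak_tflops(1920, 1683)  # ~6.46 Tflops at reference boost
print(round(fury_x, 1), round(gtx1070, 2))
# Higher paper Tflops, yet the 1070 is the faster card in games:
# ROPs, geometry throughput, bandwidth efficiency, and drivers all matter.
```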

Let's look at the facts again. The GTX 580 had about 1.58Tflops of compute power in late 2010. There were zero games back then that looked anywhere near as good as Uncharted 4, Horizon Zero Dawn, Rise of the Tomb Raider, BF1, SW:BF, Dying Light, Resident Evil 7, Forza Horizon 3, Doom, etc. (maybe Crysis 1 or modded Skyrim/Oblivion). Even if the GTX 580 could hypothetically have run these games, they didn't exist. Point is, 2020-2022 next-gen games would look a lot better on an 8-10Tflops PS5 than today's games do on an 11.6Tflops Titan XP/Vega. Could Vega run some of those 2020 games? Of course, but that's not the point. The games launching in 2017 are not targeting the 1080/1080 Ti/Titan XP/Vega, but much, much weaker GPUs. That's why claiming an 8-10Tflops PS5 isn't a big enough leap is absurd. As I said earlier, NCU or Vega/Navi may have superior IPC/efficiency per Tflop than Pitcairn/Polaris 10.

Polaris 10 already has 18-40% greater IPC than GCN 1.0/1.1, depending on the modern AAA game, the level of tessellation used, etc. Therefore, even as a starting point it is 100% wrong to simply divide a 10Tflops 2019-2020 AMD NCU-GCN GPU by the 1.84Tflops Pitcairn GCN 1.0/1.1 to derive a performance relationship. With only 32 ROPs, 144 TMUs, and a 256-bit memory bus, the RX 480 often trades performance with the 64-ROP, 160-176-TMU R9 390/390X. If those metrics mean nothing to you, why do you insist on using ALU/shader-bound performance to judge GPUs? Even memory bandwidth cannot be compared, since the OG PS4 lacks delta color compression. Therefore, it is also 100% inaccurate to take 512GB/sec of memory bandwidth and simply divide it by the PS4's 176GB/sec.
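To make the naive-division fallacy concrete, here's a toy sketch using only the numbers above. The 18-40% range is the Polaris-over-GCN 1.0/1.1 IPC figure quoted earlier, applied purely for illustration of how much the raw Tflops ratio can understate the real gap:

```python
naive = 10.0 / 1.84  # dividing raw Tflops: ~5.43x "on paper"
print(round(naive, 2))

# Fold in an assumed 18-40% per-Tflop IPC advantage for the newer GCN:
for ipc_gain in (0.18, 0.40):
    print(round(naive * (1 + ipc_gain), 2))  # ~6.41x to ~7.61x effective
```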

Scott Wasson of AMD already discussed how certain features of Vega could provide a significant performance boost over the Fury X, but only if programmers specifically take advantage of those new capabilities. How are some of you extrapolating any hidden/incremental performance benefits of the Vega/Navi/Navi+ GPU architectures when we haven't seen them in action? These additional benefits aren't baked into the Tflops measurements.

Why don't we let the PS5's games do the talking on whether 8-12Tflops of GCN 5.0/6.0 is a big enough leap?

Some astute PC gamers on this forum will tell you that Tflops figures cannot be compared directly. The 1080 is roughly 21-23% faster in games than the 1070, but the former has an approximately 37% higher Tflops rating. Tflops is an easy number to calculate, but on its own, in a vacuum, it's not sufficient to tell us how much better 2020-2022 PS5 games would look on an 8-10Tflops GPU.
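You can sanity-check that 37% figure yourself from the 1070/1080 paper specs using the peak-FP32 formula (shaders × 2 FLOPs per clock × clock speed); reference boost clocks are assumed here:

```python
def peak_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz * 1e6 / 1e12

gtx1070 = peak_tflops(1920, 1683)  # ~6.46 Tflops at reference boost
gtx1080 = peak_tflops(2560, 1733)  # ~8.87 Tflops at reference boost
print(f"paper Tflops gap: {gtx1080 / gtx1070 - 1:.0%}")  # -> 37%
# ...versus the roughly 21-23% gap measured in actual game benchmarks.
```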