| CrazyGPU said: The Radeon R9 Fury X (275 W, 28 nm) launched in June 2015 as an 8.6-teraflop graphics card with 512 GB/s of bandwidth. The Radeon RX Vega 64 (air-cooled, 295 W, 14 nm FinFET) launched in August 2017 as a 12.7-teraflop card with 484 GB/s of bandwidth, though it uses better compression tech. So in two years and two months, 26 months, AMD improved flops by about 47% with a new architecture.
| Now, the PS4 Pro launched in November 2016 with 4.2 teraflops. Say the PS5 launches in November 2020, 48 months later. If AMD can keep up that pace (and that gets harder and harder) and ships a 7 nm+ GPU, we can expect 48 months × 47% / 26 months ≈ 86.7% more performance: 4.2 TF (PS4 Pro) × 1.867 ≈ 7.84 teraflops, a little more than 4× the standard PS4.
| But say Sony tries harder, with better cooling, like Microsoft did with the One X. The One X launched in November 2017 as a 6-teraflop machine. Again, say the PS5 launches in 2020, three years later: 36 months × 47% / 26 months ≈ 65%, and 6 TF (Xbox One X) × 1.65 ≈ 9.9 teraflops. So I really don't understand on what basis one would expect 15 teraflops from an APU. I said I'm expecting 10-12 teraflops in 2020, and I'm being optimistic. Of course performance can go higher if they go with a discrete GPU.
| Edit: I was checking the Radeon RX Vega 56 specs: 10.5 teraflops, 410 GB/s of bandwidth, 210 W, launched August 2017. Mostly a 4K/30fps graphics card. Meanwhile, the Xbox One X peaks at 172 W in Gears of War 4 (https://www.anandtech.com/show/11992/the-xbox-one-x-review/6). Take away the CPU side, fans, and disks, and what's left for the GPU side of the APU? 100 W? If AMD can cut Vega's power consumption in half within 3 years, you can have a 10-teraflop GPU in the PS5. Or Sony could go the Nvidia route, which is much more complicated.
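For reference, the quoted estimate is just a linear scaling of that 47%-per-26-months rate; here is a minimal sketch of it (the rate, base teraflops, and launch dates are taken straight from the quote above):

```python
def extrapolate_tflops(base_tflops, months_elapsed, rate=0.47, rate_period_months=26):
    """Scale the observed 47%-per-26-months improvement linearly over the elapsed time."""
    return base_tflops * (1 + rate * months_elapsed / rate_period_months)

print(extrapolate_tflops(4.2, 48))  # PS4 Pro (Nov 2016) to Nov 2020: ~7.84 TF
print(extrapolate_tflops(6.0, 36))  # Xbox One X (Nov 2017) to Nov 2020: ~9.9 TF
```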
That's because you're basing your calculations on the wrong constants. Look at it this way instead:
Forget what any one GPU has achieved and how; instead, look at the individual compute units and their clocks.
So start with the PS4 OG.
PS4 OG = 20 CUs (2 deactivated) @ 800 MHz = 1.8 TF (28 nm)
PS4 Pro = 40 CUs (4 deactivated) @ 911 MHz = 4.2 TF (16 nm)
What does that tell you? Exactly doubling the active compute units should theoretically give you 3.6 TF, assuming the clock stays identical. However, because the GPU was also upclocked from 800 MHz to 911 MHz, that figure went from 3.6 TF to 4.2 TF, and that is before any architectural improvements are taken into account.
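As a sanity check, those figures fall straight out of the GCN throughput formula (active CUs × 64 shaders per CU × 2 FLOPs per clock × clock speed); here is a minimal sketch:

```python
def gcn_tflops(total_cus, deactivated_cus, clock_mhz):
    """Peak FP32 teraflops for a GCN-style GPU: active CUs x 64 shaders x 2 FLOPs/cycle x clock."""
    active_cus = total_cus - deactivated_cus
    return active_cus * 64 * 2 * clock_mhz * 1e6 / 1e12

print(gcn_tflops(20, 2, 800))  # PS4 OG:  ~1.84 TF
print(gcn_tflops(40, 4, 911))  # PS4 Pro: ~4.2 TF
```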
Now let's use the XB1X as the base.
XB1X = 44 CUs (4 deactivated) @ 1172 MHz = 6 TF (16 nm)
PS5/XB2 = 88 CUs (8 deactivated) @ 1172 MHz = 12 TF (7 nm)
Now I am even lowballing this, because:
- One constant when moving to a smaller fabrication process is being able to clock higher thanks to better thermal efficiency, so at the very least the next-gen GPUs should be clocked higher than the 1172 MHz seen in the XB1X (see the sketch after this list).
- I am assuming that as many as 8 CUs will be deactivated to improve yields.
- We are not taking any architectural design improvements into account.
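To put a number on the first point, here is the same CU × clock arithmetic applied to an 80-active-CU part at a few clock speeds; the 1300 MHz and 1400 MHz values are purely hypothetical examples of a modest clock bump, not claims about any actual hardware:

```python
# 80 active CUs x 64 shaders x 2 FLOPs/cycle x clock, at a few example clocks.
for clock_mhz in (1172, 1300, 1400):  # 1300/1400 MHz are hypothetical
    tflops = 80 * 64 * 2 * clock_mhz * 1e6 / 1e12
    print(clock_mhz, round(tflops, 1))
# 1172 -> 12.0 TF, 1300 -> 13.3 TF, 1400 -> 14.3 TF
```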
Last edited by Intrinsic - on 21 February 2018