I was wondering how you were still posting, since I thought Australia had already mobilized every firefighter against the bushfires. Kudos to you for doing what you are doing, and stay safe out there!
They do rotating deployments.
So we get sent to the fireground for 5 days, then back home for 2 days' rest, then out again. I fly back out tomorrow.
It does mean I have very limited time for the forums right now, but the fire season won't last forever.
These Ryzen 4000 mobile chips go into laptops with far worse cooling than desktops and consoles. Some high-end gaming laptops have decent cooling, but it's still nowhere near what the consoles will be able to provide, and that will be a major benefit to the consoles. It's also why we keep seeing leaks of 3.2GHz for the CPU: instead of 2.9GHz like in the laptops, it'll probably be 3.2GHz with up to 4.2GHz boost, sustained for much, much longer because of the console cooling capability. Especially in the XBSX, given its design.
We'll see with the PS5, but the dev leaks say the devkit is another jet engine, just like the PS4. I've also seen a few suggestions, based on leaks, that the 36CU GPU at 2.0GHz is actually an RDNA 5700-series part clocked as high as it will go with extra cooling, and that it's only for late-stage dev kits. The thinking is to get RDNA hardware into devs' hands ASAP so they're as far ahead as possible for the 2020 launch, while the real GPU architecture gets finished for the final dev kits later on. I wouldn't be surprised if this is true, because 36 CUs seems like a 2019-launch choice.
It's not unusual for final console hardware to deviate from the early pre-launch dev kits anyway.
Basically, the current devkits are just performance-targeted machines: they take current technology that, regardless of cost, will be "close enough" to next-gen hardware in terms of performance. It does mean a few features may be omitted as a result (e.g. ray tracing).
I thus believe the Vega in the 4000 APUs is actually closer to Navi; it just doesn't go all the way, and is therefore still called Vega.
And the reason the CU count got dropped to 8 could also partly be that bandwidth limitations choke bigger configurations anyway, so putting only 8 of them on the chip makes it significantly smaller without hampering graphics performance much.
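For a sense of scale, here's a quick back-of-the-envelope sketch in Python using the standard transfers-per-second times bus-width formula; the DDR4/LPDDR4X figures are the usual platform maximums, and the RX 560 is just picked as a comparison point:

```python
def bandwidth_gbs(mts, bus_bits=128):
    # transfers per second x bytes moved per transfer across the whole bus
    return mts * 1e6 * (bus_bits / 8) / 1e9

print(bandwidth_gbs(3200))  # dual-channel DDR4-3200  -> 51.2 GB/s
print(bandwidth_gbs(4266))  # LPDDR4X-4266 (128-bit)  -> ~68.3 GB/s

# For scale: even a small discrete card like the RX 560 has ~112 GB/s
# of GDDR5 all to itself, while an APU's iGPU has to share the numbers
# above with the CPU cores.
```

With that little bandwidth to go around, extra CUs would mostly just sit there waiting on memory.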
Nah. Still purely Vega.
AMD's decision to cut back on CUs was actually a cost-saving measure. By cutting CUs but clocking the graphics higher, AMD was able to reduce the number of transistors invested in graphics while still increasing performance by 50% or more.
That then gave AMD the freedom to invest more transistors into the CPU portion of the chip, thereby giving us 8 CPU cores.
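As a rough illustration of the trade, here's the standard GCN throughput arithmetic, assuming the commonly reported iGPU clocks (Vega 10 at ~1.4GHz in the 3700U vs Vega 8 at ~1.75GHz in the 4800U); these are theoretical numbers, not benchmarks:

```python
def gcn_tflops(cus, ghz):
    # GCN: 64 shaders per CU, 2 FLOPs per clock (fused multiply-add)
    return cus * 64 * 2 * ghz / 1000

print(gcn_tflops(10, 1.40))  # ~1.79 TFLOPS (older 3000-series iGPU)
print(gcn_tflops(8, 1.75))   # ~1.79 TFLOPS from 20% fewer CUs
```

Same ballpark throughput from a noticeably smaller GPU block; each CU delivers ~25% more from the clock bump alone, with the rest of the claimed per-CU gains presumably coming from the faster LPDDR4X memory and 7nm efficiency tuning.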
The older Vega iGPUs in the Ryzen 2000 and 3000 APUs were certainly bandwidth-starved, but they were also TDP-starved: you could often increase performance rather significantly by keeping the CPU cores throttled down low so that the APU funneled more of its TDP into the GPU portion of the chip.
And the reason AMD opted for Vega over Navi for its 4000 series was also a cost factor: Navi requires more transistors than Vega for every CU... and Vega is extremely energy-efficient at lower clockrates... I mean, feature-wise it's a dud chip, as AMD's Draw Stream Binning Rasterizer and Primitive Shaders were pretty half-arsed implementations anyway.
So it is eligible to be on PS5/XSX =]
Can't help but wonder how much Vega has actually changed then. Why not give it a different name? In terms of marketing, having a new GCN arch that's much more capable than Vega should be more enticing to buyers than simply saying "we improved Vega quite a bit".
Didn't think of bandwidth starvation. Good point. AMD can surely sell as many APUs and chiplets as they can make, so the smaller they are the better. The only buyers who lose are those who want a no-compromise 1080p/60 APU, but it seems like AMD doesn't want to cross that line, at least not yet.
Vega in the APUs was always a little different from the desktop variants anyway, especially in the video encode/decode department.
It's not a big deviation.
There are two ways to drive performance on a GPU with the same architecture... Increase clockrates or build a larger chip and make everything wider.
AMD opted to increase clockrates this time around.
Take an 8CU part and clock it at 1GHz: that's roughly 1 teraflop of single-precision capability.
Or cut it in half to 4 CUs and clock it at 2GHz: that's also roughly 1 teraflop of single-precision capability... but the GPU takes up significantly less die space.
Clockspeed tends to have another advantage though: it makes the *entire* GPU run faster. So if you were ROP-limited before, the 4CU part at 2GHz could offer a significant improvement on that front, for example.
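Here's that arithmetic as a minimal sketch; the 64-shaders-per-CU and 2-FLOPs-per-clock figures are the standard GCN numbers, and the 16-ROP count is just an illustrative assumption:

```python
def gcn_tflops(cus, ghz):
    # 64 shaders per CU x 2 FLOPs per clock (fused multiply-add)
    return cus * 64 * 2 * ghz / 1000

def fillrate_gpix(rops, ghz):
    # each ROP writes one pixel per clock
    return rops * ghz

print(gcn_tflops(8, 1.0))  # ~1.02 TFLOPS
print(gcn_tflops(4, 2.0))  # ~1.02 TFLOPS: same compute, half the CUs

# But the clocked-up part wins wherever the bottleneck scales with
# frequency rather than width, e.g. pixel fill (16 ROPs assumed):
print(fillrate_gpix(16, 1.0))  # 16 Gpixel/s
print(fillrate_gpix(16, 2.0))  # 32 Gpixel/s at the higher clock
```

Front ends, ROPs and caches all run at the core clock, which is why the narrow-and-fast option can punch above its on-paper teraflops.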