
New AMD GPUs to launch July 7th

LGBTDBZBBQ said:
LOL, next-gen consoles. I thought it was going to be 13-15 TF according to the insider leaks.

Teraflops isn't everything.



--::{PC Gaming Master Race}::--

LGBTDBZBBQ said:
LOL, next-gen consoles. I thought it was going to be 13-15 TF according to the insider leaks.

Those leaks came from users who are not very tech savvy, and not just on VGChartz. I already said it was never, ever happening; be prepared for another mid-gen upgrade.



Pemalite said:

Really? From what I can tell, AMD increased the SIMD width to 32 from GCN's 16 and decreased the wavefront size from 64 on GCN to 32.

So it no longer takes 4 cycles to push a wavefront through, as on GCN; it's now 1 cycle on Navi. If they have a SIMD64 mode, that is news to me.

The Primitive Shaders are handled by the driver's shader compiler. I would assume developers have some aspects of it they can leverage.

From their prior high-level block layout diagrams, it physically looks like 4x SIMD16 vector units, but GCN's programming model (the most important aspect) is actually SIMD64, because the hardware executes a 64-wide wavefront due to its single scalar unit, which handles very important operations such as control flow, branching, and addressing ...

It'd be a disaster for GCN to have a SIMD16 programming model, since 3/4 of its vector units would be deactivated: they couldn't be operated independently without separate scalar units to match. Now that RDNA has 2 scalar units per CU, it can support a SIMD32 programming model thanks to the corresponding ratio of scalar to vector units ...

I wonder if AMD is going to extend RDNA with a separate scalar unit for each vector unit to support a true SIMD16 mode ...

Sure, a driver's internal shader compiler could handle it, but it'd be juicier for a programmer to see the details, and with next gen coming up it'd serve as another excuse to refactor their code to get the most out of it ...
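To make the issue-rate difference concrete, here's a minimal sketch (assuming each SIMD lane retires one work-item per cycle, which is the usual way these figures are quoted):

```python
# Minimal sketch of wavefront issue latency: how many cycles one vector
# unit needs to push a full wavefront through, assuming each SIMD lane
# handles one work-item per cycle.

def cycles_per_wavefront(wavefront_size: int, simd_width: int) -> int:
    """Cycles to issue one wavefront on a single SIMD of the given width."""
    return wavefront_size // simd_width

# GCN: 64-wide wavefronts on a 16-wide SIMD -> 4 cycles.
print("GCN  (wave64 on SIMD16):", cycles_per_wavefront(64, 16), "cycles")

# RDNA/Navi: 32-wide wavefronts on a 32-wide SIMD -> 1 cycle.
print("RDNA (wave32 on SIMD32):", cycles_per_wavefront(32, 32), "cycles")
```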



fatslob-:O said:
Pemalite said:

Really? From what I can tell, AMD increased the SIMD width to 32 from GCN's 16 and decreased the wavefront size from 64 on GCN to 32.

So it no longer takes 4 cycles to push a wavefront through, as on GCN; it's now 1 cycle on Navi. If they have a SIMD64 mode, that is news to me.

The Primitive Shaders are handled by the driver's shader compiler. I would assume developers have some aspects of it they can leverage.

From their prior high-level block layout diagrams, it physically looks like 4x SIMD16 vector units, but GCN's programming model (the most important aspect) is actually SIMD64, because the hardware executes a 64-wide wavefront due to its single scalar unit, which handles very important operations such as control flow, branching, and addressing ...

It'd be a disaster for GCN to have a SIMD16 programming model, since 3/4 of its vector units would be deactivated: they couldn't be operated independently without separate scalar units to match. Now that RDNA has 2 scalar units per CU, it can support a SIMD32 programming model thanks to the corresponding ratio of scalar to vector units ...

I wonder if AMD is going to extend RDNA with a separate scalar unit for each vector unit to support a true SIMD16 mode ...

That's not what AnandTech is saying, though.



--::{PC Gaming Master Race}::--

Pemalite said:
LGBTDBZBBQ said:
LOL, next-gen consoles. I thought it was going to be 13-15 TF according to the insider leaks.

Teraflops isn't everything.

No; otherwise Nvidia would already be getting crushed by AMD. It does tell you the theoretical peak performance, but that's rarely achieved in gaming, and AMD's current chips fall far short of it.

However, a console with 13-15 TFLOPS of GCN performance was out of the question from the get-go unless Navi became less power-hungry than Turing. And while Navi certainly improved in that regard, it's still less powerful per watt than Nvidia's offerings.

For the record, I calculated the theoretical TFLOPS for the 5700 and the 5700 XT and got almost exactly 7.5 TFLOPS and 9 TFLOPS respectively at their game clocks, so I'm sure they chose those clock speeds for exactly that reason. And considering the power consumption of either card, I think the 5700 (non-XT) is pretty much what will fuel the next-gen consoles in one way or another.
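For reference, the arithmetic behind those figures (a quick sketch using the standard peak-FP32 formula; the clocks are the announced game clocks, which is presumably what the quoted gaming-speed numbers refer to):

```python
# Peak FP32 throughput = CUs x 64 lanes x 2 FLOPs per lane per cycle (FMA) x clock.

def peak_tflops(cus: int, clock_mhz: int) -> float:
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

# Announced game clocks: 1625 MHz (RX 5700), 1755 MHz (RX 5700 XT).
print(f"RX 5700   : {peak_tflops(36, 1625):.2f} TFLOPS")  # ~7.49
print(f"RX 5700 XT: {peak_tflops(40, 1755):.2f} TFLOPS")  # ~8.99
```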




RTG has needed a full node advantage just to hope to match Nvidia in efficiency ever since Maxwell. But now, with the competition's ray-tracing cores, they're lagging even further behind than in the Polaris days. Catastrophic.




Bofferbrauer2 said:
Pemalite said:

Teraflops isn't everything.

No; otherwise Nvidia would already be getting crushed by AMD. It does tell you the theoretical peak performance, but that's rarely achieved in gaming, and AMD's current chips fall far short of it.

However, a console with 13-15 TFLOPS of GCN performance was out of the question from the get-go unless Navi became less power-hungry than Turing. And while Navi certainly improved in that regard, it's still less powerful per watt than Nvidia's offerings.

For the record, I calculated the theoretical TFLOPS for the 5700 and the 5700 XT and got almost exactly 7.5 TFLOPS and 9 TFLOPS respectively at their game clocks, so I'm sure they chose those clock speeds for exactly that reason. And considering the power consumption of either card, I think the 5700 (non-XT) is pretty much what will fuel the next-gen consoles in one way or another.

Likely a cut-down version.



Random_Matt said:
Bofferbrauer2 said:

No; otherwise Nvidia would already be getting crushed by AMD. It does tell you the theoretical peak performance, but that's rarely achieved in gaming, and AMD's current chips fall far short of it.

However, a console with 13-15 TFLOPS of GCN performance was out of the question from the get-go unless Navi became less power-hungry than Turing. And while Navi certainly improved in that regard, it's still less powerful per watt than Nvidia's offerings.

For the record, I calculated the theoretical TFLOPS for the 5700 and the 5700 XT and got almost exactly 7.5 TFLOPS and 9 TFLOPS respectively at their game clocks, so I'm sure they chose those clock speeds for exactly that reason. And considering the power consumption of either card, I think the 5700 (non-XT) is pretty much what will fuel the next-gen consoles in one way or another.

Likely a cut-down version.

I expect the GPU in the next-gen consoles to have 36-44 CUs (if we can still talk about CUs with RDNA), but at a lower clock speed than the retail graphics cards, and integrated as the GPU part of an APU.

If it's not an APU design, then I don't expect AMD to change the chip at all; it's going to work with GDDR6 either way, whether it gets it all to itself or has to share it with the CPU. They'll probably just lower the clock speeds a bit to ensure stable, predictable performance.
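If the consoles do keep the 5700 cards' memory setup, the shared bandwidth works out as below (a sketch; the 256-bit bus at 14 Gbps matches the RX 5700 cards, but a console using the same configuration is an assumption, not a confirmed spec):

```python
# Peak GDDR6 bandwidth for a 256-bit bus at 14 Gbps per pin.
# These figures match the RX 5700 cards; a console reusing the same
# setup is an assumption, not a confirmed spec.

BUS_WIDTH_BITS = 256
DATA_RATE_GBPS = 14  # per pin

bandwidth_gb_s = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8  # bits -> bytes
print(f"Peak shared bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 448 GB/s
```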

haxxiy said:

RTG has needed a full node advantage just to hope to match Nvidia in efficiency ever since Maxwell. But now, with the competition's ray-tracing cores, they're lagging even further behind than in the Polaris days. Catastrophic.

Not totally sure about that.

I do think they still lag behind, but by much less than you'd think at first. After all, the 225 W of the 5700 XT is not the TDP but the total board power, meaning it includes everything on the board, such as the VRAM. It's quite possible the TDP of the chip alone is closer to 180 W, and thus in the ballpark of the RTX 2070 (175 W TDP). But we'll have to wait for power consumption benchmarks to get a clear picture here.
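A back-of-the-envelope split of that 225 W board power (the VRAM, fan, and VRM figures below are illustrative assumptions, not measured values):

```python
# Rough split of total board power (TBP) into GPU-only power.
# The VRAM draw, fan/misc draw, and VRM efficiency are assumed figures.

TBP_W = 225            # AMD's stated total board power, RX 5700 XT
VRAM_W = 20            # assumed draw of 8 GB GDDR6 (hypothetical)
FAN_MISC_W = 10        # assumed fan + other board components (hypothetical)
VRM_EFFICIENCY = 0.90  # assumed voltage-regulator efficiency (hypothetical)

chip_w = (TBP_W - VRAM_W - FAN_MISC_W) * VRM_EFFICIENCY
print(f"Estimated GPU-only power: ~{chip_w:.0f} W")  # ~176 W
```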

As for ray tracing: considering that right now you'd rather play without ray tracing if you want the best graphics and good framerates, since it's simply too demanding even for a 2080 Ti, I don't mourn its absence. Navi does have the option to do ray tracing in its shaders, but it's just not worth it yet, on a Radeon or an RTX.

On the other hand, Navi has an auto-sharpener that makes the picture even crisper at almost no performance cost (AMD states 1%), which is better than anything DLSS can do in theory (and everything is better than DLSS in practice; seriously, DLSS should stand for Doesn't Look Smooth and Sharp).

Last edited by Bofferbrauer2 - on 11 June 2019

Bofferbrauer2 said:
Pemalite said:

Teraflops isn't everything.

No; otherwise Nvidia would already be getting crushed by AMD. It does tell you the theoretical peak performance, but that's rarely achieved in gaming, and AMD's current chips fall far short of it.

However, a console with 13-15 TFLOPS of GCN performance was out of the question from the get-go unless Navi became less power-hungry than Turing. And while Navi certainly improved in that regard, it's still less powerful per watt than Nvidia's offerings.

For the record, I calculated the theoretical TFLOPS for the 5700 and the 5700 XT and got almost exactly 7.5 TFLOPS and 9 TFLOPS respectively at their game clocks, so I'm sure they chose those clock speeds for exactly that reason. And considering the power consumption of either card, I think the 5700 (non-XT) is pretty much what will fuel the next-gen consoles in one way or another.

Yep, the Radeon 5700 is 36 CUs; that's what's going into the PS5, clocked at 1.8 GHz for 8.3 teraflops. And I think Microsoft's Xbox Anaconda will have the same CU count as the Xbox One X, 44 of them but with 4 disabled, probably clocked at 1750 MHz for 8.9 teraflops. Both consoles will probably be very similar.
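Plugging those speculated configurations into the same peak-FLOPS formula as above (the CU counts and clocks here are Trumpstyle's guesses, not confirmed console specs):

```python
# Peak FP32 throughput = CUs x 64 lanes x 2 FLOPs per lane per cycle x clock.

def peak_tflops(cus: int, clock_mhz: int) -> float:
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

# Speculated configs: PS5 = 36 CUs @ 1800 MHz, Anaconda = 40 active CUs @ 1750 MHz.
print(f"PS5 guess     : {peak_tflops(36, 1800):.2f} TFLOPS")  # ~8.29
print(f"Anaconda guess: {peak_tflops(40, 1750):.2f} TFLOPS")  # ~8.96
```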

Last edited by Trumpstyle - on 11 June 2019

6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

The XT Anniversary Edition is tempting, actually; I may consider it. Or I may even wait for Ampere.