JEMC said:

And Pemalite, I don't know what AMD would get from going back to a previous design model. Wouldn't it be easier to do as Nvidia and add some compute cores (Tensor cores) like they're already doing with the RT ones?
The enterprise cards based on CDNA could probably benefit the most from going with a more compute capable architecture design.

I think "regressing" the design to a previous model is the wrong way to look at it, so I could have worded things a little better.

VLIW, or "Very Long Instruction Word", tends to be very compiler-heavy. Think of it as sort of like "Hyper-Threading" in a way: if you can keep all those execution slots fed and busy (hence why it's compiler-heavy), then you can obtain some very impressive throughput.

AMD went from VLIW5 to VLIW4 (5 slots per pipeline down to 4) with the move from the Radeon HD 5800 to HD 6900 series, because they saw a change in how games were being rendered and one of those slots was often being underutilized.
Eventually GCN would abandon it completely.

However, the way it works is that each slot tends to handle a specific type of workload.

So on a VLIW5 cluster you could have one slot optimized for special functions, which is also the only one able to handle integer multiplies.
And the others would work on simpler integer operations.

VLIW is very effective at extracting parallelism without increasing core complexity significantly.
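To make the compiler-heavy part concrete, here's a toy Python sketch of static VLIW5 scheduling. The slot layout is an assumption for illustration (one "special" unit that alone handles transcendentals and integer multiplies, plus simple ALU slots), not the actual TeraScale ISA: the compiler must pack independent ops into bundles ahead of time, and any bundle it can't fill is wasted throughput.

```python
# Toy model of VLIW5 scheduling. Assumed slot layout (not real TeraScale):
# one "special" slot per bundle handles transcendentals and integer
# multiplies; the remaining slots only take simple ALU ops.
SPECIAL_ONLY = {"sin", "cos", "rsqrt", "imul"}

def schedule(ops):
    """Greedily pack a stream of op names into VLIW5 bundles.

    A bundle holds at most 5 ops and at most 1 special-unit op,
    so two multiplies can never share a bundle.
    """
    bundles = []
    bundle = []
    special_used = False
    for name in ops:
        needs_special = name in SPECIAL_ONLY
        # Start a new bundle when this one is full, or when the
        # single special slot is already taken.
        if len(bundle) == 5 or (needs_special and special_used):
            bundles.append(bundle)
            bundle, special_used = [], False
        bundle.append(name)
        special_used = special_used or needs_special
    if bundle:
        bundles.append(bundle)
    return bundles

# A shader-like stream with two multiplies: they can't co-issue,
# so the compiler emits a second, mostly-empty bundle.
for b in schedule(["add", "add", "imul", "sub", "imul", "add"]):
    print(b, f"({len(b)}/5 slots filled)")
```

The second bundle here runs with most of its slots empty, which is exactly the kind of underutilization that pushed AMD from VLIW5 to VLIW4, and eventually to GCN's scalar/SIMD model.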

What AMD might be trying to accomplish (again, if the rumors in my enthusiast circles pan out) is building its RDNA cores to be a little more flexible, so that each core might be capable of handling two workloads.
Think: Rasterization+Ray Tracing in tandem. Or Rasterization+Tensor operations. - Which means that as core counts scale, so does the performance of those aspects.

Of course this is all rumor at this stage.

Bofferbrauer2 said:
Pemalite said:

7700XT should be roughly in line with the 6800XT I would imagine... So not a catastrophic leap over the current Series X/Playstation 5 unless they chase clockrates.

The big boon will of course be in Ray Tracing... And maintaining higher framerates/resolutions where the current gen consoles struggle.

Expectations are 6900XT-like performance. And probably also needed to keep up with NVidia.

Pemalite said:

Also along the PC enthusiast rumor mill (grain of salt) is that AMD will be going back to the VLIW paradigm with RDNA3, the one they defined with the Radeon HD 2000/3000/4000/5000/6000 series. But it will be more along the lines of VLIW2 rather than VLIW5 or VLIW4... And that would make these parts extremely compelling from a compute standpoint.

I think that's for CDNA, not RDNA. Those would definitely profit more from an increased compute performance than RDNA cards.

Possibly. But keep in mind that Ray Tracing is a very compute demanding operation.

JEMC said:
Bofferbrauer2 said:

I actually think it has to do with the fact that AMD CPUs with a TDP of 105W tend to pull quite a bit more than that at full load, generally 130-140W. Meanwhile, Threadripper CPUs actually don't exceed the 280W they're rated for. I think what AMD is doing is making the TDP more honest and closer to the real power draw, as they do on the Threadripper series.

Of course, the higher clock speeds and integrated graphics also make for the number to be higher than it would have been with the previous generations, where 140W would have been a truly honest TDP.

I hope you're right and that AMD is moving in a more honest direction, but we'll see.

It will be good for the entire industry if they do.

Still though, if Ryzen 7000 has better idle power consumption, then total power consumption will drop anyway... As a PC typically spends most of its time at idle.

I am okay with 200W+ TDPs provided I get the performance to go with it.



--::{PC Gaming Master Race}::--