HollyGamer said:
Pemalite said:

We already have Zen 2 with cut-down caches and various chips with differing clock rates.
The Ryzen 5 3500 has half the cache of the 3500X, for example, and the mobile Ryzen 4000 series has significantly less cache again.

Jaguar was AMD's absolute WORST CPU at a time when AMD's entire CPU lineup was garbage.

I would need a link to the Spencer quote... Because if he is using the Xbox One X as the comparison point, that likely includes all the improved CPU capability as Microsoft shifted a heap of processing to the command processor on the GPU portion of the SoC.

Not to mention there are some instructions that Ryzen can do that will simply be multiples better than Jaguar, things like AVX2.

 "...Microsoft stated that the console CPU will be four times as powerful as Xbox One X;..."

https://en.wikipedia.org/wiki/Xbox_Series_X#cite_note-gamespot_series_x-8

https://www.windowscentral.com/xbox-series-x-specs

https://www.gamespot.com/articles/goodbye-project-scarlett-hello-xbox-series-x-exclu/1100-6472190/
https://web.archive.org/web/20191213021815/https://www.gamespot.com/articles/goodbye-project-scarlett-hello-xbox-series-x-exclu/1100-6472190/

Not that I didn't believe you, but cheers.
Watched the video and it is indeed 4x over the Xbox One X, but they didn't go into any specifics.

But 4x the performance of the Jaguar 2.3GHz 8-core CPUs is a lowball in my opinion... And it doesn't specify what kind of workload. But if they are accounting for the command processor that offloads CPU duties, then it would seem more plausible.

For example, Zen+ is 137% faster than Kabini per core, per clock cycle in integer-heavy workloads, and Zen 2 increases that gap considerably... Then add higher clock rates and thread counts on top of that and things get interesting really quickly.
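For what it's worth, here's a rough back-of-envelope sketch in Python of where that multiplier could land. The ~3.5 GHz Zen 2 clock and the ~2.37x per-core, per-clock uplift (the "137% faster" figure above, which is Zen+ vs Kabini rather than Zen 2) are assumptions for illustration, not confirmed console specs.

```python
# Rough back-of-envelope estimate (all figures are assumptions, not confirmed specs):
# - Xbox One X: 8-core Jaguar at 2.3 GHz
# - Hypothetical Zen 2 console CPU: 8 cores at ~3.5 GHz
# - Per-core, per-clock uplift: ~2.37x (the "137% faster" Zen+ vs Kabini figure above),
#   treated here as a stand-in for Zen 2.

jaguar_clock_ghz = 2.3
zen2_clock_ghz = 3.5          # assumed
ipc_uplift = 2.37             # assumed, from the Zen+ vs Kabini integer figure
cores = 8                     # same core count on both

per_core_gain = (zen2_clock_ghz / jaguar_clock_ghz) * ipc_uplift
print(f"Estimated per-core gain: {per_core_gain:.1f}x")   # ~3.6x

# With SMT, extra threads and wider vector units (AVX2) on top of that,
# a "4x the Xbox One X CPU" claim looks conservative rather than optimistic.
```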

SvennoJ said:

Pemalite said:

Not a dedicated chip. It only has the one chip, the main SoC.
The Ray Tracing is done on dedicated Ray Tracing cores on the main chip; it's part of the GPU... Which is why FLOPS is a joke, as FLOPS doesn't account for the Ray Tracing capabilities.

Not entirely accurate.
Historically, what the industry has done is build a dedicated DSP/ASIC/FPGA as a separate chip (aka a Ray Processing Unit) to handle Ray Tracing duties. Granted, this was for more professional markets... But the point remains.
Dedicated Ray Tracing chips have existed even as far back as the late '90s/early 2000s.

https://en.wikipedia.org/wiki/Ray-tracing_hardware

What I was hinting at is that there are more ways to do ray tracing, and dedicated hardware can help or can hinder innovation. Or rather, there is no simple switch that adds ray tracing to a game by turning the chip on. Plenty of other things need to be done (which will slow down the rest) to make the best use of the ray tracing cores. But it will help. Software-only ray tracing would severely restrict the resolution to make it feasible (or need a lot of shortcuts, making it far less impressive).
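To put a number on that resolution point, here's a minimal sketch of the ray-budget arithmetic; the resolution, frame rate, and rays-per-pixel figures are illustrative assumptions only.

```python
# Illustrative ray-budget arithmetic (all figures are assumptions):
# how many rays per second a single bounce at a given resolution and frame rate needs.

width, height = 1920, 1080     # assumed target resolution
fps = 60                       # assumed frame rate
rays_per_pixel = 1             # primary rays only: no shadows, reflections or GI

rays_per_second = width * height * fps * rays_per_pixel
print(f"Rays needed per second: {rays_per_second / 1e9:.2f} billion")   # ~0.12 billion

# Even this minimal budget is hard to hit in software on shader cores alone,
# and realistic effects need several rays per pixel plus BVH traversal,
# which is exactly the work dedicated ray tracing cores accelerate.
```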

Anyway as long as I can still easily tell the difference between my window and the tv, not there yet :)

Bandwidth and cache contention are very real issues in GPUs, so you are right that engaging things like Ray Tracing can introduce new bottlenecks into a GPU design that will bring down performance in other areas.

The dedicated Ray Tracing cores simply offload the processing that would have been done on your typical shader cores, which can thus keep doing their own rasterization duties, so the performance hit is less significant.
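As a toy illustration of that offloading argument (all the millisecond figures and the contention penalty are made-up assumptions, not measurements):

```python
# Toy frame-time budget (milliseconds, purely illustrative numbers) showing why
# offloading ray traversal to dedicated cores reduces the hit on rasterization.

raster_ms = 12.0        # assumed shader-core cost of rasterizing the frame
rt_ms = 8.0             # assumed cost of ray traversal/intersection work

# Software path: ray tracing competes with rasterization for the same shader cores.
software_frame_ms = raster_ms + rt_ms

# Hardware path: dedicated RT cores run traversal alongside shading,
# minus some overlap lost to shared bandwidth/cache contention (assumed 20%).
contention_penalty = 0.2 * rt_ms
hardware_frame_ms = max(raster_ms, rt_ms) + contention_penalty

print(f"Software-only: {software_frame_ms:.1f} ms, with RT cores: {hardware_frame_ms:.1f} ms")
```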

In saying that, like all things rendering, there is a balance to be found. It will be interesting to see how Ray Tracing is leveraged next-gen, considering it's one of the long-sought-after crown jewels of game rendering, but I think it's the 10th-gen console era where the technology will come into its own.

Last edited by Pemalite - on 20 February 2020

--::{PC Gaming Master Race}::--