
DDR5 Could Reach The Consumer Market As Early As 2019

Pemalite said:
Bofferbrauer2 said:

Probably GDDR6, as it theoretically consumes much less power for the same bandwidth compared to GDDR5(X). That leaves quite a few more watts to spend on other things like the CPU or GPU.

Not to mention the increase in chip densities, which will be important if you want to aim for 16GB/24GB/32GB of DRAM in the next gen and still be affordable.

Bofferbrauer2 said:

Even then, in dual channel we're still below 60GB/s, which is barely enough for 16 CUs at most without starving them to death on bandwidth. There's a reason why the APUs stayed at 8 CUs for so long and are only going up to 10 with Raven Ridge now.

DDR5 won't start at that speed. Just as DDR4 started off at 1866, I expect DDR5 to start off as low as 4000, or even just 3600, while it would need 6400 for the 51.2GB/s per channel. So it will take a while until that bandwidth is actually available for APUs, and by that time their standing compared to high-end GPUs won't budge very much - it might even drop further down the line, depending on how fast the top of the line evolves.

You are only comparing raw bandwidth, which is going to be inaccurate.

nVidia and AMD are constantly inventing new technologies to save bandwidth, you know - various forms of culling and compression.
So your "16 CU maximum" claim is without basis.

The main limiter for why APU CU counts haven't exploded is entirely down to cost... AMD typically reserves roughly 50% or more of the die space on an APU... entirely for the GPU. And that's with current CU counts on chips like Kaveri with up to 8 CUs. 8.
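
(As a quick sanity check on the per-channel bandwidth figures being quoted above - this is just transfer rate times channel width, assuming standard 64-bit DDR channels; a rough Python sketch, not a benchmark:)

# Peak DRAM bandwidth per channel: transfer rate (MT/s) x channel width (bytes)
def channel_bandwidth_gbs(mt_per_s, width_bits=64):
    return mt_per_s * (width_bits // 8) / 1000  # GB/s

print(channel_bandwidth_gbs(3733) * 2)  # ~59.7 GB/s - dual-channel DDR4-3733 still lands below 60 GB/s
print(channel_bandwidth_gbs(6400))      # 51.2 GB/s - the DDR5-6400 per-channel figure mentioned above
print(channel_bandwidth_gbs(3200) * 2)  # 51.2 GB/s - dual-channel DDR4-3200, for comparison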

This could have held true until Bristol Ridge, but with Raven Ridge the jump should have been bigger than just going up to 10 NCUs.

I'm not just comparing raw bandwidth. While I didn't mention it, I'm also taking the clock speeds and GCN evolutions into account. The GPU clocks in the APUs have risen substantially since Trinity (I'm leaving out Llano because its VLIW5 design isn't really comparable with its successors). Trinity came with 800MHz VLIW4, Kaveri with 866MHz GCN 2, and Bristol Ridge pushed it to 1108MHz GCN 3. That's a roughly 40% increase in clock speed and an over 60% increase in performance, and hence throughput, without increasing the actual number of Compute Units. So while I believe the number of CUs will only increase by 60%, the performance and throughput should rise at the same time by about 100%, if not more. I'm expecting those 16 CUs to run at close to 1500MHz, for instance, and GCN to be much more performant by that time than it is now.
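
(The throughput side of that argument is easy to put rough numbers on with the usual GCN formula - CUs x 64 shaders x 2 FLOPs per clock x clock; the 16 CU / 1500MHz entry is my speculation, not an announced part. A quick Python sketch:)

# Theoretical single-precision throughput of a GCN iGPU
def gcn_tflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz / 1e6  # TFLOPS

print(gcn_tflops(8, 866))    # Kaveri:        ~0.89 TFLOPS
print(gcn_tflops(8, 1108))   # Bristol Ridge: ~1.13 TFLOPS (~28% over Kaveri from clocks alone)
print(gcn_tflops(16, 1500))  # hypothetical 16 CU @ 1500MHz: ~3.07 TFLOPS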



KBG29 said:

People also said PS4 would only have 3 or 4GB of RAM, and most thought you were outlandish if you predicted 8GB. And as far as increases in memory density go, I keep reading articles saying we are going to see massive leaps in memory and storage tech over the next couple of years, which should lead to much more RAM than people are expecting, plus SSD storage.

Yeah nah. People claiming 3-4GB of RAM for the PS4 wasn't as outlandish as claiming next gen is going to have 128GB.
You seem to be conflating two things... The technology existed for the Playstation 4 to economically have 8GB of RAM... The technology is NOT going to exist for consoles to economically have 128GB of RAM.
I tend to be correct in my predictions about the technology that ends up in consoles... if you care to read back on the Xbox One, Playstation 4, Xbox One X, Playstation 4 Pro and the Nintendo Switch.

Besides. The Playstation 4 *was* only going to have 4GB of memory at one point. Even the developer kits only had 4GB.
https://www.videogamer.com/news/ps4s-8gb-ram-was-kept-secret-from-third-party-devs-until-console-reveal

Why? Because it wasn't until 2013 that higher-density 4 Gb memory chips became available... In case you aren't aware, the Playstation 4 uses 16 memory chips.
Which means that if they had used the half-density (2 Gb) chips that were all that was available prior to 2013... then the console would only have had 4GB of RAM.

https://en.wikipedia.org/wiki/GDDR5_SDRAM#Commercial_implementation
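
(The chip-count arithmetic behind that, using the gigabit densities from the GDDR5 timeline linked above - a quick Python sketch:)

# Total capacity = number of packages x density per package (gigabits) / 8 bits per byte
def total_capacity_gb(chips, density_gbit):
    return chips * density_gbit / 8  # GB

print(total_capacity_gb(16, 2))  # 16 x 2 Gb chips -> 4 GB (the pre-2013 density)
print(total_capacity_gb(16, 4))  # 16 x 4 Gb chips -> 8 GB (what the retail PS4 shipped with)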

KBG29 said:

I believe PS5 will be an 8K console to push 8K TVs, which are going to start getting a huge push over the next few years.


8K is a bit far-fetched when the base Playstation 4 struggles to hit 1080p in every title... and the Playstation 4 Pro rarely hits 4K... and thus relies on various tricks to make a "fake" 4K image.

Besides. We are only in the starting phase of transitioning to 4K; 8K is a pipedream that is a long way off... And we will also have 5K as an interim step. (4x Quad HD, anyway?)
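
(For scale, the raw pixel counts involved - a back-of-the-envelope Python sketch using the usual 16:9 resolutions:)

# Pixels per frame for common 16:9 resolutions, relative to 1080p
resolutions = {
    "1080p":   (1920, 1080),
    "Quad HD": (2560, 1440),
    "4K UHD":  (3840, 2160),
    "5K":      (5120, 2880),
    "8K UHD":  (7680, 4320),
}
base = 1920 * 1080
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} MPixels ({w * h / base:.1f}x 1080p)")
# 8K works out to 16x the pixels of 1080p and 4x the pixels of 4K,
# while 5K is exactly 4x Quad HD.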

KBG29 said:

 It will also need to support 4K per eye VR. Both of these will require this much larger GDDR RAM pool.

VR doesn't seem to be getting the traction most thought it would.

I doubt VR is ever going to become a "thing". Companies are closing, the games have been "meh" at best... and the price of entry has made it viable only for those who are genuinely interested in the technology.

KBG29 said:

 Both of these will require this much larger GDDR RAM pool. The DDR RAM pool will need to be massive as well, as PS5 will continue to push content creation, and allow much more multitasking, with stuff like video editing being possible at all times.


Consoles and Content Creation? Nope.
Video editing with a console controller? Nope.
Multi-tasking like what you see on a PC using a controller? Nope.

The Xbox One tried multi-tasking, implemented features like Snap, voice and gesture controls and so on... It gets in the way of why you buy these devices to begin with... to play video games. Consequently, Microsoft has been backtracking on its multi-tasking features like Snap in a quest to free up memory and processing.

Bofferbrauer2 said:

This could have held true until Bristol Ridge, but with Raven Ridge the jump should have been bigger than just going up to 10 NCUs.

 

Raven Ridge has 11 CUs.
11 x 64 = 704 shaders.
It should drive up the clock rates though.

And that is enough to compete with the Playstation 4, which is only GCN 1.0 (albeit customized) and is regarded as inefficient by modern standards.

We have certainly come a long way since everyone was saying the Playstation 4 was the fastest platform in existence because no other platform had 8GB of GDDR5 RAM. :P



--::{PC Gaming Master Race}::--

Yes, I'm so ready to buy a new PC.



REQUIESCAT IN PACE

I Hate REMASTERS

I Hate PLAYSTATION PLUS

Pemalite said:
Bofferbrauer2 said:

This could have held true until Bristol Ridge, but with Raven Ridge the jump should have been bigger than just going up to 10 NCUs.

 

Raven Ridge has 11 CUs.
11 x 64 = 704 shaders.
It should drive up the clock rates though.

And that is enough to compete with the Playstation 4, which is only GCN 1.0 (albeit customized) and is regarded as inefficient by modern standards.

We have certainly come a long way since everyone was saying the Playstation 4 was the fastest platform in existence because no other platform had 8GB of GDDR5 RAM. :P

AMD has only announced Vega 10 and Vega 8 Raven Ridge so far, so with 10 and 8 CUs. While it's rumored to have 11 CUs, there hasn't been any proof of this yet. Even then, the jump is too small compared to the space gained from the shrink from 28nm to 14nm for it not to be limited by something else. I doubt it's the TDP, judging by the mobile Polaris chip, the RX 480M, as that one has 16 CUs at only 35W TDP. Which only leaves the bandwidth, as it can't feed more than that without starving the CUs to death.
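
(To put rough numbers on the starvation argument - the PS4 bandwidth is the well-known 256-bit GDDR5 figure, the DDR4 numbers are standard dual-channel speeds, and the 11/16 CU counts are the rumored/hypothetical ones discussed here. A quick Python sketch:)

# Memory bandwidth available per compute unit
def gbs_per_cu(bandwidth_gbs, cus):
    return bandwidth_gbs / cus

print(gbs_per_cu(176.0, 18))  # PS4, 256-bit GDDR5 @ 5500 MT/s: ~9.8 GB/s per CU
print(gbs_per_cu(38.4, 11))   # dual-channel DDR4-2400, 11 CU:  ~3.5 GB/s per CU
print(gbs_per_cu(51.2, 16))   # dual-channel DDR4-3200, 16 CU:   3.2 GB/s per CU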

Enough to compete with the PS4? With the XBO for sure, but for the PS4, at 11 CUs it would nominally need a stable 1300MHz clock speed just to match the console.

While GCN 1 is pretty inefficient compared to GCN 4 (Polaris) and 5 (Vega), there's virtually no gain in base performance outside of new DX12 instructions, which no game uses yet. An R7 260X (Bonaire, GCN 2) is virtually the same as an RX 460 (Baffin, GCN 4) when comparing the 2GiB models in DX11 games (I picked these cards because they are the most easily comparable, with the same number of CUs, the same amount of VRAM, and nearly identical clock speeds - only a 10MHz difference). This shows that while GCN got more efficient over the years, the performance of the GCN units hasn't changed much. So while Raven Ridge can be roughly on par with a PS4 in graphical prowess, it's not enough yet to outshine the console in that domain. My guess is about 1.5 TFLOPS, in between XBO and PS4 in calculating power.
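
(Both figures - the ~1300MHz needed to match the PS4 and the ~1.5 TFLOPS guess - fall straight out of the same GCN FLOPS formula; the 11 CU configuration is still the rumored one. A quick Python sketch:)

# Clock an 11 CU GCN iGPU would need to hit a given TFLOPS target
def needed_clock_mhz(target_tflops, cus=11):
    return target_tflops * 1e6 / (cus * 64 * 2)

ps4_tflops = 18 * 64 * 2 * 800 / 1e6  # 18 CU @ 800MHz -> ~1.84 TFLOPS
xbo_tflops = 12 * 64 * 2 * 853 / 1e6  # 12 CU @ 853MHz -> ~1.31 TFLOPS
print(needed_clock_mhz(ps4_tflops))   # ~1309 MHz at 11 CU just to match the PS4
print(needed_clock_mhz(1.5))          # ~1065 MHz at 11 CU for the 1.5 TFLOPS estimate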



Bofferbrauer2 said:

AMD has only announced Vega 10 and Vega 8 Raven Ridge so far, so with 10 and 8 CUs. While it's rumored to have 11 CUs, there hasn't been any proof of this yet.

There have been leaked benchmarks, roadmaps and other information. There is a 50+ page thread over on Anandtech about it.

It's like when the Zen, Bulldozer and Phenom CPUs were being leaked: people disregarded the information even though those leaks had a ton of supporting evidence and turned out to be legitimate.
When enough of something gets leaked from more than one source, you can start taking it seriously to a degree.


Bofferbrauer2 said:

Enough to compete with the PS4? With the XBO for sure, but for the PS4, at 11 CUs it would nominally need a stable 1300MHz clock speed just to match the console.

You are basing that on flops. I can elaborate on why that is not an accurate way to go about things. But do I really need to?

Bofferbrauer2 said:

While GCN 1 is pretty inefficient compared to GCN 4 (Polaris) and 5 (Vega), there's virtually no gain in base performance outside of new DX12 instructions, which no game uses yet.

 Blatantly false.

Bofferbrauer2 said:

An R7 260X (Bonaire, GCN 2) is virtually the same as an RX 460 (Baffin, GCN 4) when comparing the 2GiB models (I picked these cards because they are the most easily comparable

GCN2 to GCN4 isn't the same as GCN1 to GCN5.

Bofferbrauer2 said:

 My guess is about 1.5 TFLOPS, in between XBO and PS4 in calculating power.

Again with the flops?



--::{PC Gaming Master Race}::--