
Forums - Sony Discussion - What do you think will be the hardware specifications of the PS5 if it arrives around 2019-2020?

fatslob-:O said:
Bofferbrauer2 said:

You do.

Just for comparison's sake: RX Vega has less than 500 GB/s even in the liquid-cooled version, which clocks in at 13.7 TFLOPS. On 7nm, that will be about the maximum a console can reach before getting too expensive, even by 2020-2021.

What you possibly failed to see is that in the last couple of years several techniques have been developed to limit the bandwidth consumption of GPUs, with low-level programming toning it down further (fewer draw calls and unnecessary transfers). Without those, the Switch would be choked by its bandwidth, as would smart devices and APUs. And GPUs too, because then we would already need more than 1 TB/s of bandwidth in high-end GPUs.

The Switch is choked by its bandwidth, and so are high-end graphics cards ... 

"Draw calls" are mostly hype: reducing them helps on the CPU side but not so much on the GPU side, and low-level programming mostly improves shader throughput. If you're hard limited by bandwidth, there's very little a GPU could offer in terms of hardware feature set to alleviate it ... (aside from maybe some compression formats or data-packing tricks)

But not nearly as much as you think, especially not in consoles, where limits are put on frame rates. If you Vsync a GTX 1080 at 60 FPS, bandwidth becomes basically irrelevant, as the card doesn't come close to being choked by it anymore.

Besides, at 4K, bandwidth is negligible, as the raw power of the GPU is the limiting factor by then. Even a 10% bandwidth increase often yields only a 1-2% performance increase at 4K, if any at all.
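A quick back-of-the-envelope sketch of the frame-cap argument above (spec-sheet numbers, not measurements): capping the frame rate also caps how much memory traffic each frame is allowed to generate.

```python
# Capping frame rate caps per-frame bandwidth demand.
# 320 GB/s is the GTX 1080's quoted peak memory bandwidth (GDDR5X, 256-bit bus).

GTX_1080_BANDWIDTH_GBS = 320.0

def bytes_per_frame(bandwidth_gbs: float, fps: float) -> float:
    """GB of memory traffic available to each frame at a given frame rate."""
    return bandwidth_gbs / fps

# Vsynced at 60 FPS each frame can move ~5.3 GB; at 144 FPS only ~2.2 GB.
print(f"60 FPS budget:  {bytes_per_frame(GTX_1080_BANDWIDTH_GBS, 60):.1f} GB/frame")
print(f"144 FPS budget: {bytes_per_frame(GTX_1080_BANDWIDTH_GBS, 144):.1f} GB/frame")
```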




2020 launch.

GPU: not a big jump; they will tout 10 TF, and it will most likely run like an RX 580.
CPU: big jump here, custom 8-core Ryzen.
Memory: 4 GB DDR4 for the system and a dedicated 16 GB of HBM for games, a big jump in bandwidth.

Extras: 2 TB HDD, not true 4K in all games, will use checkerboarding in most big games, will cost $499, full backward compatibility with PS4.



fatslob-:O said:

GDDR5X is most definitely a different standard from GDDR5. Both have different physical packaging, and GDDR5X operates at lower voltages too, so it is not some extension. A hardware vendor can't just slot a GDDR5X module in place of a GDDR5 module, and that's partially why AMD is in a bind: they didn't design Vega to be compatible with GDDR5X, so we're either going to have to wait until 2nd-gen Vega comes, or if not that, then Navi ... 

Personally, I hope we get the GDDR7 standard finalized in 2020 for a 2021 release, so console hardware manufacturers can use it if they launch their consoles during that year ... 

It is a different standard from GDDR5. But it's also an extension of it, with a few features taken from GDDR6.

fatslob-:O said:

If we can't get GDDR7, then hopefully they make do with just GDDR6X by doubling the bandwidth and not touching the densities. I'd be content with just 16 GB, with 4 GB of DDR5 dedicated to the whole background ... 

8 Zen 3+ cores (with AVX-512 + TSX), a successor to the Navi microarchitecture, and at least 1 TB/s (I wanted 2 TB/s) for the PS5 should be the baseline ... (next generation could be our last one)

I doubt GDDR6X is going to be a thing.

fatslob-:O said:

HBM won't be feasible for next-gen consoles by then? HBM technology, which first hit the market with AMD Fiji, will be 5 years old if next-gen consoles release in 2020 and 6 years old if they release in 2021 ...

The age of something is ultimately irrelevant. It's either expensive to manufacture or it's not.
Otherwise you are delving into a logical fallacy.

HBM1, thanks to its interposer, is more expensive than conventional DRAM to manufacture and implement.

fatslob-:O said:

The problem with LPDDR4 is that it's going to go out of production by mainstream memory module manufacturers before next-gen consoles launch, so I'd prefer we adopt the latest standard instead, and if we want lower power consumption out of DDR5 we can just clock it lower ... 

Has LPDDR5 even been ratified yet? It took years for the industry to transition from LPDDR3 to LPDDR4 even in the mobile space.

fatslob-:O said:

No, bandwidth is going to become a massive bottleneck in the coming years, and graphics programmers will have to rethink optimizations on the algorithmic side. The base PS4 has a flops/byte ratio of 10.4, nearly doubled on the PS4 Pro at 18.9, while RX Vega 64 is pegged at an alarming 26.2. 800 GB/s is hardly enough for next generation, and Sony (not just them but the whole industry) will definitely need checkerboard rendering to mitigate the bandwidth deficit ...

I agree. Bandwidth is becoming a big issue, especially at higher resolutions.
Hopefully AMD can actually make its architecture more efficient by then.
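The flops/byte ratios quoted above can be reproduced from public spec-sheet numbers (peak FP32 TFLOPS over peak bandwidth in GB/s); small differences from the quoted figures come down to which exact clock numbers you plug in.

```python
# Flops-per-byte = peak GFLOPS / peak GB/s. Spec-sheet figures, not measurements.
specs = {
    "PS4":        (1.84, 176.0),   # 8 GB GDDR5, 256-bit
    "PS4 Pro":    (4.20, 217.6),   # 8 GB GDDR5, 256-bit
    "RX Vega 64": (12.66, 483.8),  # 8 GB HBM2
}

for name, (tflops, gbs) in specs.items():
    ratio = tflops * 1000 / gbs
    print(f"{name}: {ratio:.1f} flops/byte")
```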

SvennoJ said:

4K adoption is happening, yet caring for 4K content, not so much. I prefer playing SotC at 60 fps; 1080p60 with HDR is fantastic, and I couldn't care less about it not being in 4K. I play GT Sport in 4K or on my old 1080p TV, and the only difference I see is when I look at the far background on some of the tracks. 99% of the time it doesn't matter. HDR makes the difference.

I prefer everything to be in 4K, even on a 1080p display. Why? Downsampling.

And even if you are still running an antiquated 1080p display, once you upgrade to 1440p or 4K you get a free boost to visuals.


SvennoJ said:

I really hope next gen will target 1080p60 with improved lighting to make the most out of HDR. Yet I'm sure next gen will all be about native 4K.

It really depends on the PC. If PCs aren't doing 4K on mid-range hardware in 2020, then next-gen consoles won't be either.

It's as simple as that.

Bofferbrauer2 said:

Besides, at 4K, bandwidth is negligible, as the raw power of the GPU is the limiting factor by then. Even a 10% bandwidth increase often yields only a 1-2% performance increase at 4K, if any at all.

But once you start throwing in next-gen bandwidth-intensive rendering techniques, bandwidth becomes a factor.
I don't think people understand how intensive a next-gen global illumination implementation can be.



--::{PC Gaming Master Race}::--

Manlytears said:
2020 launch.

GPU: not a big jump; they will tout 10 TF, and it will most likely run like an RX 580.
CPU: big jump here, custom 8-core Ryzen.
Memory: 4 GB DDR4 for the system and a dedicated 16 GB of HBM for games, a big jump in bandwidth.

Extras: 2 TB HDD, not true 4K in all games, will use checkerboarding in most big games, will cost $499, full backward compatibility with PS4.

Well, 10 teraflops is not great, but it's a jump. From 1.84 to 10 teraflops you have 5.4 times more calculations.

They have to think carefully about memory bandwidth, because HBM2 is expensive, and with GDDR6 a wider bus means more power needed.

A better CPU is a must. This article on Ubisoft's GDC presentation shows that, when fully utilized, the PS3's CPU is on the same level as, even a little faster than, the PS4's (without using the GPU for calculations).

http://www.redgamingtech.com/ubisoft-gdc-presentation-of-ps4-x1-gpu-cpu-performance/
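The "5.4 times" figure above is just the ratio of the two peak-FP32 numbers (the PS4's actual spec and the 10 TF guessed for the PS5 in the quoted post):

```python
# Sanity-checking the generational jump claimed above.
PS4_TF = 1.84        # PS4 peak FP32
PS5_GUESS_TF = 10.0  # the poster's guess, not an announced spec

jump = PS5_GUESS_TF / PS4_TF
print(f"generational jump: {jump:.1f}x")  # ~5.4x
```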



Manlytears said:
2020 launch.

1. GPU: not a big jump; they will tout 10 TF, and it will most likely run like an RX 580.
2. CPU: big jump here, custom 8-core Ryzen.
3. Memory: 4 GB DDR4 for the system and a dedicated 16 GB of HBM for games, a big jump in bandwidth.

4. Extras: 2 TB HDD, not true 4K in all games, will use checkerboarding in most big games, will cost $499, full backward compatibility with PS4.

  1. At this point, and since they will obviously wait for 7nm fabrication, having only 10 TF is a joke and almost impossible. Look at it this way: going from 28nm to ~14nm fabrication resulted in going from a 1.2 TF GPU to a 6 TF GPU. And you think going from 14nm to 7nm will not even double that 6 TF, but end up at 10 TF? How? Why?

  2. Considering Jaguar... anything else is probably a big jump anyway.

  3. The HBM thing again: consoles are just not built that way. Using HBM instead of GDDR6 will literally mean they have to spend about 30-50% more for the same amount of memory. They would rather put that money into a bigger GPU. It makes no sense, really. It's not like what they can get with GDDR6, while running well cheaper, isn't good enough.

  4. I doubt the 2 TB thing. Think more like external HDD support from day one.
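A rough sketch of the die-shrink arithmetic in point 1, under the (optimistic) assumption that achievable console TFLOPS at constant cost roughly track the node jump:

```python
# Two data points on the 28nm -> 16nm transition, then the 7nm extrapolation.
PS4_TF = 1.84   # 28 nm, 2013
XB1X_TF = 6.0   # 16 nm, 2017

shrink_gain = XB1X_TF / PS4_TF  # ~3.3x across one full node plus refinements
print(f"28nm -> 16nm gain: {shrink_gain:.1f}x")

# Even if 16nm -> 7nm merely doubles effective throughput:
print(f"7nm estimate: {XB1X_TF * 2:.0f} TF")  # 12 TF, comfortably above 10
```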


Around the Network
Intrinsic said:

 

  1. At this point, and since they will obviously wait for 7nm fabrication, having only 10 TF is a joke and almost impossible. Look at it this way: going from 28nm to ~14nm fabrication resulted in going from a 1.2 TF GPU to a 6 TF GPU. And you think going from 14nm to 7nm will not even double that 6 TF, but end up at 10 TF? How? Why?

 

You should compare the original PS4 to the PS4 Pro. You see a jump from 1.8 TF to 4.2 TF, an overall 2.3x increase. And this came with higher power consumption when comparing the two consoles. Expect something similar with the PS5 if they launch 2019-2020 on 7nm transistor technology. About 9 TF is my guess.

If they go for a higher TF number, they either have to increase the chip size or go with a higher clock speed, which just leads to higher cost for a bigger chip and more expensive cooling.



6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

Trumpstyle said:

You should compare the original PS4 to the PS4 Pro. You see a jump from 1.8 TF to 4.2 TF, an overall 2.3x increase. And this came with higher power consumption when comparing the two consoles. Expect something similar with the PS5 if they launch 2019-2020 on 7nm transistor technology. About 9 TF is my guess.

If they go for a higher TF number, they either have to increase the chip size or go with a higher clock speed, which just leads to higher cost for a bigger chip and more expensive cooling.

Why though? We could just as easily compare the OG PS4 to the XB1X. After all, we are looking at the same architecture, but with 40 CUs (4 deactivated) on the PS4 Pro and 44 CUs (4 deactivated) on the XB1X. The XB1X GPU is just running at a much higher clock.

I don't see how you can say all you have said and just ignore that the XB1X exists. Forget the cost-saving shenanigans in the PS4 Pro; the XB1X represents what's possible. Look at it this way: using the exact same chip size as the XB1X but on a 7nm node will give you 88 CUs, which, if clocked identically, would give you at least 12 TF. And this is assuming the exact same architecture.

But it won't be, so those 88 CUs will perform better and be more efficient, and clock speeds will go up by at least 10-20%... all of that is already putting you in 14-15 TF territory right there.

Considering how easy it would be for them to hit a 12-15 TF GPU and have that for the next 6-8 years, do you really think they will puss out and settle for 9-10 TF? When they know they have competition in the Xbox? Not a chance.
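The TF figures in this post follow from the standard GCN throughput formula (FP32 FLOPS = active CUs × 64 shaders/CU × 2 ops/clock × clock); the 84-CU configurations below are the poster's hypotheticals, not announced parts.

```python
# Peak FP32 throughput for a GCN GPU from active CU count and clock.
def gcn_tflops(active_cus: int, clock_ghz: float) -> float:
    """64 shaders per CU, 2 FP32 ops (FMA) per shader per clock."""
    return active_cus * 64 * 2 * clock_ghz / 1000.0

print(f"XB1X (40 CUs @ 1.172 GHz): {gcn_tflops(40, 1.172):.1f} TF")  # ~6.0
print(f"84 CUs @ 1.172 GHz:        {gcn_tflops(84, 1.172):.1f} TF")  # ~12.6
print(f"84 CUs @ 1.4 GHz:          {gcn_tflops(84, 1.4):.1f} TF")    # ~15.1
```

The 88-CU chip with 4 deactivated leaves 84 active, which is where the 12+ TF and 14-15 TF figures come from.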



Trumpstyle said:
Intrinsic said:

 

  1. At this point, and since they will obviously wait for 7nm fabrication, having only 10 TF is a joke and almost impossible. Look at it this way: going from 28nm to ~14nm fabrication resulted in going from a 1.2 TF GPU to a 6 TF GPU. And you think going from 14nm to 7nm will not even double that 6 TF, but end up at 10 TF? How? Why?

 

You should compare the original PS4 to the PS4 Pro. You see a jump from 1.8 TF to 4.2 TF, an overall 2.3x increase. And this came with higher power consumption when comparing the two consoles. Expect something similar with the PS5 if they launch 2019-2020 on 7nm transistor technology. About 9 TF is my guess.

If they go for a higher TF number, they either have to increase the chip size or go with a higher clock speed, which just leads to higher cost for a bigger chip and more expensive cooling.

Of course I would like more TFLOPS, but let's be practical here. With a balanced memory hierarchy and 9 teraflops, the PS5 at 4K would be just like the PS4 at full HD. I mean, most games (not all) run at 1080p 30 fps with 1.84 teraflops on the PS4, with a crappy CPU. 

1.84 x 4 = 7.36 teraflops. So an 8-teraflop machine would be able to run most games at 4K 30 fps if data feeds the GPU accordingly. Now, if you ask me, I would like at least 30% more performance than that, for anti-aliasing and better dynamic resolution in games that run at 60 fps. That's a little more than 10 teraflops. Would that feel like a new console generation? No, but it will look nicer than a 1080p standard PS4 for sure. 
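The arithmetic above is plain pixel-count scaling: 4K has exactly four times the pixels of 1080p, so a naive budget multiplies the PS4's throughput by four.

```python
# Naive resolution-scaling budget: flops scale with pixel count.
pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160

scale = pixels_4k / pixels_1080p
print(f"pixel scale: {scale:.0f}x")                     # 4x
print(f"naive 4K budget: {1.84 * scale:.2f} TF")        # 7.36 TF
print(f"+30% headroom:   {1.84 * scale * 1.3:.1f} TF")  # ~9.6 TF
```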



Intrinsic said:
Trumpstyle said:

You should compare the original PS4 to the PS4 Pro. You see a jump from 1.8 TF to 4.2 TF, an overall 2.3x increase. And this came with higher power consumption when comparing the two consoles. Expect something similar with the PS5 if they launch 2019-2020 on 7nm transistor technology. About 9 TF is my guess.

If they go for a higher TF number, they either have to increase the chip size or go with a higher clock speed, which just leads to higher cost for a bigger chip and more expensive cooling.

Why though? We could just as easily compare the OG PS4 to the XB1X. After all, we are looking at the same architecture, but with 40 CUs (4 deactivated) on the PS4 Pro and 44 CUs (4 deactivated) on the XB1X. The XB1X GPU is just running at a much higher clock.

I don't see how you can say all you have said and just ignore that the XB1X exists. Forget the cost-saving shenanigans in the PS4 Pro; the XB1X represents what's possible. Look at it this way: using the exact same chip size as the XB1X but on a 7nm node will give you 88 CUs, which, if clocked identically, would give you at least 12 TF. And this is assuming the exact same architecture.

But it won't be, so those 88 CUs will perform better and be more efficient, and clock speeds will go up by at least 10-20%... all of that is already putting you in 14-15 TF territory right there.

Considering how easy it would be for them to hit a 12-15 TF GPU and have that for the next 6-8 years, do you really think they will puss out and settle for 9-10 TF? When they know they have competition in the Xbox? Not a chance.

As long as they sell 2 to 3 times more consoles and have much better exclusives, I guess they will concentrate on a good launch price, regardless of how many teraflops they lose in making that choice. After all, they can sell a more powerful PS5 Pro to nerds like me who care about that shit. 



Intrinsic said:
Trumpstyle said:

You should compare the original PS4 to the PS4 Pro. You see a jump from 1.8 TF to 4.2 TF, an overall 2.3x increase. And this came with higher power consumption when comparing the two consoles. Expect something similar with the PS5 if they launch 2019-2020 on 7nm transistor technology. About 9 TF is my guess.

If they go for a higher TF number, they either have to increase the chip size or go with a higher clock speed, which just leads to higher cost for a bigger chip and more expensive cooling.

Why though? We could just as easily compare the OG PS4 to the XB1X. After all, we are looking at the same architecture, but with 40 CUs (4 deactivated) on the PS4 Pro and 44 CUs (4 deactivated) on the XB1X. The XB1X GPU is just running at a much higher clock.

I don't see how you can say all you have said and just ignore that the XB1X exists. Forget the cost-saving shenanigans in the PS4 Pro; the XB1X represents what's possible. Look at it this way: using the exact same chip size as the XB1X but on a 7nm node will give you 88 CUs, which, if clocked identically, would give you at least 12 TF. And this is assuming the exact same architecture.

But it won't be, so those 88 CUs will perform better and be more efficient, and clock speeds will go up by at least 10-20%... all of that is already putting you in 14-15 TF territory right there.

Considering how easy it would be for them to hit a 12-15 TF GPU and have that for the next 6-8 years, do you really think they will puss out and settle for 9-10 TF? When they know they have competition in the Xbox? Not a chance.

I'm not ignoring the Xbox One X; it proves my point. Microsoft had to go with a bigger chip and more expensive cooling to get those extra flops. So if we assume 7nm will be able to double the flops at the same power consumption, we will get 12 TF. But this requires a bigger chip and more expensive cooling.

How much improvement we will get from 7nm compared to 16nm, we don't know exactly. TSMC (the chip manufacturer) has said 7nm brings about 60% power savings, but this is probably a bit optimistic.

And I don't expect Sony to go the route Microsoft went with the Xbox One X. They want to keep power consumption at the same level and keep costs down.

As for AMD's next architecture, Navi, I'm not expecting much at all. Polaris was bad, Vega was a disaster, and we're already hearing Navi will be worse than Vega, so don't expect much from Navi.
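A sketch of the power-budget side of this argument, assuming TSMC's quoted ~60% power reduction at equal performance holds; the 150 W GPU figure is a rough illustration, not an official spec.

```python
# If a shrink cuts power ~60% at the same performance, the freed budget
# can be spent on more CUs instead, at the cost of a bigger die.
XB1X_GPU_POWER_W = 150.0  # rough illustrative figure, not official
SAVINGS = 0.60            # TSMC's quoted 7nm-vs-16nm power reduction

same_perf_power = XB1X_GPU_POWER_W * (1 - SAVINGS)  # ~60 W for the same 6 TF
headroom = XB1X_GPU_POWER_W / same_perf_power       # ~2.5x units fit the old budget
print(f"same-perf power: {same_perf_power:.0f} W, headroom: {headroom:.1f}x")
```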


