
(Update) Rumor: PlayStation 5 will be using Navi 9 (more powerful than Navi 10) - Update: Jason Schreier says Sony is aiming for more than 10.7 teraflops

 

How accurate is this rumor compared to reality?

Naah: 26 (35.62%)
It's 90% close: 14 (19.18%)
It's 80% close: 8 (10.96%)
It's 70% close: 5 (6.85%)
It's 50% close: 13 (17.81%)
It's 30% close: 7 (9.59%)

Total: 73

sounds interesting, can't wait for the first reveal




Sony has lied about performance every gen, so why not now? Remember how the PS2 could launch a missile?



Straffaren666 said:

Though I agree with you that the leak is most likely a hoax. For instance, I find a 22Gbps GDDR6 memory speed, a 320-bit memory bus and 20-24GB of memory very unlikely. AFAIK, the GDDR6 specifications only support up to 16Gbps. Sure, the specifications have some leeway and I believe Micron is already selling 18Gbps versions, but Sony has been quite conservative when it comes to memory speeds before, and I find it very unlikely they would take the risk of overclocking the memory so high on a mass-market product like the PS5.

There are even 20Gbps versions... but yeah, that's not what we will see in these consoles. However, there are some things to consider.

Right now, 1GB GDDR5 chips from 6Gbps to 8Gbps all have bulk pricing (2,000 units) of around $6/chip. 1GB GDDR6 chips in the 12Gbps to 14Gbps range are between $9 and $11/chip.

Now, even if Sony and MS are doing "super bulk" orders (literally in the millions), that will only bring prices down further by, say, about 20-30%. But then there is also the fact that Sony/MS won't be using 1GB modules but 2GB modules, which cost more than a single 1GB module but less than two 1GB modules.

Basically, if we assume that million-scale orders for 14Gbps 2GB GDDR6 chips mean they spend around $12/chip, then 16GB/20GB/24GB will cost them $96/$120/$144 respectively... just for the RAM alone. To put this in context, the PS4's RAM budget back in 2013 was around $88. At around $12 per 2GB chip, this is why it would make more sense to add 4GB of DDR4 to 16GB of GDDR6 for a total of 20GB of RAM.
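A quick back-of-the-envelope sketch of that chip-count math in Python (the $12 per 2GB chip is the assumed bulk price from the paragraph above, not a confirmed figure):

# Rough RAM cost estimate; the per-chip price is an assumption from this thread.
PRICE_PER_2GB_CHIP = 12  # USD, assumed million-scale bulk price for 14Gbps 2GB GDDR6

for total_gb in (16, 20, 24):
    chips = total_gb // 2                  # capacity delivered in 2GB modules
    cost = chips * PRICE_PER_2GB_CHIP
    print(f"{total_gb}GB -> {chips} chips -> ${cost}")
# 16GB -> 8 chips -> $96, 20GB -> 10 chips -> $120, 24GB -> 12 chips -> $144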



tripenfall said:
Sony has lied about performance every gen, so why not now? Remember how the PS2 could launch a missile?

Technically even the PS2 could launch a missile....... 



Intrinsic said:

There are even 20Gbps versions... but yeah, that's not what we will see in these consoles. However, there are some things to consider.

Right now, 1GB GDDR5 chips from 6Gbps to 8Gbps all have bulk pricing (2,000 units) of around $6/chip. 1GB GDDR6 chips in the 12Gbps to 14Gbps range are between $9 and $11/chip.

Now, even if Sony and MS are doing "super bulk" orders (literally in the millions), that will only bring prices down further by, say, about 20-30%. But then there is also the fact that Sony/MS won't be using 1GB modules but 2GB modules, which cost more than a single 1GB module but less than two 1GB modules.

Basically, if we assume that million-scale orders for 14Gbps 2GB GDDR6 chips mean they spend around $12/chip, then 16GB/20GB/24GB will cost them $96/$120/$144 respectively... just for the RAM alone. To put this in context, the PS4's RAM budget back in 2013 was around $88. At around $12 per 2GB chip, this is why it would make more sense to add 4GB of DDR4 to 16GB of GDDR6 for a total of 20GB of RAM.

I believe 16GB GDDR6 and 4GB DDR4 is feasible. Separate memory pools make sense from a performance point of view as well, since the latency is lower for the DDR4 memory and the bandwidth used by the CPU in a unified memory system nonlinearly reduces the bandwidth available to the GPU. As long as the performance characteristics of the GDDR6 bus are similar to those of the GDDR5 bus of the PS4, there would basically only be pros to separate memory pools in a setup like this.

Edit: By similar performance characteristics, I of course don't mean performance similar to the PS4's, but the same kind of characteristics, i.e. about 10-15% of the GDDR6 bandwidth available to the CPU if the developers choose to access the GDDR6 memory from the CPU.
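As an illustration only (the bus width, speed and contention penalty below are assumptions, not figures from the leak), the split-pool argument boils down to something like this:

# Hypothetical numbers to illustrate the unified- vs split-pool trade-off.
GDDR6_TOTAL_BW = 448.0    # GB/s, e.g. a 256-bit bus at 14Gbps (assumption)
CPU_SHARE = 0.125         # ~10-15% of traffic coming from the CPU (estimate from the post above)
CONTENTION_FACTOR = 1.5   # unified pools cost the GPU more than the CPU actually consumes (assumed)

cpu_bw = GDDR6_TOTAL_BW * CPU_SHARE
gpu_bw_unified = GDDR6_TOTAL_BW - cpu_bw * CONTENTION_FACTOR
gpu_bw_split = GDDR6_TOTAL_BW     # CPU traffic mostly lands on the separate DDR4 pool
print(f"Unified pool: ~{gpu_bw_unified:.0f} GB/s left for the GPU")
print(f"Split pools:  ~{gpu_bw_split:.0f} GB/s left for the GPU")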

Also, I think Sony is prepared to take an initial loss on the PS5, something they were not prepared to do on the PS4, thanks to their much improved financial situation and considering how important the PS business has become to them. I expect them to spend more money on memory for the PS5 than for the PS4.

Last edited by Straffaren666 - on 14 March 2019

tripenfall said:
Sony has lied about performance every gen, so why not now? Remember how the PS2 could launch a missile?

https://www.theverge.com/2015/1/15/7551365/playstation-cpu-powers-new-horizons-pluto-probe

Even PS1 chips were used for this kind of thing.

And considering we had missiles back in the '50s, I have no issue believing a PS2 chip could launch one.

Also, this is a leak, not Sony claiming anything. And can you disprove the PS4 or PS4 Pro information Sony gave?



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

Intrinsic said:
Bofferbrauer2 said:

The problem is, if you calculate the space of the 40 CUs in the One X and compare it with the 64 CUs in Vega 64, then there's almost no difference. The Jaguar CPU part in the One X is tiny: even at 28nm it only took 25mm2 (3.1mm2 per core times 8) plus the space for 4MiB of cache, and at 14nm it would be even smaller, so the difference would be very small. If we take 300mm2 for the GPU part alone, then 64 CUs would be 480mm2, almost exactly the size of Vega 64, which is 486mm2.

So no, the different memory controller will not magically shrink the chip by a large amount

If we are realistic, then this leak is a hoax, pure and simple. 64 CUs is the technical limit of GCN, and there's no known way around this, so 64 CUs at 1500MHz would be the only option in your opinion. Even at 7nm, a Navi like that would consume around 250W without even counting the CPU, RAM or any other part of the console.

In other words, the console would run way too hot and consume too much power, never mind the fact that the chip would be way too large.

Your math is off......

OK... at 28nm the PS4 had a die size of 348mm2. This shrunk to 321mm2 in the 16nm PS4 Pro. Yet they were able to double the number of CUs in the Pro compared to the base PS4.

But let's keep it simple... because reading all you are saying, it's like you're saying there will be no difference between a 16nm/14nm chip and a 7nm one. So how about you just spell it out so I am not confused (though I feel I already am).

What do you believe they will be able to fit into a 7nm APU that is anywhere between 350mm2 and 380mm2?
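As a rough sketch of that area math (the 300mm2-for-40-CUs figure comes from the quote above, while the ~2x 16nm-to-7nm density gain and the ~80mm2 set aside for CPU cores, I/O and memory PHYs are pure assumptions):

# All inputs are rough assumptions from this thread, not measured die figures.
GPU_AREA_16NM = 300.0   # mm2 for the 40-CU GPU portion of the One X (figure quoted above)
CUS_16NM = 40
DENSITY_GAIN = 2.0      # assumed 16nm -> 7nm logic scaling; real scaling varies per block

area_per_cu_7nm = (GPU_AREA_16NM / CUS_16NM) / DENSITY_GAIN   # ~3.75 mm2 per CU
for die_mm2 in (350, 380):
    gpu_budget = die_mm2 - 80   # leave ~80 mm2 for CPU cores, I/O and memory PHYs (guess)
    print(f"{die_mm2} mm2 APU -> room for roughly {int(gpu_budget / area_per_cu_7nm)} CUs")
# 350 mm2 -> ~72 CUs, 380 mm2 -> ~80 CUs, under these assumptions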

As for the technical limits of GCN... that's just wrong. The problem isn't the CU count, the problem is the shader engine count. GCN5 has only 4 of them, and the maximum connected to each one is 16 CUs (Vega 64). The last time the number of shader engines was increased was, I think, GCN3 or 4 (can't recall), but it has been increased before. And AMD even addressed this recently in an interview where Raja mentioned that with Vega they considered increasing the number of SEs but didn't have enough time. So it's not like they don't know what to do about it or that it's some sort of impossible hurdle to overcome.

And this power draw thing... you do know that's only a problem when you are trying to clock (already inefficient) chips as high as possible, right? The better solution is to just have more CUs and not have to clock them so high, though that could add complexity and affect yields. Nothing stops them from going with an 80 CU APU with the GPU clocked at 1.1GHz - 1.2GHz, while the desktop iterations of the same chips could be clocked at 1.5GHz - 1.8GHz.
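As a quick sanity check on those numbers (GCN-style FP32 throughput is CUs x 64 shaders x 2 ops per clock; the clocks below are the hypothetical ones from the paragraph above):

def gcn_tflops(cus, clock_ghz):
    # FP32 throughput for a GCN-style GPU: CUs * 64 shaders * 2 ops (FMA) per clock
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"80 CUs @ 1.2 GHz: {gcn_tflops(80, 1.2):.1f} TFLOPS")   # ~12.3 TFLOPS
print(f"64 CUs @ 1.5 GHz: {gcn_tflops(64, 1.5):.1f} TFLOPS")   # ~12.3 TFLOPS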

I have said this multiple times already... until we actually see working Navi-based hardware, no one (myself included) can say with any certainty what is or isn't possible. It's all assumptions made from an older microarch.

Oh, just to add... I don't believe this rumor (or any so far, for that matter).

@bolded: I said that before already. However, the way they are organised only allows for 64 CUs. And that's compute engines, not shader engines, which is something different entirely (basically a rebrand of the GCA, the Graphics and Core Array, introduced with GCN2). GCN5 only has 4 of them because it can only feed 4 of them reliably with instructions. Technically they could go past 64 with more compute engines, but it wouldn't actually increase performance, as the CUs would be idling half the time because they don't get any instructions. Hence it's agreed that 64 CUs is the limit.

I know Vega is quite efficient at around 1150MHz (I love undervolting my hardware). But to reach those 12-14 TFLOPS with the practical limitation of 64 CUs discussed above, the only way is to clock the chip way past its sweet spot.
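Using the same GCN throughput formula as above, here's roughly what clock a 64 CU part would need (a sketch, not a claim about actual silicon):

# Solve CUs * 64 shaders * 2 ops * clock = target FLOPS for the clock, at a fixed 64 CUs
def required_clock_mhz(target_tflops, cus=64):
    return target_tflops * 1e6 / (cus * 64 * 2)

for tf in (12, 14):
    print(f"{tf} TFLOPS on 64 CUs needs ~{required_clock_mhz(tf):.0f} MHz")
# 12 TFLOPS -> ~1465 MHz, 14 TFLOPS -> ~1709 MHz, well past the ~1150 MHz sweet spot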

The proposed 80 CUs @ 1.2GHz would probably not beat a Vega 56 because, like I said, the GCN architecture couldn't feed that many CUs with instructions. The scheduler, which issues the instructions and dispenses them to the SEs, is the limitation. And that can only be removed by a complete redesign. Hence AMD never even showed a successor to Navi in the roadmaps; they probably knew back then that Vega was as far as they could go.

@italic: What I heard was that they could have done so, but decided against it because it wouldn't have removed the problem.

I agree that we need to see Navi to be sure about everything. Since it's been so long since Vega, it's possible they found a way around the problem, rendering our discussion here moot.

@underlined: English ain't my first language (or second or even third, for that matter), so sorry if I was confusing you. But I calculated the size at 7nm in a previous answer; there I was just proving his assumption wrong that Vega is only so big due to its HBM memory controller, and showed him that the One X, which he drew in as a comparison, would be just as large with 64 CUs (if made on the same process, of course) despite having a GDDR5 memory controller.



tripenfall said:
Sony has lied about performance every gen, so why not now? Remember how the PS2 could launch a missile?

Can do! I've launched a multitude of missile types with my PS2 Slim. Upgraded (two PS2 Slims stacked), it will be capable of launching even hypersonic missiles. So far none of them were armed, test launches only, so to speak. But soon the time will come...



Hunting Season is done...

Bofferbrauer2 said:

@bolded: I said that before already. However, the way they are organised only allows for 64 CUs. And that's compute engines, not shader engines, which is something different entirely (basically a rebrand of the GCA, the Graphics and Core Array, introduced with GCN2). GCN5 only has 4 of them because it can only feed 4 of them reliably with instructions. Technically they could go past 64 with more compute engines, but it wouldn't actually increase performance, as the CUs would be idling half the time because they don't get any instructions. Hence it's agreed that 64 CUs is the limit.

That is not how GCN works. Each CU has its own instruction scheduler. The command processor can process commands at a far higher rate than the front-end can consume them, and the same applies for the front-end relative to the back-end (SEs/CUs). It's trivial to saturate the CUs with wavefronts, and that happens all the time in the current design. That said, the processing power of some parts of the front-end should probably be increased if the CU count is increased, to keep the architecture balanced.

I'm not aware of any technical limitations preventing going above 64 CUs, but they probably exist. I suspect the biggest hurdle is increasing the bandwidth of the L2 cache, which should scale with the number of CUs to keep the system balanced.



Those are some really good specs. I'm pretty hyped.