
(Update) Rumor: PlayStation 5 will be using Navi 9 (more powerful than Navi 10); new update: Jason Schreier says Sony is aiming for more than 10.7 teraflops

 

How accurate is this rumor compared to reality?

Naah: 26 votes (35.62%)
It's 90% close: 14 votes (19.18%)
It's 80% close: 8 votes (10.96%)
It's 70% close: 5 votes (6.85%)
It's 50% close: 13 votes (17.81%)
It's 30% close: 7 votes (9.59%)

Total: 73 votes

I cannot see them doing those specs at $500, unless they are taking a massive loss per unit, which would not make much sense. But even so, I do believe $500 is a steep asking price, so they really need to justify it if they want to have the volume sales they got with PS4. I can't see this rumor being true.




Gamers will work five jobs to afford it...

Well, it looks great on the tech side, but price and games are also important, and this rumour makes at least the price a risk. The games may turn out fine; Nintendo showed that a single out-of-the-park great game is enough to sell a console, and Sony, which is probably securing a lot of 3rd-party support, likely needs even less than that.




HollyGamer said:
Bofferbrauer2 said:

GCN is limited to 64 CUs, so the only way to get Navi faster than that is to clock it higher. To reach 14 TFlops, almost 2 GHz would be needed, way too much for GCN.

The Radeon VII is able to beat the Vega 64, but for this you would need a chip that beats the Radeon VII while consuming less than half the power. That's just not feasible short of a miracle from Navi.

And don't worry, I took next year's chip prices into account when I said it's going to cost twice those $500 in production. The CPU/GPU (possibly combined in an APU, but I doubt it) alone will not come cheap, as their footprint is much larger than on this gen. The larger the chip, the higher the price, exponentially so in fact. An APU with Zen 2 and 64-CU Navi would be at least around 500mm² in 7nm, way too expensive to sell for under $400 a chip (with 2 separate chips a somewhat lower price would be possible). The APU in the PS4 and the Pro is only around 250mm², so just one quarter of the size (don't forget it's squared) that the chip in the leak would take; in fact the GPU alone would be larger than that.

Add to this 24GB of not-yet-released (and thus more expensive, at least early on) GDDR6 memory (GDDR6 is based on DDR4, btw); even with dropping memory prices that will be around another $100 for sure early on. Add all the other components, assembly and shipping, and you're getting close to $1000 even without paying consumer prices.
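To illustrate the "exponentially so" point about chip size and price, here's a rough toy model. The wafer cost and defect density are made up purely for illustration (they are not real 7nm figures), and it uses a simple Poisson yield model; the point is only the shape of the curve, not the actual dollar amounts.

```python
# Toy model: why cost grows faster than die area.
# Wafer cost and defect density below are illustrative assumptions, not real figures.
import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_MM2 = 0.001      # assumed defect density, purely illustrative
WAFER_COST = 10000           # assumed wafer cost in $, purely illustrative

def cost_per_good_die(die_area_mm2):
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    dies_per_wafer = wafer_area / die_area_mm2               # ignores edge losses
    yield_rate = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)   # Poisson yield model
    return WAFER_COST / (dies_per_wafer * yield_rate)

for area in (250, 500):
    print(f"{area} mm2 die: ~${cost_per_good_die(area):.0f} per good die")
# Doubling the area more than doubles the cost per good die,
# because dies-per-wafer and yield both get worse at the same time.
```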

GCN CUs are not limited to 64, but the stream processors are (the current design has 64 ROPs), and that might change (it could double with Navi), or even if it doesn't, they can just add more CUs (scalability).

How many GCN CUs you get depends on how much chip area they want to spend while keeping cost and TDP down. Remember, the 7nm node roughly doubles the transistor density at the same chip size (PS4 to PS4 Pro was able to double the number of CUs).

So the PS5 might be able to have double the number of CUs on the same die size as the PS4 Pro chip (Polaris), or double the maximum clock speed of the Polaris chip.
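For reference, here's a quick back-of-the-envelope check of what either doubling would give, using the PS4 Pro's 36 CUs at 911 MHz and assuming 64 shaders per CU and 2 FLOPs per shader per clock (my own rough numbers, not from the post):

```python
# Rough check: doubling the PS4 Pro's CUs vs. doubling its clock.
PRO_CUS = 36
PRO_CLOCK_GHZ = 0.911
SHADERS_PER_CU = 64
FLOPS_PER_SHADER_PER_CLOCK = 2   # one fused multiply-add = 2 FLOPs

def tflops(cus, clock_ghz):
    return cus * SHADERS_PER_CU * FLOPS_PER_SHADER_PER_CLOCK * clock_ghz / 1000

print(f"PS4 Pro as-is:    {tflops(PRO_CUS, PRO_CLOCK_GHZ):.1f} TFLOPS")        # ~4.2
print(f"Double the CUs:   {tflops(PRO_CUS * 2, PRO_CLOCK_GHZ):.1f} TFLOPS")    # ~8.4
print(f"Double the clock: {tflops(PRO_CUS, PRO_CLOCK_GHZ * 2):.1f} TFLOPS")    # ~8.4
# Either doubling alone lands around 8.4 TFLOPS; hitting 10.7-14 TFLOPS
# would need more CUs *and* higher clocks.
```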

On the price of the APU, I want you to watch Jim from the AdoredTV channel on YouTube: https://www.youtube.com/watch?v=qgvVXGWJSiE&t=810s

The GDDR6 base price will be different depending on how much Sony orders (we are talking about a massive quantity of GDDR6 for 5 to 6 years of PS5 production, not just a thousand units). The same thing happened with GDDR5: it was very expensive at retail, but the price Sony paid was much cheaper because it's a different kind of pricing and deal.

Not that I'm aware of.

A Vega 64 has 64 ROPs, that's true, along with 4096 unified shaders and 256 TMUs, organized into 64 CUs. If I remember correctly, the actual limit comes from the shader logic capping those at 4096, but due to how it's organized that results in 64 CUs, while the ROPs could be increased. Hence the rumors about an increased number of ROPs in Navi but no increase in CUs.
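As a quick sanity check of the "almost 2 GHz" figure from earlier: assuming the 4096-shader cap holds and 2 FLOPs per shader per clock (my own arithmetic, not from the post), the required clock for the rumored 14 TFlops works out like this:

```python
# What clock would a chip capped at 4096 shaders need for 14 TFLOPS?
SHADERS = 64 * 64                  # 64 CUs x 64 shaders, same as a Vega 64
FLOPS_PER_SHADER_PER_CLOCK = 2     # one FMA = 2 FLOPs
TARGET_TFLOPS = 14

required_clock_ghz = TARGET_TFLOPS * 1e12 / (SHADERS * FLOPS_PER_SHADER_PER_CLOCK) / 1e9
print(f"~{required_clock_ghz:.2f} GHz")   # roughly 1.71 GHz, hence "almost 2 GHz"
```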

Going from 14nm to 7nm doesn't halve the size, even though the naming scheme implies it. Those names have been decoupled from the actual feature sizes for over 15 years now (ever since they got smaller than the wavelength of visible light, which is around 400nm at the short end). The Radeon VII has almost the same number of transistors as a Vega 64 but is actually only 32% smaller. I calculated the 500mm² by taking the Radeon VII plus 65% of the size of a Zen CPU, which would be 456mm². But to reach those 14 TFlops, Navi would need quite a few extra functions, so I added 10% to give space for those, which results in 500mm².

The PS4 Pro chip is about the same size as the original PS4 chip, which is about 350mm². A Radeon VII, which is produced in 7nm, is already 331mm². How do you want to reach that power in a ~350mm² package if 95% of it is already occupied by the GPU - and that GPU isn't even strong enough for the leak? While Navi could be more powerful, it will need more transistors, and thus die space, for that. Getting that much power out of such a small chip is pretty much impossible with the 7nm process.
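Spelling that estimate out as plain arithmetic (note: the ~192mm² Zen CPU figure here is back-derived from the 456mm² total above, it is not a quoted spec):

```python
# Reconstructing the die-size estimate from the post above.
RADEON_VII_MM2 = 331
ZEN_CPU_MM2 = 192            # assumed, chosen so that 331 + 0.65 * 192 ~= 456
EXTRA_FOR_NEW_FEATURES = 1.10

apu_estimate = (RADEON_VII_MM2 + 0.65 * ZEN_CPU_MM2) * EXTRA_FOR_NEW_FEATURES
print(f"Estimated APU size: ~{apu_estimate:.0f} mm2")            # ~500 mm2

# And the area-budget problem: a PS4-class ~350 mm2 package vs. the GPU alone.
gpu_share = RADEON_VII_MM2 / 350
print(f"GPU alone would fill ~{gpu_share:.0%} of a 350 mm2 die")  # ~95%
```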

I do often watch AdoredTV (I love AMD, and all my desktop PCs had AMD CPUs except one, the original Pentium MMX, as AMD had nothing to counter it back then; I had an AM286, AM486 DX-40, K6-II, Athlon 64 3000+ (sadly only on a Socket 754 board, so I couldn't upgrade without a new board) and finally an Athlon X4 635), but I don't believe his prices. They would not only undercut Intel's CPUs by an unreasonably large margin, but also undercut AMD's own older chips by a large margin, which doesn't make sense. They wouldn't be able to sell any of their older chips anymore unless they cut the prices by over 60% in some cases, which is insane. AMD would ruin themselves with those prices, so don't expect AMD to practically give their chips away.



Pemalite said:

Ray Tracing tends to inherently be compute limited, which ironically is one of the big strengths of Graphics Core Next.
I don't think AMD will spend transistors on fixed-function blocks for ray tracing tasks, but simply make their shader pipelines more flexible to take on the task instead; then, as they scale up shader counts, performance in old/current games gets a boost as well... It should also mean that porting ray tracing to older GCN GPUs will likely be a little more feasible.

In saying that, various games have been dabbling in Ray Tracing for years... But RTX Ray Tracing just stepped it up a big level.

It definitely can be, since you end up shading indirect contributions to light that don't appear within the camera view, but the other big bottlenecks for ray tracing are BVH traversal and ray-triangle intersection tests, so it's ideal for those to be fixed function for higher performance and efficiency ...

There could be some specialized instructions for making these things optimal, but doing so could lead to higher power consumption. During this generation, even though a case could be made for GPU-compute rasterization in some scenarios, it was still arguably more performant to use the fixed-function rasterizer on the consoles ...

Let's at least figure out a way to get rid of the rasterizer first before we even tackle ray tracing, since that's a much harder problem. Without those specialized instructions, I don't think ray tracing could be all that performant on older parts ...

We won't be able to get rid of the texture sampler in the next 10 years, maybe in 20, but even then having them fixed function will give programmable shaders a good run for their money too ...
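To give an idea of why the ray-triangle intersection tests mentioned above are a prime candidate for fixed-function hardware, here's the standard Möller-Trumbore test as a pure-Python sketch. This is only an illustration (not how any GPU actually implements it); the point is that this pile of vector math has to run per ray against every candidate triangle the BVH traversal hands back.

```python
# Moller-Trumbore ray-triangle intersection, illustrative pure-Python sketch.
def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-7):
    """Returns the hit distance t along the ray, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv_det            # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv_det    # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det
    return t if t > eps else None

# One ray straight down the z-axis against one triangle lying in the z=1 plane:
print(ray_hits_triangle((0, 0, 0), (0, 0, 1), (-1, -1, 1), (1, -1, 1), (0, 1, 1)))  # ~1.0
```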



Sounds like a reach honestly.





I would be extremely pleased with those specs, by the looks of it. Hot damn, 14 TFLOPS. That even beats out my PC which has maybe 8 TFLOPS.



Yeah, that Q1 2020 launch pretty much marks it as fake. So we're going to have an ultra powerful PS5, AND it launches early? This would be like PS4 Pro launching Q1 2013 or even 2012. Way too good to be true.



haxxiy said:

Edit - AMD released a 5 TFLOP GPU in  January 2011 for the same 375W as the Vega 64. Which, again, means little since Terascale was so bad it's likely much worse than the 4 TF GPU in the PS4 Pro.

Actually March 2011 (March 8 to be exact), and it's a dual GPU, the Radeon HD 6990, so those 5 TFlops are highly theoretical.

Terascale was actually not nearly as bad as you seem to remember it; the 4xxx and 5xxx series in particular competed pretty well with Nvidia. In fact, at the time AMD was making so much fun of Nvidia for being inefficient and running hot ("Thermi", anyone? There was even a Hitler-rant parody ^^) that they made videos about it (can't find them now). GCN was of course much better, but Terascale wasn't that bad either.

You could also have brought up the 7990, also a dual GPU, with 8 TFlops in May 2013. Or the fact that a 12 TFlops GPU already existed in 2014 (R9 295X2, 500W TDP), or a 16 TFlops GPU in April 2016 (Radeon Pro Duo, 350W TDP). All of them are dual GPUs, btw, which makes the TFlops highly theoretical.

Last edited by Bofferbrauer2 - on 12 March 2019

Are they gonna show "target" demos again?

Using CGI?



Yeah, I'm just thinking that with how big games are getting, the old HDDs aren't up to the task anymore. It would be strange if Sony doesn't support SSDs better this time, for example with M.2 SSD support.