
Rumor: PS5 & Anaconda (Scarlett) GPU on par with RTX 2080, Xbox exclusives focus on cross-gen, developers complain about Lockhart. UPDATE: Windows Central says Xbox Anaconda targets 12 teraflops

 

Poll: What do you think?

I am excited for next gen: 22 (61.11%)
I cannot wait to play next gen consoles: 4 (11.11%)
I need to find another th...: 2 (5.56%)
I worried about next gen: 8 (22.22%)

Total: 36
Trumpstyle said:

For the RAM configuration, I think the PS5 has 16GB of VRAM on a 256-bit bus (572GB/s memory speed) + 4-6GB of DDR4, and Anaconda has either 14GB of VRAM on a 320-bit bus (560GB/s) with 10GB available for games, or 16GB of VRAM on a 384-bit bus (672GB/s memory bandwidth) with 12GB for games.

On a 384-bit bus it would be 18GB of VRAM, not 16GB.

Likewise, you wouldn't get 14GB on a 320-bit bus either; it would be 15GB.
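For reference, the bandwidth and capacity figures being argued about fall straight out of bus width, per-pin data rate, and chip density. A minimal sketch, assuming 14Gbps GDDR6 (a common speed grade at the time, not something either poster states) and the 1GB/2GB chip densities that were actually shipping; note the 572GB/s quoted for a 256-bit PS5 would need roughly 18Gbps parts.

```python
# Rough GDDR6 arithmetic (sketch; 14Gbps per pin and 1GB/2GB chip densities are assumptions).

def bandwidth_gbs(bus_bits: int, gbps_per_pin: float = 14.0) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gbps) / 8."""
    return bus_bits * gbps_per_pin / 8

def capacity_gb(bus_bits: int, chip_density_gb: int) -> int:
    """Each GDDR6 chip has a 32-bit interface, so chip count = bus width / 32."""
    return (bus_bits // 32) * chip_density_gb

for bus in (256, 320, 384):
    print(f"{bus}-bit: {bandwidth_gbs(bus):.0f} GB/s at 14Gbps, "
          f"{capacity_gb(bus, 1)}GB with 1GB chips, {capacity_gb(bus, 2)}GB with 2GB chips")

# 256-bit: 448 GB/s, 8GB or 16GB
# 320-bit: 560 GB/s, 10GB or 20GB  (the 560GB/s figure quoted above)
# 384-bit: 672 GB/s, 12GB or 24GB  (the 672GB/s figure quoted above)
```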



drkohler said:
Trumpstyle said:

For the RAM configuration, I think the PS5 has 16GB of VRAM on a 256-bit bus (572GB/s memory speed) + 4-6GB of DDR4, and Anaconda has either 14GB of VRAM on a 320-bit bus (560GB/s) with 10GB available for games, or 16GB of VRAM on a 384-bit bus (672GB/s memory bandwidth) with 12GB for games.

These numbers make no sense at all, no sense at all.

You're the hardware expert, figure it out...

Barkley said:
Trumpstyle said:

For the RAM configuration, I think the PS5 has 16GB of VRAM on a 256-bit bus (572GB/s memory speed) + 4-6GB of DDR4, and Anaconda has either 14GB of VRAM on a 320-bit bus (560GB/s) with 10GB available for games, or 16GB of VRAM on a 384-bit bus (672GB/s memory bandwidth) with 12GB for games.

On a 384-bit bus it would be 18GB of VRAM, not 16GB.

Likewise, you wouldn't get 14GB on a 320-bit bus either; it would be 15GB.

My numbers are accurate, and I'm sure you can't do 18GB of VRAM on a 384-bit bus, as 1.5GB GDDR6 sticks are not available.




Trumpstyle said:
drkohler said:

These numbers make no sense at all, no sense at all.

You're the hardware expert, figure it out...

Barkley said:

On a 384-bit bus it would be 18GB of VRAM, not 16GB.

Likewise, you wouldn't get 14GB on a 320-bit bus either; it would be 15GB.

My numbers are accurate, and I'm sure you can't do 18GB of VRAM on a 384-bit bus, as 1.5GB GDDR6 sticks are not available.

Digital foundry - "However, more pertinently, memory prices are increasing dramatically and a move to 12GB, 18GB or 24GB (all are possible on a GDDR6 384-bit memory interface)" - link

Your numbers aren't accurate.



drkohler said:
Trumpstyle said:

For the RAM configuration, I think the PS5 has 16GB of VRAM on a 256-bit bus (572GB/s memory speed) + 4-6GB of DDR4, and Anaconda has either 14GB of VRAM on a 320-bit bus (560GB/s) with 10GB available for games, or 16GB of VRAM on a 384-bit bus (672GB/s memory bandwidth) with 12GB for games.

These numbers make no sense at all, no sense at all.

You mix 1GB and 2GB Vram sticks and run them in FLEX MODE




Barkley said:
Trumpstyle said:

You're the hardware expert, figure it out...

My numbers are accurate, and I'm sure you can't do 18GB of VRAM on a 384-bit bus, as 1.5GB GDDR6 sticks are not available.

Digital foundry - "However, more pertinently, memory prices are increasing dramatically and a move to 12GB, 18GB or 24GB (all are possible on a GDDR6 384-bit memory interface)" - link

Your numbers aren't accurate.

Yep, I know Eurogamer said that, but 1.5GB VRAM sticks are not available. You'd need 12 x 1.5GB = 18GB, and there are no 1.5GB sticks.




Trumpstyle said:
Barkley said:

Digital foundry - "However, more pertinently, memory prices are increasing dramatically and a move to 12GB, 18GB or 24GB (all are possible on a GDDR6 384-bit memory interface)" - link

Your numbers aren't accurate.

Yep, I know Eurogamer said that, but 1.5GB VRAM sticks are not available. You'd need 12 x 1.5GB = 18GB, and there are no 1.5GB sticks.

Look, there's an easy way to see that your numbers aren't a valid configuration.

Valid configuration: XBO X, 12gb 384-bit bus

12,288 divided by 384 = 32 (CLEAN DIVISION)

Your Configuration: 16gb 384-bit bus

16,384 divided by 384 = 42.666 recurring. Not a clean division, not valid.

Your numbers aren't right, they don't make sense.
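Barkley's "clean division" test can be generalized: divide the capacity across the number of 32-bit chips the bus implies and check whether the per-chip density is one that is actually manufactured (the thread treats 1GB and 2GB as available and 1.5GB as not). A small sketch of that check, not something either poster wrote:

```python
# Sketch: is a capacity/bus-width combination buildable from uniform GDDR6 chips?
AVAILABLE_DENSITIES_GB = {1.0, 2.0}   # densities the thread treats as available; no 1.5GB parts

def per_chip_gb(total_gb: float, bus_bits: int) -> float:
    chips = bus_bits // 32            # one 32-bit channel per chip
    return total_gb / chips

for total, bus in [(12, 384), (16, 384), (18, 384), (14, 320), (15, 320)]:
    density = per_chip_gb(total, bus)
    ok = density in AVAILABLE_DENSITIES_GB
    verdict = "valid with uniform chips" if ok else "needs mixed or unavailable densities"
    print(f"{total}GB on a {bus}-bit bus -> {density:.3f}GB per chip ({verdict})")

# 12GB / 384-bit -> 1.0GB per chip   (valid: the Xbox One X layout)
# 16GB / 384-bit -> 1.33GB per chip  (only possible by mixing 1GB and 2GB chips)
# 18GB / 384-bit -> 1.5GB per chip   (needs the 1.5GB parts the thread says don't exist)
# 14GB / 320-bit -> 1.4GB per chip   (not a real density)
# 15GB / 320-bit -> 1.5GB per chip   (again needs 1.5GB parts)
```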



Pemalite said:
HollyGamer said:

Dude, stop mixing all the quotes.

No.

HollyGamer said:

Dude, stop mixing all the quotes.

"Many effects are driven by the CPU. - Thus you can retain the same game design with a weaker CPU." ??? Your statement is conflicted . If manny effect are driven by CPU that having a weaker CPU will make scalability game design imposible on weaker CPU. Thus i said that having Xbox One (jaguar)  for designing game  will be hampering all the unique feature and tech from Ryzen that could be utilized. I am not talking about just grapich alone but game design consiste, game concept, level, map, AI, Physicist, etc etc. (you agree with this)

It's called turning those effects off to increase CPU headroom or shifting those effects over to another processor. - The game fundamentally operates the same.

Take, for example, GPU-accelerated particle effects. Typically these were done on the CPU (actually, most games still put them on the CPU); moving them to the GPU removed the burden from the CPU in idTech-powered games, so the CPU can be tasked with other jobs, or it becomes easier to hit 60fps if you are CPU-bound rather than GPU-bound.

Conversely, many games use the CPU for post-process effects, like Anti-Aliasing, Blur and more; you can change the type of AA your game uses in order to reduce the CPU burden.

Console games are all about a game of "Compromise". - There is always something you can reduce/remove/move to another chip in order to reduce your hardware demands, which is why developers are able to take a game that runs on a 2.3Ghz 8-core Jaguar CPU in the Xbox One X and scale it downwards to the 1Ghz 4-Core Switch CPU.
It's fundamentally the exact same game.

HollyGamer said:

" What gives you the impression that Lockhart will just be a rebadged Playstation 4 Pro? " I never said that, it's from Jason Kotaku said Lockhart GPU is equal to PS4 pro according to his resource  ". But i am not saying it will have the same capability as PS4 pro in terms of overall performance or game design. And i am not complaining about Lockhurt.

Good. Because if you were asserting that Lockhart will be equivalent to the Playstation 4 in overall capability based on flops alone... Then you would certainly be highly misinformed.

HollyGamer said:

I agree scalability can work for game design if it's scaling down, but not scaling up. For graphics it's possible, but it still requires more time even when using middleware and a good API. Scaling graphics down from Anaconda to Xbox One is possible; scaling games up from Xbox One to Anaconda is easy, but the games are still just Xbox One games. Because Microsoft wants parity across their platforms, which is like how PC has been (scaling up only resolution and frame rates, hence their insistence on 120fps/4K), they don't care about next gen; there is no more next gen. Anaconda will be just a glorified PC if they focus on cross-gen.

Scaling up happens all the time. PC thrives on it.

Anaconda is using the exact same hardware base as the Playstation 5. Thus if Anaconda is a glorified PC, then so is the Playstation 5.

I would argue that the consoles are just semi-custom PC's at this point anyway, when looking at it from a hardware perspective.

HollyGamer said:

I just want them to focus on Scarlett and leave the Xbox One in the dust. Sony might do the same with the PS5, but according to Jason they are ready to move their game design to the PS5 as the baseline.

I would love for the umbilical cord to the Xbox One to be cut. - But that console will likely have a customer base of around 50 million by the time Anaconda is fully entrenched in the marketplace. (I.E. a few years from now.)
You want those customers (and more) to transition over, and Microsoft is good at making money, so I am sure they will make the best decision for themselves either way.
Personally, I couldn't care less. PC games aren't any "lesser" for supporting old hardware; the PC still has the best looking games on the market if you have the hardware to push it.

Trumpstyle said:
We now have Jason Schreier, the FLUTE leak, the Microsoft E3 video, The Verge saying Anaconda is above 10TF, and a verified insider saying both the PS5 and Xbox Anaconda are above 10TF. Add three people who have contacts with game developers (Kleegamfan, Colin Mccarthy, Andrew Reiner) saying the PS5 is above Anaconda, and what conclusion can we draw?

That the discussion of whether the PS5 or Anaconda is below 10TF is over; the people who thought below 10TF are defeated. The next battle is whether the next-gen consoles are using TSMC N7P or 7nm+.

I'm on 7nm+, who's with me?

Flops are a useless metric, even more so with next-gen consoles. You need to stop hanging onto them.

For example, flops don't account for the Ray Tracing performance of chips; for all we know Anaconda may have twice the Ray Tracing performance of the Playstation 5... And Ray Tracing is going to be the "big thing" next console generation.

Trumpstyle said:

Yes, Lockhart should land exactly at 4TF (18 CUs, 1.8GHz), but in benchmarks a 5TF Navi loses to the Radeon 580 for some reason, even though Eurogamer did tests showing Navi has 50% higher performance per teraflop. We'll just have to see what happens.

Edit: Ah, figured it out. Navi is about 20% faster per teraflop compared to the Polaris architecture that is in the Xbox One X and Radeon 580. The 50% number is compared to the original GCN architecture in the PS4 and Xbox One.

It's because you are basing all your assumptions on flops.
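For anyone following the arithmetic in that exchange: the teraflop figures come from CU count and clock, and the Navi-vs-Polaris comparison hinges on the per-teraflop multipliers Trumpstyle quotes. A rough sketch; the 1.2x and 1.5x factors are his numbers, not measurements, and the RX 580 spec is approximate:

```python
# GCN/RDNA-style compute throughput: CUs * 64 shaders * 2 ops per clock * clock (GHz).
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

lockhart_tf = tflops(18, 1.8)          # ~4.15 TF, the "exactly 4TF" figure above
rx580_tf    = tflops(36, 1.34)         # ~6.17 TF at its approximate boost clock

# Trumpstyle's quoted multipliers: Navi ~1.2x Polaris, ~1.5x original GCN, per teraflop.
navi_5tf_in_polaris_terms = 5.0 * 1.2  # ~6.0 "Polaris-equivalent" TF

print(f"Lockhart: {lockhart_tf:.2f} TF")
print(f"RX 580:   {rx580_tf:.2f} TF vs a 5TF Navi at ~{navi_5tf_in_polaris_terms:.1f} Polaris-equivalent TF")
# Under his own 1.2x figure, a 5TF Navi and an RX 580 land in roughly the same place,
# so a benchmark going either way isn't surprising.
```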

Kristof81 said:

It's not about how many particular GPUs they produced/sold last year. In the PC market you have new GPUs/architectures coming out almost every year and trends change constantly. The same Navi chip is here to stay for at least 5 years, without any major changes in demand. This unchanged design helps a lot in bringing the single-unit price down, as you can forecast and plan production for years to come with ease. Each new chip introduces (or may introduce, depending on how much it differs) a new manufacturing process, and that costs a lot of money. In other words, uncertainty creates cost. Also, no single 3rd-party PC parts manufacturer has such purchasing power as Sony. They can easily sign a 5-year contract for delivery of 50+ million Navi chips, with a very high degree of certainty. I honestly doubt that Asus, MSI and Gigabyte combined could do the same thing for their 20-series GPUs, or any single SKU for that matter.

We get new GPUs every year.
Architectures, however, last a lot longer than that.

VLIW/Terascale lasted from 2007 with the Radeon 2900 series right up to the Radeon 6970 in 2011; 4 years isn't bad.
GCN went from the Radeon 7970 in 2012 right up to the Radeon RX 580 in 2018; 6 years isn't bad.
Heck, it could be argued that Navi is still GCN-based, as it still retains GCN's instructions; it's essentially a hybrid GPU between GCN and RDNA 2.

The larger GPUs get and the more complicated architectures become, the longer an architecture is likely to stick around, as AMD and nVidia need to get a return on their R&D investment.

In general, console manufacturers pay AMD a lump sum for their console GPU design and then pay for the manufacturing; AMD will assist (in return for some money) with shrinking the designs down to a smaller manufacturing node.

The profit margins on AMD's console chips are actually really small compared to high-end GPUs, but boy did they help keep AMD afloat during its Bulldozer days.

That said... there are roughly 75-100 million discrete PC GPUs sold per year, split between nVidia and AMD, covering everything from the lowest end to the highest end; the bulk of them are low-end and mid-range parts.

Ck1x said:

So that's great just talking about theoretical numbers, but what I'm saying is there are plenty of other things that will affect achieving those numbers or going higher. Currently all we have to compare against are AMD's 12nm cards, so we aren't even factoring the transistor-density gain over the 12nm process into the new CPU and GPU for these consoles. This will definitely allow for higher clock speeds and performance, not to mention the higher memory bandwidth these systems should have over current cards...

12nm is just a refined 14nm process. It's advertising at its best.
The Radeon RX 590 is AMD's only consumer 12nm GPU, and it's basically an overclocked Radeon RX 580, which in turn is an overclocked Radeon RX 480... Normally a new node should bring better power characteristics with it, but the RX 590 still consumes more power anyway.



Yeah, sorry, I should have made the distinction clearer: 12nm in comparison to 7nm+ is what I meant as far as transistor density goes...



Barkley said:
Trumpstyle said:

Yep, I know Eurogamer said that, but 1.5GB VRAM sticks are not available. You'd need 12 x 1.5GB = 18GB, and there are no 1.5GB sticks.

Look, there's an easy way to see that your numbers aren't a valid configuration.

Valid configuration: XBO X, 12gb 384-bit bus

12,288 divided by 384 = 32 (CLEAN DIVISION)

Your Configuration: 16gb 384-bit bus

16,384 divided by 384 = 42.666 recurring. Not a clean division, not valid.

Your numbers aren't right, they don't make sense.

You must have missed my previous comment to drkohler: you have 12 GDDR6 sticks, 8 of them 1GB and 4 of them 2GB, which gives 16GB of VRAM. 12 sticks x 32 = 384-bit bus; it's CLEAN. You then run them in FLEX MODE: 12GB of VRAM at full speed for games, 4GB at slow speed for the OS.

Edit: The memory config is based on the Microsoft E3 video, which shows them mixing 1GB and 2GB VRAM sticks. You can't mix 1GB and 2GB sticks and run them all at full speed, so you have to use FLEX MODE.

Last edited by Trumpstyle - on 09 December 2019
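To make the FLEX MODE argument concrete, here is the split Trumpstyle describes, sketched under the same assumptions (8 x 1GB + 4 x 2GB chips on a 384-bit bus; the 14Gbps data rate is a placeholder, and the slow-pool bandwidth is simply the four larger chips on their own):

```python
# Mixed-density GDDR6 layout: 8 x 1GB + 4 x 2GB on one 384-bit bus (12 x 32-bit chips).
# The uniform first 1GB of every chip can be interleaved across the full bus;
# the extra 1GB on each 2GB chip is only reachable through those four chips.
GBPS_PER_PIN = 14.0                            # assumed data rate, not from the thread

chips = [1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2]   # GB per chip
bus_bits = 32 * len(chips)                     # 384-bit

fast_pool_gb = min(chips) * len(chips)         # 12GB, striped across all 12 chips
slow_pool_gb = sum(chips) - fast_pool_gb       # 4GB, only on the 2GB chips
slow_chips   = sum(1 for c in chips if c > min(chips))

fast_bw = bus_bits * GBPS_PER_PIN / 8          # 672 GB/s
slow_bw = slow_chips * 32 * GBPS_PER_PIN / 8   # 224 GB/s

print(f"{bus_bits}-bit bus, {sum(chips)}GB total: "
      f"{fast_pool_gb}GB @ {fast_bw:.0f} GB/s (games), "
      f"{slow_pool_gb}GB @ {slow_bw:.0f} GB/s (OS)")
```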


Pff, looks like it's over: Anaconda > PS5.
PS5 is 40 CUs clocked at 2GHz, as the Oberon leak suggested (10.2TF).
Lockhart: 4TF (18 CUs, 1.8GHz)
Anaconda: 12TF (52 CUs, 1.8GHz)

It's over. This leak, which I thought was 100% accurate, turned out to be almost correct except for the PS5.
https://imgur.com/gallery/i3TnTKk#Xli5Vxu (leaked in January this year)

I might edit my post, but it looks like it's over; just wanted to do a quick post.
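Plugging the leaked CU counts and clocks into the same CUs x 64 x 2 x clock formula shows the quoted teraflop figures are at least internally consistent (a sketch; the specs themselves are the leak's, not confirmed):

```python
# TFLOPS = CUs * 64 shaders/CU * 2 ops/clock * clock (GHz) / 1000
configs = {"PS5 (Oberon leak)": (40, 2.0),
           "Lockhart":          (18, 1.8),
           "Anaconda":          (52, 1.8)}

for name, (cus, ghz) in configs.items():
    print(f"{name}: {cus * 64 * 2 * ghz / 1000:.2f} TF")

# PS5 (Oberon leak): 10.24 TF
# Lockhart:           4.15 TF
# Anaconda:          11.98 TF  (~12 TF)
```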




Trumpstyle said:

You must have missed my previous comment to Drohler, you have 12 gddr6 sticks. 8 of them are 1GB Vram and 4 of them are 2GB Vram, this gives 16GB of vram. 12 sticks x 32 = 384-bit bus, it's CLEAN. You then run them in FLEX MODE, 12GB Vram full speed for games, 4GB slow speed for OS.

Of course I know what flex mode is. For a console, it's not CLEAN, it would be a kludge at best.

A SoC with six gddr6 controllers (=384bits) is expensive, a lot more than a chip with four or five controllers. Gddr6 controllers are not cheap when it comes to die area and power consumption. As an engineer, you want to get away with the fewest needed to do the job.
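To put numbers on that trade-off, here is the controller math drkohler is working from, tabulated (a sketch under his stated assumptions of 64-bit controllers, i.e. two 32-bit chips each, and 16Gbit/2GB parts; the 14Gbps data rate is an added assumption):

```python
# Capacity and bus width as a function of GDDR6 controller count,
# assuming 64-bit controllers (two 32-bit chips each) and 2GB (16Gbit) chips.
CHIP_GB, GBPS = 2, 14.0   # 14Gbps is an assumed data rate

for controllers in (4, 5, 6):
    bus_bits = controllers * 64
    chips = bus_bits // 32
    print(f"{controllers} controllers: {bus_bits}-bit bus, "
          f"{chips * CHIP_GB}GB, {bus_bits * GBPS / 8:.0f} GB/s")

# 4 controllers: 256-bit, 16GB, 448 GB/s
# 5 controllers: 320-bit, 20GB, 560 GB/s
# 6 controllers: 384-bit, 24GB, 672 GB/s
```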

With 16GBit gddr6 parts, four controllers give you 16GByte, five controllers give you 20GByte, six give you 24GByte ram (all full speed at all times). You are suggesting that MS builds a very expensive six controller system with only 16Gbyte ram (not at full speed all times), while a significantly cheaper five controller system would give them 20GBytes, 25% more ram (at full speed any access). I don't think MS engineers/beancounters are anywhere near that dumb.