PS5: Leaked Design And Technology [RUMOUR]


Intrinsic said: 
EricHiggin said: 

What about before the PS3 to PS4 transition? What about PS2 to PS3 or PS1 to PS2? Could PS4 have been more powerful? Could they have launched at a $499 price point? Could PS4 have been given a larger case and better cooling? Were PS and MS begging AMD to find time to work with them and not price gouge them? It's not as simple as what can be done in terms of just hardware.

Yes, it's not as simple as what can be done with just hardware. But since we are all making educated guesses here, that's the best way to go about it. The begging thing and the price gouging thing aren't quantifiable.

As for launch prices, we can also make very informed guesses at what to expect with a $400/$500 box.

No, those are guesses as well, and any company that's doing better now than they were before, especially because they have the best overall product available in their market, isn't likely to make such a good deal this time around. It still may be a good deal based on the options available, but to assume PS5 parts are sold to PS at the same rates as PS4 parts is being pretty darn optimistic.

Informed based on what we can typically assume, but do we go by the humbled 2013 PS or the "world is ours" 2006 PS? Do we assume selling at cost remains the name of the game, with MS looking stronger and stronger behind the scenes and PS burying money since their vaults are full, or do we assume another PS3, or maybe something in between? What will the new PS management decide? PS4 is still $299 and Pro is still $399. How would a streaming console from MS or PS affect the dedicated hardware in terms of specs and price? I see way more questions that need to be answered before we can get a strongly informed idea of what might be coming next gen, and when exactly.

Intrinsic said: 
EricHiggin said: 

With Nvidia and their RTX line, what are the odds AMD has no idea that was in the works, and isn't aiming for something similar? Didn't Cerny mention ray tracing being the holy grail or something like that? What if they try to partially implement that and use a fair amount of resources to push that, instead of everything else they could? Would that also fit under next gen? We all know they like their buzz words. 4k, HDR, so why not ray tracing?

Another question would be how much more powerful than 4.2TF or 6TF is really needed to make a worthwhile jump, in comparison to previous gens? If 4.2TF was worthwhile after just 3 years, why assume they would jump to 14TF after another 3? 10TF would be another 2.3X jump.

That's actually what I meant by "next gen architecture". There would no doubt be modifications or additions to what makes up each compute unit in the GPU. But those modifications wouldn't need to come at the cost of raw old school GPU power. For example, even though the CU count doubled from 20 in the PS4 to 40 in the PS4 Pro, half of the CUs in the PS4 Pro are noticeably bigger than the other half.

Lol.... this thing again. Every gen we look at graphics and people say we don't need much for the next gen, or that this is as good as it's gonna get, and then boom.... we see Horizon or GOW and minds are blown. Anyway, all that going from 1.8TF to 4.2TF did after just 3 years was bump up the resolution. You really don't think the only difference between PS4 and PS5 games will be a higher rez, do you?

RTX has separate hardware features for ray tracing, and those take up space that could be filled with typical old school GPU power. AMD may have their own approach, but it will likely take up some die space regardless.

People were also saying the PS4 APU was weak overall in terms of its specs (not for an APU specifically though), and many of them have been fairly surprised and impressed by what the devs have been able to do with that 'weak' APU. Now if PS wants to really make some more leaps instead of upgrades, they will likely need more than just 10TF next gen, but do they want to make that large of a leap? I myself don't believe they called the Pro a "mid gen upgrade/refresh" by accident.

Intrinsic said: 
EricHiggin said: 

High yields are quite important for a cheaper high volume product like a console. The larger and more complex the chip, the worse the yields. Making sure the fab can fill the demand that product will have is just as important, whether it be yields or capacity. The last thing PS wants is a PS5 flying off shelves, with people constantly complaining they can't get one. If you forecast 10 million sales, but will only be able to produce 5 million due to the fab, that's a pretty big problem. There are other ways, but the CPU/GPU/APU is the prime factor. It's no coincidence that the Pro and slim came out when they shrunk from 28nm to 16nm. Will PS celebrate their 25th anniversary?

This is not how yields work. As funny as it may sound, it can only impact price, not volume.

If your yields are better, you end up with more usable dies per wafer, which lowers the cost per usable die. If your yields are poor, you end up with fewer usable dies per wafer, which raises the cost per usable die. That means if they didn't have enough volume due to poor yields, they would either need to install more production lines and eat the cost, or tell PS they couldn't make enough to fill the forecasted orders, so use another higher-yield process instead. The companies don't swap all fabs at once to the new node. As the new node's yields and orders increase, they install more capacity and increase production to offer a higher volume of cheaper product.
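To put rough numbers on the cost side, here is a minimal sketch in Python; the wafer cost, die count, and yield rates are invented purely for illustration, not real foundry figures:

    # How yield drives both cost per usable die and usable volume.
    # All figures are made up for illustration, not real foundry data.
    wafer_cost = 6000.0   # assumed dollars per wafer
    dies_per_wafer = 160  # assumed candidate dies that fit on one wafer

    for yield_rate in (0.9, 0.5):
        good_dies = dies_per_wafer * yield_rate
        cost_per_good_die = wafer_cost / good_dies
        print(f"yield {yield_rate:.0%}: {good_dies:.0f} usable dies, "
              f"${cost_per_good_die:.2f} each")

    # yield 90%: 144 usable dies, $41.67 each
    # yield 50%: 80 usable dies, $75.00 each

Halving the yield halves the usable dies from the same wafer starts, so hitting a launch forecast means buying more wafer capacity on top of paying more per die.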



The Canadian National Anthem According To Justin Trudeau

 

Oh planet Earth! The home of native lands, 
True social law, in all of us demand.
With cattle farts, we view sea rise,
Our North sinking slowly.
From far and snide, oh planet Earth, 
Our healthcare is yours free!
Science save our land, harnessing the breeze,
Oh planet Earth, smoke weed and ferment yeast.
Oh planet Earth, ell gee bee queue and tee.


There was recently an interview with an AMD rep, and two questions were interesting for us regarding next-gen consoles. It looks like the PS5 and next Xbox will have a chiplet design instead of an SoC/APU.

 

IC: With chiplets connected via IF on Rome, if a customer wanted a semi-custom design with different IP, such as a GPU or an AI block or an FPGA, would that be possible? (Say for example, a console?)

MP: Our semi-custom group is wide open to talk to customers to brainstorm! What excites me about the chiplet approach is that I think it’s going to disrupt the industry. It’s going to change the way the industry dreams of different configurations. Some might be right, and I can guarantee that someone will conjure up ten other ones that we didn’t think of! Honestly I think it is a disruptive force that is just nascent, just starting right now.

IC: With IF on 7nm, it offers 100 GB/s GPU to GPU connectivity. One of your competitors has something similar which allows both GPU-GPU and CPU-GPU connectivity. Currently with Rome, PCIe 4.0 has been announced from CPU to GPU but not IF. What has AMD’s analysis been on that CPU to GPU link?

MP: We haven’t announced applying the IF between the CPU and GPU and while it is certainly feasible, it is likely just dependent when workloads could truly leverage that protocol being applied, when the full coherency is required across both CPU and GPU. It is certainly feasible, but we haven’t announced it at this time.

 

Basically what this means is that Sony and Microsoft can pump more GPU cores into their next consoles. Something like 88 CUs clocked at 1400 MHz, giving 15.7 teraflops, seems very feasible. Probably Vivster or Pemalite can give a better explanation of what a chiplet design is.
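For what it's worth, the arithmetic behind that 15.7 TF figure checks out if you assume GCN-style CUs (64 shaders each) and count a fused multiply-add as two operations:

    # Theoretical FP32 throughput of the hypothetical 88 CU part above,
    # assuming GCN-style CUs: 64 shaders each, 2 FLOPs per clock (FMA).
    cus = 88
    shaders_per_cu = 64
    flops_per_clock = 2
    clock_ghz = 1.4

    tflops = cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000
    print(f"{tflops:.2f} TFLOPS")  # -> 15.77 TFLOPS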

 

The interview:

https://www.anandtech.com/show/13578/naples-rome-milan-zen-4-an-interview-with-amd-cto-mark-papermaster



"Donald Trump is the greatest president that god has ever created" - Trumpstyle

6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

Trumpstyle said:

There was recently an interview with an AMD rep, and two questions were interesting for us regarding next-gen consoles. It looks like the PS5 and next Xbox will have a chiplet design instead of an SoC/APU.

Basically what this means is that Sony and Microsoft can pump more GPU cores into their next consoles. Something like 88 CUs clocked at 1400 MHz, giving 15.7 teraflops, seems very feasible. Probably Vivster or Pemalite can give a better explanation of what a chiplet design is.

While that sounds like a good idea at first, it really isn't practical for something like a console, which has to be cheap to manufacture. Chiplet designs require much more time to design; they need interfaces between the chiplets (and at very high speeds), more testing of individual components, and some other, more obscure things. In the end, the price will be too high, so I expect a monolithic chip again. One chip is still the most cost-effective solution for products that have to be cheap to manufacture.



drkohler said:

While that sounds like a good idea at first, it really isn't practical for something like a console, which has to be cheap to manufacture.

The whole idea of the chiplet design is to reduce manufacturing costs.
Things like I/O, memory controllers and so on... Don't scale down to newer manufacturing nodes very well... So it makes sense to keep them on an older, more mature process that is cheaper.

drkohler said:

Chiplet designs require much more time to design; they need interfaces between the chiplets (and at very high speeds), more testing of individual components, and some other, more obscure things.

AMD has a team dedicated to that very task.
But now that Infinity Fabric is a known quantity... It's very easy for AMD to build that out and take full advantage of it very quickly.

drkohler said:

In the end, the price will be too high, so I expect a monolithic chip again. One chip is still the most cost-effective solution for products that have to be cheap to manufacture.

I am not willing to make an assumption either way until I have more information.

Trumpstyle said:

There was recently an interview with an AMD rep, and two questions were interesting for us regarding next-gen consoles. It looks like the PS5 and next Xbox will have a chiplet design instead of an SoC/APU.

It's a "possibility". - Awhile ago there was a thread where people were making guesses at what the next gen consoles would have... And Ryzen+Infinity Fabric was mentioned as one approach that could be taken.

The big advantage is scalability... Microsoft and Sony could, with relative ease, build an entire lineup of consoles with differing GPU capabilities whilst keeping the CPU+Chipset+I/O+RAM identical.


Trumpstyle said:

IC: With chiplets connected via IF on Rome, if a customer wanted a semi-custom design with different IP, such as a GPU or an AI block or an FPGA, would that be possible? (Say for example, a console?)

MP: Our semi-custom group is wide open to talk to customers to brainstorm! What excites me about the chiplet approach is that I think it’s going to disrupt the industry. It’s going to change the way the industry dreams of different configurations. Some might be right, and I can guarantee that someone will conjure up ten other ones that we didn’t think of! Honestly I think it is a disruptive force that is just nascent, just starting right now.

This was always the case, Anandtech just got clarification.

Trumpstyle said:

IC: With IF on 7nm, it offers 100 GB/s GPU to GPU connectivity. One of your competitors has something similar which allows both GPU-GPU and CPU-GPU connectivity. Currently with Rome, PCIe 4.0 has been announced from CPU to GPU but not IF. What has AMD’s analysis been on that CPU to GPU link?

MP: We haven’t announced applying the IF between the CPU and GPU and while it is certainly feasible, it is likely just dependent when workloads could truly leverage that protocol being applied, when the full coherency is required across both CPU and GPU. It is certainly feasible, but we haven’t announced it at this time.

Obviously playing coy. I don't see any reason why that approach couldn't be taken.


Trumpstyle said:
Basically what this means is that Sony and Microsoft can pump more GPU cores into their next consoles. Something like 88 CUs clocked at 1400 MHz, giving 15.7 teraflops, seems very feasible. Probably Vivster or Pemalite can give a better explanation of what a chiplet design is.

Because the individual chips are smaller, they can get better yields, which means lower costs on the latest manufacturing process.
So in theory it should mean more CUs.
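A minimal sketch of why smaller dies yield better, using the classic Poisson defect model, yield = exp(-defect density x die area); the defect density and die sizes below are assumptions for illustration, not real process data:

    # Poisson yield model: yield = exp(-defect_density * die_area).
    # Defect density and die areas are made-up illustrative values.
    import math

    defect_density = 0.2  # assumed defects per square centimetre

    for area_mm2 in (350, 75):  # big monolithic die vs one small chiplet
        yield_rate = math.exp(-defect_density * area_mm2 / 100)
        print(f"{area_mm2} mm^2 die: ~{yield_rate:.0%} yield")

    # 350 mm^2 die: ~50% yield
    # 75 mm^2 die: ~86% yield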



--::{PC Gaming Master Race}::--

Wonder when Sony and Microsoft will reveal the next generation... one thing is sure, we will hear many rumors about it in the coming months.



ZODIARKrebirth said:
Wonder when Sony and Microsoft will reveal the next generation... one thing is sure, we will hear many rumors about it in the coming months.

Personally, I'll be surprised if at least one of the two doesn't announce their next console by E3 at the latest.



shikamaru317 said:
ZODIARKrebirth said:
Wonder when Sony and Microsoft will reveal the next generation... one thing is sure, we will hear many rumors about it in the coming months.

Personally, I'll be surprised if at least one of the two doesn't announce their next console by E3 at the latest.

Very likely, because the big guns are always shown at E3, and I don't believe they want to wait until 2020 for a reveal.



TranceformerFX said:
Here we go again with the nothing-burger rumors of the PS5... I remember back in 2016 there was PS5 media hype - it's now almost 2019 and the media has gotten it wrong ever since.

The PS5 IS NOT coming in 2019 - Death Stranding, Last of Us 2, and Ghost of Tsushima will need to have launched before the PS5 hits shelves. The PS5 will most likely come out in Nov 2020.

You also can't say for sure that all those titles are indeed PS4 titles. They could have been shifted to the PS5, and you wouldn't know about it. But IMO, a 2019 PS5 is unrealistic.



Pemalite said:
atoMsons said:

This doesn't make sense for this argument. Strange blanket statement.

It makes perfect sense.

atoMsons said:

A CPU only provides a bottleneck in severe cases and there isn't one on the PS4, or the XBO.

Depends. I can point to a ton of games where the CPU is a bottleneck on the Xbox One and PlayStation 4.
The CPU bottleneck will shift depending on the game itself, and sometimes even on the scene being displayed on the screen.

atoMsons said:

It's the GPU doing the majority of the work to produce frames for a video game - 3D pipeline rendering.

The CPU assists in preparing those frames, you know.

atoMsons said:

A CPU never provides 60 frames. A CPU is terrible at rendering 3D pipelines.

The CPU assists with rendering in many game engines... It was especially common in the 7th gen.
Shall I point out the rendering techniques the CPU was doing?

atoMsons said:

You clearly haven't any idea why a GPU bottleneck happens.

That is a bold assertion.
I was obviously "dumbing down" my rhetoric to make it more palatable for the less technical persons that frequent this forum. If you would like me to stop, I would be more than okay to oblige and start being more technically on point.

atoMsons said:

The CPU is responsible for real-time actions, physics, audio, and a few other processes. If the bandwidth can't match that of the GPU, a bottleneck happens and you lose frames that you could actually use. Think of a partially closed dam. All of a sudden the data can't flow fast enough through the dam (CPU) because of a narrow channel.

Yawn.
The CPU is responsible for more than that... And you should probably list them; otherwise it is a little hypocritical if you are going to complain about my statement not being fully fleshed out and then go and do the same.
 

atoMsons said:

Now, 60 FPS is a GPU issue. That simple. This isn't an E8500 running a 1080 Ti.

It is a GPU and a CPU issue. - Sometimes even a RAM issue.

atoMsons said:

PS: Flops ARE everything. It gives a good baseline for performance, even outside of similar architecture in comparison. Just not on a 1:1 ratio in that case (say, NVIDIA/RADEON).

Bullshit, it's not everything.
FLOPS, or Single Precision Floating Point Operations per Second... is a theoretical number.

By that admission alone, Flops is irrelevant... Not only is it irrelevant... Flops tells us absolutely nothing about the hardware's actual capability: it doesn't tell us the amount of bandwidth a chip has, its geometry capabilities, its texturing capabilities, whether it employs any culling to reduce processing load, whether it has various compression schemes like S3TC or Delta Colour Compression, and it tells us nothing of its quarter-precision/double-precision/integer capabilities... It tells us absolutely nothing.
It's just a theoretical number that is calculated by taking the number of pipelines * instructions per clock * clock.
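Plugging the PS4's commonly cited GPU figures into that formula shows where its familiar 1.84 TF headline number comes from:

    # pipelines * instructions per clock * clock, applied to the PS4:
    # 1152 shaders (18 CUs x 64), FMA = 2 ops per clock, 800 MHz.
    shaders = 1152
    ops_per_clock = 2
    clock_ghz = 0.8

    tflops = shaders * ops_per_clock * clock_ghz / 1000
    print(f"{tflops:.2f} TFLOPS")  # -> 1.84 TFLOPS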

********************

I will try and keep this as simple as possible... But let's take the GeForce GT 1030.

DDR4: 884.7 GFLOPS.
GDDR5: 942.3 GFLOPS.

That is a 6.5% difference in GFLOPS... And you said flops is everything.
And here we get to the crux of the issue: GFLOPS doesn't tell us anything else about a GPU, only one theoretical component.
In short... The DDR4 version is often less than half the speed of the GDDR5 version.

But don't take my word for it: https://www.techspot.com/review/1658-geforce-gt-1030-abomination/
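The gap makes sense once you look at memory bandwidth, which the flops figure ignores. Using the commonly listed specs for the two variants (64-bit bus on both; roughly 6 GT/s effective for GDDR5 versus 2.1 GT/s for DDR4 - treat these as approximate):

    # Peak memory bandwidth = bus width in bytes * effective data rate.
    # Specs as commonly listed for the two GT 1030 variants; approximate.
    bus_bytes = 64 // 8  # both variants use a 64-bit memory bus

    for name, rate_gtps in (("GDDR5", 6.0), ("DDR4", 2.1)):
        print(f"{name}: {bus_bytes * rate_gtps:.1f} GB/s")

    # GDDR5: 48.0 GB/s
    # DDR4:  16.8 GB/s -> roughly a 3x gap, despite ~6.5% in flops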

****************

Or how about a different scenario? (There are so many examples I could do this all day.)

How about we grab the TeraScale-based Radeon 5870, which operates at 2.72 teraflops? It should absolutely obliterate the Radeon 7850, which operates at 1.76 teraflops, right? That's almost a whole teraflop of difference, huh? Both AMD-based.
And yet... Again... Flops is irrelevant, as the Radeon 7850 often has a slight edge.
But don't take my word for it: https://www.anandtech.com/bench/product/511?vs=549

Do you want some more examples of how unimportant flops are? I mean, I haven't even started comparing nVidia against AMD yet. Flops is everything, right?

You don't read well. You try so hard to be defensive that you miss the point and don't realize you're wrong. This is the only and last time I'm responding to you.

1. You can't point out many cases on the PS4/XBO with CPU limitations to frames. Those "limitations" are on the AI and on other things you don't really see. You know nothing. Consoles are closed hardware, where the floating point is dead set for developers. You clearly don't understand my dam example, which is as simple as I can make it for a noob. Bandwidth.

2. Why do you say dumb things like "The CPU assists in preparing those frames, you know."? Obviously, if you had read my post, you wouldn't need to say anything like that. It's obvious I know this, as I listed many processes the CPU handles in gaming. But when drawing 3D pipelines, a CPU hardly does any of it. How do you miss my point? I am baffled.

3. The CPU, again, doesn't render very many frames. It's EXTREMELY poor at doing it; that is why it's the brain that sends information to the GPU. The CPU is also responsible for telling the GPU what to do, while the GPU's job is rendering what we see on screen at any given time. Remember, we are simplifying here. The CPU sees and describes the different objects on screen, their location, and other things. That information is converted by said GPU. A GPU's cycle is far more demanding than a CPU's. This is where you see a "bottleneck": if a CPU can't tell the GPU what to do fast enough, the workload is limited in a gaming production role.

4. Frames are NOT a RAM issue today. So many developers are lazy and don't want to load in textures and other things properly. Poor optimization. Can't give you this point at all. This is painfully clear in console gaming. Moving on.

5. Ummm... "The CPU is responsible for more than that... And you should probably list them; otherwise it is a little hypocritical if you are going to complain about my statement not being fully fleshed out and then go and do the same." Mmm... Yeah, I listed some areas. But again, you suck at reading and comprehension. Dropping this point. Yawn.

6. Again, FLOPS are a good indication within similar architectures. WHAT DO YOU NOT GET? And if they aren't the same arch, or close, it just isn't a 1:1 ratio. Again, WHAT DO YOU NOT GET? But it gives a very good estimate of the strength of a GPU. Different architectures work in different ways; that's why it can't be a 1:1 direct correlation, yet the Flops themselves aren't meaningless. Compare the Flops of a new arch to those of an older one. Notice how Flops get higher all the time? So yeah, you can easily draw a hypothesis based on Flops, even a pretty damn accurate one once you take in a few other factors. Moving on.

7. Not sure why you are talking about GPU memory. Memory works differently. Stop Googling arguments. Irrelevant.

8. I'm done. I gain nothing from this. You compared the 5870 and 7850. It's like nothing I said was even processed. I found it funny when you pointed out that they are both from the same company. Oh boy... The same company doesn't use the same chips forever and just slap on a turbo and stickers to make them go faster. LOL!



atoMsons said:

1. You can't point out many cases on the PS4/XBO with CPU limitations to frames. Those "limitations" are on the AI and on other things you don't really see. You know nothing. Consoles are closed hardware, where the floating point is dead set for developers. You clearly don't understand my dam example, which is as simple as I can make it for a noob. Bandwidth.

Sure you can.

If you capture a frame... And see a ton of GPU-accelerated particle effects on screen and note that the framerate is tanking... We can ascertain that the CPU is likely not the driving factor.
However... If the game doesn't leverage GPU-accelerated particle effects and instead uses the CPU to drive the physics and lighting processing for said particles... We can ascertain that the CPU is likely the limiting factor.
And once the particles are done and dusted and no longer on screen, the bottleneck will likely shift away from the CPU.

It's a pretty basic concept really.
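A minimal sketch of that reasoning, assuming you have per-frame CPU and GPU timings from a profiler (the scenes and numbers are invented for illustration):

    # Whichever side takes longer on a frame is the limiter for that
    # frame; the bottleneck shifts scene by scene. Timings are invented.
    frames = [
        {"scene": "quiet corridor",       "cpu_ms": 8.0,  "gpu_ms": 14.0},
        {"scene": "CPU-driven particles", "cpu_ms": 21.0, "gpu_ms": 12.0},
        {"scene": "particles finished",   "cpu_ms": 9.0,  "gpu_ms": 13.0},
    ]

    for f in frames:
        limiter = "CPU" if f["cpu_ms"] > f["gpu_ms"] else "GPU"
        print(f"{f['scene']}: {limiter}-bound "
              f"({f['cpu_ms']} ms CPU vs {f['gpu_ms']} ms GPU)")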

atoMsons said:

2. Why do you say dumb things like "The CPU assists in preparing those frames, you know."? Obviously, if you had read my post, you wouldn't need to say anything like that. It's obvious I know this, as I listed many processes the CPU handles in gaming. But when drawing 3D pipelines, a CPU hardly does any of it. How do you miss my point? I am baffled.

I didn't miss your point. I chose to ignore it, as it wasn't 100% on point and you didn't elaborate on every single aspect.

atoMsons said:

3. The CPU, again, doesn't render very many frames. It's EXTREMELY poor at doing it; that is why it's the brain that sends information to the GPU. The CPU is also responsible for telling the GPU what to do, while the GPU's job is rendering what we see on screen at any given time. Remember, we are simplifying here. The CPU sees and describes the different objects on screen, their location, and other things. That information is converted by said GPU. A GPU's cycle is far more demanding than a CPU's. This is where you see a "bottleneck": if a CPU can't tell the GPU what to do fast enough, the workload is limited in a gaming production role.

No. You are misconstruing my statements to be something they are not.

The CPU assists in rendering... The days where the CPU was tasked with all rendering are long over... For obvious reasons. (I.e. the CPU is good at serialized processing, not parallelized processing like GPUs and graphics.)
However... The CPU still assists in some rendering tasks... Case in point: Morphological Anti-Aliasing was often done on the CPU during the 7th gen, because it was cheap and freed up the GPU.

So the CPU does a lot more than what you so eloquently describe.

atoMsons said:

4. Frames are NOT a RAM issue today. So many developers are lazy and don't want to load in textures and other things properly. Poor optimization. Can't give you this point at all. This is painfully clear in console gaming. Moving on.

I have already proven you wrong on this point with evidence. (GeForce GT 1030.)

RAM is about more than just the data it holds, you know.

atoMsons said:

6. Again, FLOPS are a good indication within similar architectures. WHAT DO YOU NOT GET? And if they aren't the same arch, or close, it just isn't a 1:1 ratio. Again, WHAT DO YOU NOT GET? But it gives a very good estimate of the strength of a GPU. Different architectures work in different ways; that's why it can't be a 1:1 direct correlation, yet the Flops themselves aren't meaningless. Compare the Flops of a new arch to those of an older one. Notice how Flops get higher all the time? So yeah, you can easily draw a hypothesis based on Flops, even a pretty damn accurate one once you take in a few other factors. Moving on.

I have already proven you wrong on this point; I provided the evidence. Case in point: GeForce GT 1030, DDR4 vs GDDR5.
Same architecture. Same GPU. Less than half the performance. Go figure.


atoMsons said:

8. I'm done. I gain nothing from this. You compared the 5870 and 7850. It's like nothing I said was even processed. I found it funny when you pointed out that they are both from the same company. Oh boy... The same company doesn't use the same chips forever and just slap on a turbo and stickers to make them go faster. LOL!

You stated that flops was everything.
Obviously the evidence says you were incorrect.

And considering you haven't provided counter-evidence, we can safely assume at this point that I am correct.

*****

I will also ask you to refrain from making personal jabs in future.



--::{PC Gaming Master Race}::--