
Forums - Sony Discussion - PS5: Leaked Design And Technology [RUMOUR]

Intrinsic said: 
EricHiggin said: 

That's another reason why I think it's not Zen 2. I would say as low as 8TF and as high as 12TF, if it's going to launch around late 2019. I myself however assume 10 since it seems to be the middle ground. I would much rather have a beefier CPU portion than GPU if it requires meeting a certain price point. Slightly better PS4/Pro graphics are fine as long as they lock it at 60FPS when devs want more than 30.

I believe it will be a lot more than 10TF. I have my money on at least 14TF. When Sony went from 28nm to 16nm, not only did their CU count double, the GPU clock also went up by around 15% from the base console. With the XB1X (getting rid of the eSRAM on the SoC gave them more room to work with, and of course they used a better cooling solution), they more than tripled the number of CUs in the GPU and ran it at an even higher clock.

As cool as that sounds, the XB1X GPU has only 4 CUs more than the PS4 Pro but is just clocked much higher. So if you use the XB1X as the baseline, simply going from 16nm to 7nm and keeping the clocks the same will at the very least mean you are by default going from 6TF to 12TF, and this is without considering any other architectural improvements.
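The arithmetic behind that "doubling" claim is easy to sanity-check. For GCN-family GPUs, peak FP32 throughput is CUs × 64 shaders per CU × 2 ops per clock × clock speed. A rough sketch, using approximate public spec figures:

```python
def gcn_tflops(cus, clock_ghz):
    """Peak FP32 TFLOPS for a GCN-style GPU: CUs x 64 shaders x 2 ops/clock."""
    return cus * 64 * 2 * clock_ghz / 1000.0

# XB1X: 40 CUs at ~1172 MHz -> ~6.0 TF
print(round(gcn_tflops(40, 1.172), 1))  # ~6.0
# Doubling transistor budget to 80 CUs at the same clock doubles it
print(round(gcn_tflops(80, 1.172), 1))  # ~12.0
```

This is peak theoretical throughput only; real performance also depends on bandwidth, ROPs, and architecture, which flops alone don't capture.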

14TF in 2019 seems like too big a jump for a console that should sell for around $449, give or take. I could see 14TF if they skimp on the CPU again, but I can't see them going 30FPS for PS5, and I can't imagine they would risk later-gen games not being able to hold close to 60, especially with possible large frame drops by then. Making the CPU plenty strong, and making the GPU just good enough, makes more sense to me. 6 or 8 strong, acceptably clocked Ryzen cores with around a 10TF GPU would be good enough for the initial launch model, assuming they plan on making a Pro version again mid-gen.

XB1X has 6.0TF and can run some games in full 4k. Cerny said they feel they need at least 8.0TF for proper 4k. AMD themselves said something like 7.5TF I think. If PS5 has 10TF, that will be enough that they shouldn't have to worry about checkerboarding until later in the gen, which will be good enough if you only spent $399 in 2019, and can upgrade to Pro in say 2022.

Intrinsic said: 
EricHiggin said: 

If PS5 uses 7nm at launch, they have to hope that 5nm isn't more than 3 or 4 years out for a slim or upgrade though. Intel still being stuck on 14nm makes me wonder. Pro and slim got 16nm chips fairly early, but those were from TSMC. AMD was using GloFo and 14nm. Mind you, AMD is now going to use TSMC for the majority if not all of its 7nm Zen 2 chips, so it's hard to say, depending on whether PS5 uses Zen, Zen+, or Zen 2.

No they don't. If 5nm doesn't come along soonish, it's not the end of the world. And there are other things that contribute to price reductions besides just using a smaller chip.

It's not the end of the world, but when you have always made a slim console, usually 3 to 4 years in, going without one, or a poor attempt because you can't shrink the power and cooling system, will be a downer. If PS was worried about this, I wouldn't be surprised to see them use 10nm at launch, and then 5nm if it's ready for slim, and if not then 7nm.

EricHiggin said: 

M.2 might make sense based on the leak info. You could have a 2TB HDD model, and a 1TB M.2 model, and let the customer choose if they would rather have more storage or more speed. The customers who buy the 1TB model could then simply spend another $50 to $100 on a 4TB+ external HDD if they want more space for cheap. They could probably also spend more and get a 2TB to 4TB (or larger in the future) 2.5" internal HDD to keep things sleek and simple. Whoever buys the 2TB HDD model, may even be able to wait a year or two for M.2 prices to go down, while storage goes up, and install one themselves for more speed.

That is unnecessary. The first thing here is the chosen interface: SATA or M.2?

SATA will mean the best they could ever get will be around 400MB/s, and that's if they put a SATA SSD in the console. SATA 3 is rated for 600MB/s, but that's not a real-world number per se. And if they are going to put in a SATA SSD from launch, then they might as well just go with M.2.

M.2 will mean the best they could get is around 2GB/s+ speeds. But it also means they could use a SATA-based M.2 SSD in there, allowing those that want to go faster (going from around 400MB/s SATA to around 1.8GB/s NVMe) to upgrade their drive.
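To put those interface numbers in perspective, here is a rough load-time comparison. The throughput figures are illustrative sustained rates, not guarantees for any particular drive:

```python
def load_seconds(size_gb, throughput_mb_s):
    """Time to read size_gb of data at a sustained throughput in MB/s."""
    return size_gb * 1024 / throughput_mb_s

# Illustrative sustained read rates (real drives vary widely):
drives = {"5400rpm HDD": 100, "SATA SSD": 400, "NVMe M.2 SSD": 1800}
for name, rate in drives.items():
    print(f"{name}: {load_seconds(4, rate):.1f}s to read 4 GB")
```

Reading 4 GB of assets drops from roughly 40 seconds on an HDD to about 10 seconds over SATA and a couple of seconds over NVMe, which is the gap the interface choice locks in for the whole generation.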

They honestly don't have to put an HDD bigger than 1TB in the console. All they have to do is support external HDDs from day one. It's just all-round better for the platform as far as future-proofing goes if they go with an M.2 drive. Honestly, and this may sound crazy, I can even see them soldering their storage directly onto the PCB and just supporting external storage from day one.

1TB of PCIe 3 based NAND flash storage soldered onto the PCB will make it even cheaper for them to do.

The future proofing and cost savings make sense on one hand, but no internal upgrading is going to be a massive step for PS. I know personally, I'd rather pay a little more to have the ability to swap internal storage. External mass storage isn't exactly a deal breaker, but I would much rather have internal. Even if the slot was just empty internal space for additional 2.5" storage.

fatslob-:O said:

EricHiggin said: 

It's not so much that I think it can't hit 1500MHz, I just find it hard to believe it's going to do so, with a high CU count, at around 10TF, in a PS4 sized console. Maybe it will, and if it does, that'll be something to boast about, if the cooling system is remarkably quieter this time around that is.

Really not that hard to imagine that they could, since they have a new GPU microarchitecture in hand and have 7nm to work with ... 

That's assuming it will definitely be on 7nm, and also assuming PS would rather focus on the absolute highest performance possible, without taking the console design and cost into account. PS4 could have squeezed out more performance if they would have made the shell, PSU, and cooling system larger, but they decided to go straight to 'slim' at launch and have the console sound like a runway sometimes.



EricHiggin said:

That's assuming it will definitely be on 7nm, and also assuming PS would rather focus on the absolute highest performance possible, without taking the console design and cost into account. PS4 could have squeezed out more performance if they would have made the shell, PSU, and cooling system larger, but they decided to go straight to 'slim' at launch and have the console sound like a runway sometimes.

In all honesty it probably will be on 7nm, and the whole logic foundry industry transitioning to EUV lithography makes it even more likely, since there will be some long-term cost savings to be had ... 



fatslob-:O said:
EricHiggin said:

That's assuming it will definitely be on 7nm, and also assuming PS would rather focus on the absolute highest performance possible, without taking the console design and cost into account. PS4 could have squeezed out more performance if they would have made the shell, PSU, and cooling system larger, but they decided to go straight to 'slim' at launch and have the console sound like a runway sometimes.

In all honesty it probably will be on 7nm, and the whole logic foundry industry transitioning to EUV lithography makes it even more likely, since there will be some long-term cost savings to be had ... 

TSMC 7nm isn't EUV at the moment though is it? I thought they were going to transition to 7nm EUV or make that transition at 5nm otherwise.



EricHiggin said:

TSMC 7nm isn't EUV at the moment though is it? I thought they were going to transition to 7nm EUV or make that transition at 5nm otherwise.

They're transitioning to EUV next year with their upgraded 7nm logic node, which will be known as 7nm+, and it would coincide with the potential release of new systems during the last quarter of the year ... 

Heck, Samsung is already there with EUV and products launching early next year based on it ... 



Areaz32 said:

Also, I am pretty certain that no one expected PS4 to do TLOU2 graphics back in 2012 when they released the 7870.

I actually have a high level understanding of the rendering pipeline of TLOU2. It's nothing I didn't expect.

Areaz32 said:

60fps is a CPU problem, not a GPU problem.

False. It is both a CPU and a GPU problem.
If the GPU doesn't have the capability to output at 60fps, it's not going to output at 60fps.
If the CPU doesn't have the capability to output at 60fps, then the GPU is also not going to be able to push out 60fps.
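That "both" claim is just frame-time math: a frame only ships when the slower of the two stages finishes. A minimal sketch, ignoring the CPU/GPU pipelining that real engines use to hide some of this:

```python
def achievable_fps(cpu_ms_per_frame, gpu_ms_per_frame):
    """The slower of the two stages bounds the frame rate."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# Fast CPU (8 ms) but slow GPU (25 ms): GPU-bound at ~40 fps
print(round(achievable_fps(8, 25)))   # 40
# Slow CPU (25 ms) but fast GPU (8 ms): CPU-bound at the same ~40 fps
print(round(achievable_fps(25, 8)))   # 40
# Only when both stages fit under 16.7 ms is 60 fps reachable
print(round(achievable_fps(12, 14)))  # 71
```

Either processor alone can blow the 16.7 ms budget for 60fps, which is why beefing up just one of them doesn't guarantee anything.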

This is the frustrating thing I find with a lot of console gamers who think that framerate is all about the CPU... and that flops are everything. It isn't.

Areaz32 said:

It isn't a conclusion I made through any fallacy. It is an educated suggestion as to how and why it happened. If you were to tell me that Remedy devs are incompetent on the tech side, I would call you a liar. This is the only way to explain why the PC version ran so poorly. They would clearly have fixed it given enough time, as they also did with a lot of the issues afterwards. Most Windows Store games were planned to be Xbox One only for a long time until Microsoft changed their tune.

Educated? Feel free to provide some citations.
Most Windows Store games run like ass anyway... Due to a myriad of reasons. - They also have a ton of technical limitations as well... Shall I educate you on that? More than happy to oblige... That isn't an issue to do with the PC or PC optimization as it's not an issue that tends to be reproduced on say... Steam.
Nor does it mean the Xbox version is the epitome of optimization prowess.

Areaz32 said:

Yep, it has a low-level API for developers that wish to leverage that level of control, but the problem comes in when you want to port it to the PC platform because then you can't directly port it without significant performance pitfalls. Simply put, some of the hardware intricacies of the xbox and playstation aren't available on ALL PC hardware (keyword here is "intricacies") so sometimes you can't rely on async compute or some specific number of scheduled wavefronts in your PC games, simply because you never know if it actually has all those ACE units, ROPs or ALU's or whatever.

False. All of it is false.

Areaz32 said:

Elaborate on "gobbling up more RAM than a desktop OS". Most desktop OSes eat up more RAM depending on how much RAM the machine has. For instance, I am running 32GB of DDR4 and it eats more than 10 gigs on caching etc. It will free that RAM up if I actually start using it in a game, or some other application, but there you have it.

Basically, what you are stating is that the PC is making more efficient use of available resources in order to expedite operations.
That is a good thing.
But when it comes to crunch time, the PC will evict that cached memory in order to give priority to the application.

Again... Windows will happily operate on ~2GB or less of DRAM. - With the consoles you have no choice in the matter, they have a fixed amount in reserve that is unchanging.
Which means... Your original claim that PC OS's are more memory intensive than consoles is blatantly false... You are just unwilling to admit you were incorrect on your prior assertion.

I mean... Windows 7 is happy with 1GB of RAM if the disk (i.e. an SSD) is fast enough.


Areaz32 said:

Elaborate on what you mean when you say "more lean". I didn't imply that it is automatically better than what PC offers. What I am arguing is that utilizing different specific hardware in a specialized way is preferred over using random hardware, as random hardware would have to brute-force a ton of random stuff just to get the same result.

The PC isn't "random hardware". - There are standards that are adhered to...
And there are a fixed amount of companies, building the same iterative updated hardware on a yearly cadence... It's a known quantity at this point.

Your claim may have held some water back in the 90's/early 2000's when there were over a half dozen CPU companies and a half dozen GPU companies.

Areaz32 said:

As an example, emulators are extremely inefficient.

Inefficient? No.
Do you even understand how an emulator even works?

Areaz32 said:

It is a similar principle. The emulation overhead itself can be lightweight but when it comes to executing the code that was built for specific hardware, then it becomes way slow.

Emulators work by taking an instruction from the original hardware and attempting to "reinterpret" it as a different hardware's instructions.
Sometimes the emulator needs to take that single instruction and chop it up into multiple instructions, translating it into instructions the new hardware can understand for execution.

What that essentially means is that... A single instruction done per clock would take multiple clocks to perform on the different hardware. - That's not because of any special "overheads". - It's just the nature of emulation itself.
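That expansion can be sketched with a toy translation table. The instruction names here are invented for illustration; real emulators are far more sophisticated, using JIT recompilation and translation caching to soften the cost:

```python
# Toy translator: each guest instruction expands into one or more host
# instructions, so one guest clock's worth of work costs several host clocks.
# All instruction names below are made up for illustration.
TRANSLATION_TABLE = {
    "GUEST_MULADD": ["HOST_MUL", "HOST_ADD"],              # no fused op on host
    "GUEST_LOAD":   ["HOST_LOAD"],                         # lucky 1:1 mapping
    "GUEST_VPERM":  ["HOST_SHUF", "HOST_AND", "HOST_OR"],  # emulated in 3 steps
}

def translate(guest_program):
    host_program = []
    for instr in guest_program:
        host_program.extend(TRANSLATION_TABLE[instr])
    return host_program

guest = ["GUEST_LOAD", "GUEST_MULADD", "GUEST_VPERM"]
host = translate(guest)
print(len(guest), "guest instructions ->", len(host), "host instructions")  # 3 -> 6
```

Three guest instructions become six host instructions here, which is the "multiple clocks per original clock" cost described above.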

Microsoft got around that problem somewhat with the Xbox One emulating Xbox 360 games by taking a multitude of approaches.

For starters... The Xbox One has a few Xbox 360 features already baked into its hardware... So that's one overhead reduction.
Microsoft also uses the power of virtualization to virtualize the Xbox 360 environment on the Xbox One... And abstracts the Xbox 360 hardware.
Some things are, of course, emulated, but not as much as you think.

And then of course... Microsoft repackages the games. - In short... That is how Microsoft was able to emulate 3x 3.2GHz hyperthreaded PowerPC cores on 6x 1.75GHz Jaguar cores.

With emulation, the typical approach the PC takes is a brute-force approach... And that would literally be impossible on consoles due to their anemically pathetic CPUs.

However, the Xbox One approach isn't impossible on PC either... In-fact some emulators take that exact approach as the original platform was an open one, not a closed one... Which means that the emulator developers could get the low-level information they need to build their emulators. - In-fact... One company leverages this to a degree on PC to repackage PC games... I will leave it to you to make an "educated" guess.

Areaz32 said:

A laptop I had from 2013 buckled hard in the attempt to render some enemies with a special specular mapping in the later stages of Persona 4 for instance. It dropped to under 10fps and it was a laptop with two GT 650m SLI and an i7. Granted these specs aren't anything special today, but it would still amount to several times the power of a PS2.

See above.

 

Areaz32 said:

I wasn't arguing that PC as a whole cannot achieve these features. I am just saying that developers cannot use these features on PC in any significant way, because they never know if the end user even HAS IT. 

 

Sure they can. And throughout history there have been many examples where developers have leveraged PC-specific features.
For example... Tessellation is something that only just became prevalent this console generation... However, the PC had that technology during the PlayStation 2 era.

It is actually the reason why abstraction exists... To standardize specifications for developers and expose standard functions.

Areaz32 said:

The advantages have mostly shown itself in the first party space. Media Molecules game, for instance, has completely done away with the rasterization hardware (ROPs) and they are using a cutting-edge SDF solution for their graphics. If they didn't have those ACE units they would never have been able to go with this SDF solution. The Tomorrow Children is also a game that uses cascaded voxel cone tracing. No one else was doing that back in 2014. Here is a GDC PDF from early 2015 

...And?
Remember this on PC?
https://www.youtube.com/watch?v=00gAbgBu8R4

Areaz32 said:

The take away here is that, if this game was multiplat, it would most definitely mean that they would be forced to use more traditional technology. Unless they only want to target that very narrow segment of the PC market that had GPU's with those exact features.

No it doesn't.
The PC can do everything the consoles can do... They simply do it better.

Last edited by Pemalite - on 11 November 2018

--::{PC Gaming Master Race}::--

fatslob-:O said:
EricHiggin said:

TSMC 7nm isn't EUV at the moment though is it? I thought they were going to transition to 7nm EUV or make that transition at 5nm otherwise.

They're transitioning to EUV next year with their upgraded 7nm logic node, which will be known as 7nm+, and it would coincide with the potential release of new systems during the last quarter of the year ... 

Heck, Samsung is already there with EUV and products launching early next year based on it ... 

Sure, but if Ryzen 2 is on 7nm next year, assuming that PS5 will end up on 7nm+ by year's end seems like a far stretch. We also don't know what kind of capacity TSMC will have at that time. If they can't guarantee tens of millions of chips early on in the gen on that process, with high yields, then PS won't use it.

Samsung is a different company with their own foundries so that doesn't mean much in terms of TSMC other than competition. How widely available and expensive are those products though?



Pemalite said:

False. All of it is false.

Pretty impossible to argue with you when you don't engage with the points and present a different perspective. You put forth no concrete examples of why I am wrong. Only vague "Yeah but what about this game" without explaining the situation in enough detail so I can make the same logical conclusion as you.

What you are saying is only the case with theoretical potential, but the reality of game development doesn't function merely in theory, it works in practice. 

I can also prove my position in a different way then. 

Nvidia's RTX has fixed-function hardware on the silicon and you need to use specialized code to implement it in your ray-tracing pipeline. If you try to use the same code on hardware that doesn't have the fixed-function BVH accelerator, then you would have to brute-force it. This would, in turn, result in extremely bad performance.

It is the EXACT same principle with console games.

If you cannot understand this, then I think we are done arguing. 



EricHiggin said:

14TF in 2019 seems like too big a jump for a console that should sell for around $449, give or take. I could see 14TF if they skimp on the CPU again, but I can't see them going 30FPS for PS5, and I can't imagine they would risk later-gen games not being able to hold close to 60, especially with possible large frame drops by then. Making the CPU plenty strong, and making the GPU just good enough, makes more sense to me. 6 or 8 strong, acceptably clocked Ryzen cores with around a 10TF GPU would be good enough for the initial launch model, assuming they plan on making a Pro version again mid-gen.

You are looking at it all wrong.

Look at it this way; let's look at what we know.

Going from 28nm to 16/14nm, Sony were able to double the size of their GPU and up its clock by over 10%. Now if we ignore any architectural advancements and just halve the fab process again (so a similar-sized SoC {slightly smaller in truth} and twice the number of transistors), that means the PS5 will have at the very least 72CUs. And if taking a page from the XB1X, 80CUs. At the very least. And even if downclocking the GPU (like they have always done) but still running at a higher clock than what we have in the XB1X, then you are looking at an 80CU GPU running at around 1300MHz (up about 10% from the XB1X GPU clock).

That right there is already putting you in the 13-14TF range. And all this is if we are basing it off the current console GPU architecture, which we shouldn't, because Navi will be more efficient than Polaris.
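Running the same peak-FLOPS formula over these speculative figures shows where the estimate lands (both the CU count and the clock are guesses from the post, not confirmed specs):

```python
def gcn_tflops(cus, clock_mhz):
    """Peak FP32 TFLOPS: CUs x 64 shaders x 2 ops per clock x clock."""
    return cus * 64 * 2 * clock_mhz / 1_000_000

# 80 CUs at ~1300 MHz (speculative)
print(round(gcn_tflops(80, 1300), 1))  # 13.3
# A modestly higher clock pushes it past 14 TF
print(round(gcn_tflops(80, 1380), 1))  # 14.1
```

So 80 CUs at 1300MHz works out to roughly 13.3TF on this formula; hitting a clean 14TF would need closer to 1380MHz, before counting any Navi efficiency gains.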

As for the CPU, if this gen has taught them anything, it's that they don't need the world's strongest CPU. All they need is a better CPU than Jaguar, and Jaguar is a very, very low bar to clear. They could easily throw a Ryzen 2 based 6 core/12 thread CPU into that SoC and it would be over 5 times more powerful than Jaguar, while running at a clock of around 2.5GHz to 3GHz.

And lastly, let's talk about good enough. Whatever they come out the gate with sets the tone for the next 7-8 years. The Pro version isn't a target, it's just a bonus.

EricHiggin said: 

XB1X has 6.0TF and can run some games in full 4k. Cerny said they feel they need at least 8.0TF for proper 4k. AMD themselves said something like 7.5TF I think. If PS5 has 10TF, that will be enough that they shouldn't have to worry about checkerboarding until later in the gen, which will be good enough if you only spent $399 in 2019, and can upgrade to Pro in say 2022.

No. If you take a "PS4" game (1.8TF GPU) then yes, you will need around 8TF (in truth probably a little less, because not everything in the render pipeline scales up with resolution). But a PS5 game will be very different from a PS4 game. Geometry will be more complex, shaders will be more complex, shadows, lighting... everything. That's what makes it "next gen".

To put this into context: the PS4 GPU was over 9 times more powerful than the PS3 GPU from a flops perspective. Yet it still runs games at 30fps and 1080p. But its 1080p/30fps looks a lot better than 1080p/30fps on the PS3.

EricHiggin said: 

It's not the end of the world, but when you have always made a slim console, usually 3 to 4 years in, going without one, or a poor attempt because you can't shrink the power and cooling system, will be a downer. If PS was worried about this, I wouldn't be surprised to see them use 10nm at launch, and then 5nm if it's ready for slim, and if not then 7nm.

There are other ways to cut the cost of building the console besides just shrinking the SoC. And again, Sony or MS wouldn't make a console today with the primary driver being the ability to shrink it in 3 years' time because of tradition... They would always make the best console they can at the price point they are trying to hit. Or at times even make a better console than the price they are selling it for and eat a loss on each unit sold. They wouldn't use 10nm or 12nm if 7nm chips are available. They pay by the wafer. It literally costs them less to use a smaller node, so why would they pay more for a larger node? So they can get to pay less 3 years later instead?

The only reason Sony would go for a 12nm or 14nm chip is if, for some reason, even after 18 months of making them, the yields on the 7nm chips are so low that it is actually cheaper to go with a more mature fabrication process. Because remember, they pay by the wafer. If they can only get 30 working chips from a 100-chip wafer, then they are better off going with a more mature node and getting 40 working chips from a 50-chip wafer (mind you, both wafers are the same size; you just get more chip candidates when using a smaller fab process).
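The per-chip economics in that example work out like this (the wafer price is a made-up round number just to show the mechanism; in reality leading-edge wafers also cost more per wafer, which strengthens the point):

```python
def cost_per_good_chip(wafer_cost, chips_per_wafer, yield_rate):
    """Effective cost of each working chip when you pay per wafer."""
    return wafer_cost / (chips_per_wafer * yield_rate)

WAFER_COST = 10_000  # hypothetical flat price per wafer

# New node: 100 chip candidates per wafer, but only 30% working
print(round(cost_per_good_chip(WAFER_COST, 100, 0.30)))  # 333 per chip
# Mature node: only 50 candidates per wafer, but 80% working
print(round(cost_per_good_chip(WAFER_COST, 50, 0.80)))   # 250 per chip
```

Even with half the chip candidates per wafer, the mature node comes out cheaper per working chip until the new node's yields recover.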

EricHiggin said: 

The future proofing and cost savings make sense on one hand, but no internal upgrading is going to be a massive step for PS. I know personally, I'd rather pay a little more to have the ability to swap internal storage. External mass storage isn't exactly a deal breaker, but I would much rather have internal. Even if the slot was just empty internal space for additional 2.5" storage.

 

You and me both. But as you know, it's about costs. Will it cost Sony less to build 750GB to 1TB of super-fast storage into the PS5 than to put in a user-upgradeable SSD? Yes.



 

Areaz32 said:

The advantages have mostly shown itself in the first party space. Media Molecules game, for instance, has completely done away with the rasterization hardware (ROPs) and they are using a cutting-edge SDF solution for their graphics. If they didn't have those ACE units they would never have been able to go with this SDF solution. The Tomorrow Children is also a game that uses cascaded voxel cone tracing. No one else was doing that back in 2014. Here is a GDC PDF from early 2015 

...And?
Remember this on PC?
https://www.youtube.com/watch?v=00gAbgBu8R4

Just wanted to point out that although it doesn't invalidate your whole argument, these are not comparable techniques. Voxel cone tracing is a sort of pseudo ray tracing, and Unlimited Detail is a voxel search algorithm that maintains a fixed CPU cost based on screen resolution, translating voxel locations from unlimited numbers of 3D assets to pixel locations on the screen. The only thing they have in common is the use of voxels.

On a side note it is ironic to think of these as the same, because voxel cone tracing produces more realistic lighting, whereas unlimited detail is notorious for its bad lighting.



EricHiggin said:
fatslob-:O said:

They're transitioning to EUV next year with their upgraded 7nm logic node, which will be known as 7nm+, and it would coincide with the potential release of new systems during the last quarter of the year ... 

Heck, Samsung is already there with EUV and products launching early next year based on it ... 

Sure, but if Ryzen 2 is on 7nm next year, assuming that PS5 will end up on 7nm+ by year's end seems like a far stretch. We also don't know what kind of capacity TSMC will have at that time. If they can't guarantee tens of millions of chips early on in the gen on that process, with high yields, then PS won't use it.

Samsung is a different company with their own foundries so that doesn't mean much in terms of TSMC other than competition. How widely available and expensive are those products though?

I have no idea why you guys seem to think the PS5 is releasing next year. I am going to say 2020 at the earliest. There is absolutely no reason for Sony to release next year.