PS5: Leaked Design And Technology [RUMOUR]


Areaz32 said:

Also, I am pretty certain that no one expected PS4 to do TLOU2 graphics back in 2012 when they released the 7870.

I actually have a high level understanding of the rendering pipeline of TLOU2. It's nothing I didn't expect.

Areaz32 said:

60fps is a CPU problem, not a GPU problem.

False. It is both a CPU and a GPU problem.
If the GPU doesn't have the capability to render at 60fps, it's not going to output 60fps.
If the CPU doesn't have the capability to feed the GPU at 60fps, then the GPU is also not going to be able to push out 60fps.

This is the frustrating thing I find with a lot of console gamers who think that framerate is all about the CPU... and that flops are everything. Neither is true.
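The point above can be sketched with a toy model: in a pipelined renderer, the frame rate is capped by whichever of the CPU and GPU takes longer per frame. The numbers here are purely illustrative:

```python
def max_fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frame rate is bounded by the slower stage of the pipeline."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# A fast CPU cannot rescue a slow GPU...
print(max_fps(cpu_ms=8.0, gpu_ms=25.0))   # GPU-bound: 40 fps
# ...and a fast GPU cannot rescue a slow CPU.
print(max_fps(cpu_ms=25.0, gpu_ms=8.0))   # CPU-bound: 40 fps
```

Either side of the pipeline taking 25ms caps the output at 40fps, no matter how fast the other side is.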

Areaz32 said:

It isn't a conclusion I made through any fallacy. It is an educated suggestion as to how and why it happened. If you were to tell me that Remedy devs are incompetent on the tech side, I would call you a liar. This is the only way to explain why the PC version ran so poorly. They would clearly have fixed it given enough time, as they also did with a lot of the issues afterwards. Most Windows Store games were planned to be Xbox One only for a long time until Microsoft changed their tune.

Educated? Feel free to provide some citations.
Most Windows Store games run like ass anyway, due to a myriad of reasons. They also have a ton of technical limitations... Shall I educate you on that? More than happy to oblige. That isn't an issue with the PC or PC optimization, as it's not an issue that tends to be reproduced on, say... Steam.
Nor does it mean the Xbox version is the epitome of optimization prowess.

Areaz32 said:

Yep, it has a low-level API for developers that wish to leverage that level of control, but the problem comes in when you want to port it to the PC platform, because then you can't directly port it without significant performance pitfalls. Simply put, some of the hardware intricacies of the Xbox and PlayStation aren't available on ALL PC hardware (keyword here is "intricacies"), so sometimes you can't rely on async compute or some specific number of scheduled wavefronts in your PC games, simply because you never know if it actually has all those ACE units, ROPs or ALUs or whatever.

False. All of it is false.

Areaz32 said:

Elaborate on "gobbling up more RAM than a desktop OS". Most desktop OSes eat up more RAM depending on how much RAM the machine has. For instance, I am running 32GB of DDR4 and it eats more than 10 gigs on caching etc. It will free that RAM up if I actually start using it in a game, or some other application, but there you have it.

Basically, what you are stating is that the PC is making more efficient use of available resources in order to expedite operations.
That is a good thing.
But when it comes to crunch time, the PC will evict that cached memory in order to give priority to the application.

Again... Windows will happily operate on ~2GB or less of DRAM. On the consoles you have no choice in the matter; they have a fixed amount in reserve that is unchanging.
Which means your original claim that PC OSes are more memory intensive than consoles is blatantly false... You are just unwilling to admit you were incorrect in your prior assertion.

I mean... Windows 7 is happy with 1GB of RAM if the disk (i.e. an SSD) is fast enough.


Areaz32 said:

Elaborate on what you mean when you say "more lean". I didn't imply that it is automatically better than what PC offers. What I am arguing is that utilizing different specific hardware in a specialized way is preferred over using random hardware, as random hardware would have to brute-force a ton of random stuff just to get the same result.

The PC isn't "random hardware". There are standards that are adhered to...
And there is a fixed number of companies building the same iteratively updated hardware on a yearly cadence... It's a known quantity at this point.

Your claim may have held some water back in the '90s/early 2000s when there were over half a dozen CPU companies and half a dozen GPU companies.

Areaz32 said:

As an example, emulators are extremely inefficient.

Inefficient? No.
Do you even understand how an emulator works?

Areaz32 said:

It is a similar principle. The emulation overhead itself can be lightweight, but when it comes to executing code that was built for specific hardware, it becomes very slow.

Emulators work by taking an instruction from the original hardware and attempting to "reinterpret" it as an instruction for different hardware.
Sometimes the emulator needs to take that single instruction and chop it up into multiple instructions the new hardware can understand for execution.

What that essentially means is that a single instruction done in one clock on the original hardware can take multiple clocks to perform on the different hardware. That's not because of any special "overheads"; it's just the nature of emulation itself.
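As a toy illustration of that "chop it up" step, here is a hypothetical interpreter for a made-up guest ISA (the instruction names and register file are invented for this sketch, not from any real console). A fused guest MULADD expands into two host operations, so one guest clock becomes several host clocks:

```python
def run_guest(program, regs):
    """Tiny interpreter: each guest instruction maps to one or more host ops."""
    host_ops = 0
    for op, *args in program:
        if op == "ADD":                     # 1 guest op -> 1 host op
            d, a, b = args
            regs[d] = regs[a] + regs[b]
            host_ops += 1
        elif op == "MULADD":                # 1 fused guest op -> 2 host ops
            d, a, b, c = args
            tmp = regs[a] * regs[b]         # host multiply
            regs[d] = tmp + regs[c]         # host add
            host_ops += 2
    return host_ops

regs = {"r0": 2, "r1": 3, "r2": 4, "r3": 0}
program = [("MULADD", "r3", "r0", "r1", "r2"), ("ADD", "r3", "r3", "r3")]
print(run_guest(program, regs))  # 3 host ops for 2 guest instructions
```

Real dynamic recompilers translate blocks ahead of time rather than dispatching one instruction at a time, but the instruction-count inflation is the same.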

Microsoft got around that problem somewhat with the Xbox One emulating Xbox 360 games by taking a multitude of approaches.

For starters... the Xbox One has a few Xbox 360 features already baked into its hardware... So that's one overhead reduction.
Microsoft also uses the power of virtualization to virtualize the Xbox 360 environment on the Xbox One... and abstracts the Xbox 360 hardware.
Some things are of course still emulated, but not as much as you think.

And then of course... Microsoft repackages the games. In short, that is how Microsoft was able to emulate 3x 3.2GHz hyperthreaded PowerPC cores on 6x 1.75GHz Jaguar cores.

The typical approach PC emulation takes is brute force... which would literally be impossible on consoles due to their anaemically pathetic CPUs.

However, the Xbox One approach isn't impossible on PC either... In fact, some emulators take that exact approach, as the original platform was an open one, not a closed one... which means the emulator developers could get the low-level information they needed to build their emulators. In fact, one company leverages this to a degree on PC to repackage PC games... I will leave it to you to make an "educated" guess.

Areaz32 said:

A laptop I had from 2013 buckled hard in the attempt to render some enemies with a special specular mapping in the later stages of Persona 4 for instance. It dropped to under 10fps and it was a laptop with two GT 650m SLI and an i7. Granted these specs aren't anything special today, but it would still amount to several times the power of a PS2.

See above.

 

Areaz32 said:

I wasn't arguing that PC as a whole cannot achieve these features. I am just saying that developers cannot use these features on PC in any significant way, because they never know if the end user even HAS IT. 

 

Sure they can. And throughout history there have been many examples where developers have leveraged PC-specific features.
For example... tessellation is something that only just became prevalent this console generation... However, the PC had that technology during the PlayStation 2 era.

It is actually the reason why abstraction exists... To standardize specifications for developers and expose standard functions.
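That abstraction usually surfaces as capability queries: the application asks the driver what the hardware exposes and picks a code path, rather than assuming every user owns the feature. A hypothetical sketch of the pattern (the capability names and function are illustrative, not any real API):

```python
def choose_terrain_path(caps: set) -> str:
    """Pick the best available rendering technique from advertised capabilities."""
    if "hw_tessellation" in caps:
        return "tessellated"        # use the dedicated hardware if present
    if "geometry_shader" in caps:
        return "gs_subdivided"      # slower programmable fallback
    return "static_lod"             # lowest common denominator

print(choose_terrain_path({"hw_tessellation", "geometry_shader"}))  # tessellated
print(choose_terrain_path(set()))                                   # static_lod
```

This is essentially what extension/feature-level queries in real graphics APIs let developers do: ship one game that scales across hardware that may or may not have a given unit.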

Areaz32 said:

The advantages have mostly shown themselves in the first party space. Media Molecule's game, for instance, has completely done away with the rasterization hardware (ROPs), and they are using a cutting-edge SDF solution for their graphics. If they didn't have those ACE units they would never have been able to go with this SDF solution. The Tomorrow Children is also a game that uses cascaded voxel cone tracing. No one else was doing that back in 2014. Here is a GDC PDF from early 2015 

...And?
Remember this on PC?
https://www.youtube.com/watch?v=00gAbgBu8R4

Areaz32 said:

The takeaway here is that, if this game was multiplat, it would most definitely mean that they would be forced to use more traditional technology. Unless they only want to target that very narrow segment of the PC market that had GPUs with those exact features.

No it doesn't.
The PC can do everything the consoles can do... They simply do it better.

Last edited by Pemalite - on 11 November 2018

fatslob-:O said:
EricHiggin said:

TSMC 7nm isn't EUV at the moment though is it? I thought they were going to transition to 7nm EUV or make that transition at 5nm otherwise.

They're transitioning to EUV next year with their upgraded 7nm logic node, which will be known as 7nm+, and it would coincide with the potential release of new systems during the last quarter of the year ... 

Heck, Samsung is already there with EUV and products launching early next year based on it ... 

Sure, but if Ryzen 2 is on 7nm next year, assuming that the PS5 will end up on 7nm+ by year's end seems like a far stretch. We also don't know what kind of capacity TSMC will have at that time. If they can't guarantee tens of millions of chips early on in the gen on that process, with high yields, then PS won't use it.

Samsung is a different company with their own foundries so that doesn't mean much in terms of TSMC other than competition. How widely available and expensive are those products though?



The Canadian National Anthem According To Justin Trudeau

 

Oh planet Earth! The home of native lands, 
True social law, in all of us demand.
With cattle farts, we view sea rise,
Our North sinking slowly.
From far and snide, oh planet Earth, 
Our healthcare is yours free!
Science save our land, harnessing the breeze,
Oh planet Earth, smoke weed and ferment yeast.
Oh planet Earth, ell gee bee queue and tee.

Pemalite said:

False. All of it is false.

Pretty impossible to argue with you when you don't engage with the points or present a different perspective. You put forth no concrete examples of why I am wrong, only a vague "Yeah, but what about this game" without explaining the situation in enough detail for me to reach the same logical conclusion as you.

What you are saying is only the case with theoretical potential, but the reality of game development doesn't function merely in theory, it works in practice. 

I can also prove my position in a different way then. 

Nvidia's RTX has fixed-function hardware on the silicon, and you need to use specialized code to implement it in your ray tracing pipeline. If you try to use the same code on hardware that doesn't have the fixed-function BVH accelerator, then you would have to brute-force it. This would, in turn, result in extremely bad performance.

It is the EXACT same principle with console games.

If you cannot understand this, then I think we are done arguing. 



EricHiggin said:

14TF in 2019 seems like too big of a jump for a console that should sell for around $449, give or take. I could see 14TF if they skimp on the CPU again, but I can't see them going 30FPS for PS5, and I can't imagine they would risk later gen games not being able to hold close to 60, especially with possible large frame drops by then. Making the CPU plenty strong, and making the GPU just good enough, makes more sense to me. 6 or 8 strong acceptably clocked Ryzen cores, with around a 10TF GPU, would be good enough for the initial launch model, assuming they plan on making a Pro version again mid gen.

You are looking at it all wrong.

Look at it this way; let's look at what we know. 

Going from 28nm to 16/14nm, Sony were able to double the size of their GPU and up its clock by over 10%. Now if we ignore any architectural advancements and just halve the fab process again (so a similar-sized SoC, slightly smaller in truth, with twice the number of transistors), that means the PS5 will have at the very least 72 CUs. And if taking a page from the XB1X, 80 CUs. At the very least. And if downclocking the GPU too (like they have always done), but still running at a higher clock than what we have in the XB1X, then you are looking at an 80 CU GPU running at around 1300MHz (up about 10% from the XB1X GPU clock).

That right there is already putting you at over 14TF. And all this is if we are basing it off the current console GPU architecture, which we shouldn't, because Navi will be more efficient than Polaris.
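For reference, the single-precision FLOPS figure being thrown around comes from a simple formula: CUs × 64 ALUs per CU × 2 ops per clock (fused multiply-add) × clock. These are GCN-style numbers, and the 80 CU / 1300MHz scenario is purely hypothetical; a quick sanity check:

```python
def gcn_tflops(cus: int, clock_ghz: float, alus_per_cu: int = 64) -> float:
    """Peak FP32 TFLOPS: ALUs * 2 ops/clock (FMA) * clock."""
    return cus * alus_per_cu * 2 * clock_ghz / 1000.0

print(gcn_tflops(36, 0.911))   # PS4 Pro: ~4.2 TF
print(gcn_tflops(40, 1.172))   # Xbox One X: ~6.0 TF
print(gcn_tflops(80, 1.300))   # hypothetical 80 CU @ 1300 MHz: ~13.3 TF
```

Note that the 80 CU / 1300MHz combination actually lands closer to 13.3 TF than 14 TF; hitting 14+ would need a few more CUs or a higher clock.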

As for the CPU, if this gen has taught them anything, it's that they don't need the world's strongest CPU. All they need is a better CPU than Jaguar, and Jaguar is a very, very low bar to clear. They could easily throw a Ryzen 2 based 6 core/12 thread CPU into that SoC and it will be over 5 times more powerful than Jaguar, while running at a clock of around 2.5GHz to 3GHz. 

And lastly, let's talk about "good enough". Whatever they come out of the gate with sets the tone for the next 7-8 years. The Pro version isn't a target; it's just a bonus.

EricHiggin said: 

XB1X has 6.0TF and can run some games in full 4k. Cerny said they feel they need at least 8.0TF for proper 4k. AMD themselves said something like 7.5TF I think. If PS5 has 10TF, that will be enough that they shouldn't have to worry about checkerboarding until later in the gen, which will be good enough if you only spent $399 in 2019, and can upgrade to Pro in say 2022.

No. If you take a "PS4" game (1.8TF GPU) then yes, you will need around 8TF (in truth probably a little less, because not everything in the render pipeline scales up with resolution). But a PS5 game will be very different from a PS4 game. Geometry will be more complex, shaders will be more complex, shadows, lighting... everything. That's what makes it "next gen". 

To put this into context: the PS4 GPU was over 9 times more powerful than the PS3 GPU from a flops perspective. Yet it still runs games at 30fps and 1080p. But its 1080p/30fps looks a lot better than 1080p/30fps on the PS3.

EricHiggin said: 

It's not the end of the world, but when you have always made a slim console, usually 3 to 4 years in, going without one, or a poor attempt because you can't shrink the power and cooling system, will be a downer. If PS was worried about this, I wouldn't be surprised to see them use 10nm at launch, and then 5nm if it's ready for slim, and if not then 7nm.

There are other ways to cut the cost of building the console besides just shrinking the SoC. And again, Sony or MS wouldn't make a console today with the primary driver being the ability to shrink it in 3 years' time because of tradition... They would always make the best console they can at the price point they are trying to hit. Or at times even make a better console than the price they are selling it for, and bite a loss on each unit sold. They wouldn't use 10nm or 12nm if 7nm chips are available. They pay by the wafer. It literally costs them less using a smaller node, so why would they pay more for a larger node? So they can get to pay less 3 years later instead?

The only reason Sony would go for a 12nm or 14nm chip is if, for some reason, even after 18 months of making them the yields on the 7nm chips are so low that it is actually cheaper to go with a more mature fabrication process. Because remember, they pay by the wafer. If they can only get 30 working chips from a 100-chip wafer, then they are better off going with a more mature node and getting 40 working chips from a 50-chip wafer (mind you, both wafers are the same size; you just get more chips when using a smaller fab process).
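The wafer-economics point can be made concrete: since you pay per wafer, what matters is cost per *working* chip. Using the illustrative die counts above and a hypothetical wafer price (held equal across nodes for simplicity; in reality leading-edge wafers cost more, which only strengthens the point):

```python
def cost_per_good_chip(wafer_price: float, dies_per_wafer: int, yield_rate: float) -> float:
    """You pay for the whole wafer; only the working dies count."""
    good_dies = dies_per_wafer * yield_rate
    return wafer_price / good_dies

WAFER = 10_000.0  # hypothetical price per wafer

small_node = cost_per_good_chip(WAFER, dies_per_wafer=100, yield_rate=0.30)  # 30 good dies
mature_node = cost_per_good_chip(WAFER, dies_per_wafer=50, yield_rate=0.80)  # 40 good dies
print(round(small_node), round(mature_node))  # 333 vs 250: the mature node wins here
```

With 30 good dies versus 40, the "smaller is cheaper" rule inverts, which is exactly the low-yield scenario described above.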

EricHiggin said: 

The future proofing and cost savings make sense on one hand, but no internal upgrading is going to be a massive step for PS. I know personally, I'd rather pay a little more to have the ability to swap internal storage. External mass storage isn't exactly a deal breaker, but I would much rather have internal. Even if the slot was just empty internal space for additional 2.5" storage.

 

You and me both. But as you know, it's about costs. Will it cost Sony less to build 750GB to 1TB of super fast storage into the PS5 than to put in a user-upgradeable SSD? Yes. 



 

Areaz32 said:

The advantages have mostly shown themselves in the first party space. Media Molecule's game, for instance, has completely done away with the rasterization hardware (ROPs), and they are using a cutting-edge SDF solution for their graphics. If they didn't have those ACE units they would never have been able to go with this SDF solution. The Tomorrow Children is also a game that uses cascaded voxel cone tracing. No one else was doing that back in 2014. Here is a GDC PDF from early 2015 

...And?
Remember this on PC?
https://www.youtube.com/watch?v=00gAbgBu8R4

Just wanted to point out that, although it doesn't invalidate your whole argument, these are not comparable techniques. Voxel cone tracing is a sort of pseudo ray tracing, while Unlimited Detail is a voxel search algorithm that maintains a fixed CPU cost based on screen resolution, translating voxel locations from unlimited numbers of 3D assets into pixel locations on the screen. The only thing they have in common is the use of voxels.

On a side note, it is ironic to think of these as the same, because voxel cone tracing produces more realistic lighting, whereas Unlimited Detail is notorious for its bad lighting.
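For readers unfamiliar with the SDF approach mentioned above: a signed distance field gives, for any point, the distance to the nearest surface, and rendering proceeds by "sphere tracing" along each ray in steps of that distance. A minimal single-sphere sketch (this illustrates the principle only, not Media Molecule's far more sophisticated renderer):

```python
import math

def sdf_sphere(p, center, radius):
    """Signed distance from point p to the surface of a sphere."""
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, max_steps=64, eps=1e-4):
    """March along the ray; each step is safe because the SDF bounds the nearest surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf_sphere(p, center=(0.0, 0.0, 5.0), radius=1.0)
        if d < eps:
            return t          # hit: distance along the ray
        t += d                # step forward by the distance to the nearest surface
    return None               # miss

print(sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # hits the front face at t = 4.0
```

Because the SDF is pure math evaluated per ray, it maps naturally onto compute queues rather than the fixed-function rasterizer, which is the connection to the ACE units mentioned above.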



EricHiggin said:
fatslob-:O said:

They're transitioning to EUV next year with their upgraded 7nm logic node, which will be known as 7nm+, and it would coincide with the potential release of new systems during the last quarter of the year ... 

Heck, Samsung is already there with EUV and products launching early next year based on it ... 

Sure, but if Ryzen 2 is on 7nm next year, assuming that the PS5 will end up on 7nm+ by year's end seems like a far stretch. We also don't know what kind of capacity TSMC will have at that time. If they can't guarantee tens of millions of chips early on in the gen on that process, with high yields, then PS won't use it.

Samsung is a different company with their own foundries so that doesn't mean much in terms of TSMC other than competition. How widely available and expensive are those products though?

I have no idea why you guys seem to think the PS5 is releasing next year. I am going to say 2020 at the earliest. There is absolutely no reason for Sony to release next year.



EricHiggin said:

Sure, but if Ryzen 2 is on 7nm next year, assuming that the PS5 will end up on 7nm+ by year's end seems like a far stretch. We also don't know what kind of capacity TSMC will have at that time. If they can't guarantee tens of millions of chips early on in the gen on that process, with high yields, then PS won't use it.

Samsung is a different company with their own foundries so that doesn't mean much in terms of TSMC other than competition. How widely available and expensive are those products though?

Ryzen 2 is already here; it's Ryzen 3 you're talking about, which will feature AMD's 2nd generation Zen microarchitecture and will be based on next generation logic nodes 7nm/7nm+ ... 

Another likely possibility is that next gen console APUs will be based on Ryzen APUs, which always release after the pure Ryzen CPU parts, so that makes it even more likely that 7nm+ is a potential target. TSMC had enough capacity this year to launch dozens of millions of new iPhone XS/XR units, so I'm pretty certain that TSMC would be able to handle the launch of a potential PS5, especially since 7nm+ is a low risk delivery node where TSMC is just upgrading their production lines to use ASML's new EUV scanners ... 

As for Samsung, they aren't just an IDM since they actually do have chip design customers but I think the likelihood of a PS5 launch in 2019 is very slim regardless ... (as for the price, don't be too concerned about it since a short term and long term advantage in cost savings will be observed compared to the last several years where logic foundries had to deal with multiple patterning) 



Areaz32 said:

Pretty impossible to argue with you when you don't engage with the points or present a different perspective.

I would hope I offer a different perspective.
Would be absolutely terrible if everyone just nodded and agreed with everything you present, wouldn't it?

Areaz32 said:

You put forth no concrete examples of why I am wrong. Only vague "Yeah but what about this game" without explaining the situation in enough detail so I can make the same logical conclusion as you.

I gave plenty of examples, e.g. emulation.
Even broke it down and explained why things are the way they are.

Areaz32 said:

What you are saying is only the case with theoretical potential, but the reality of game development doesn't function merely in theory, it works in practice.

False.

Areaz32 said:

I can also prove my position in a different way then.

Nothing has been proven or disproven from either side. You need this thing called "empirical evidence". I haven't bothered to flex my muscles and provide any as of yet... But if your desire is for me to substantiate my claims, then I will request that you start doing the same.

Areaz32 said:

Nvidia's RTX has fixed-function hardware on the silicon, and you need to use specialized code to implement it in your ray tracing pipeline. If you try to use the same code on hardware that doesn't have the fixed-function BVH accelerator, then you would have to brute-force it. This would, in turn, result in extremely bad performance.

How sure are you of that?
Because that fixed-function hardware should be exposed by Nvidia's drivers and abstracted via various APIs.

Which means... AMD can perform the same function on its non-fixed-function pipelines.
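To make the "same function in software" point concrete: a core operation RTX accelerates is ray/box intersection during BVH traversal, and nothing stops a generic ALU from running it as ordinary code, just more slowly. A toy ray/AABB slab test, the kind of kernel a compute-shader fallback would run (this is an illustrative sketch, not Nvidia's or AMD's actual implementation; the second parameter holds the componentwise inverse of the ray direction):

```python
def ray_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray hit the axis-aligned box? Pure ALU math, no fixed-function units."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        tmin = max(tmin, min(t1, t2))   # latest entry across all slabs
        tmax = min(tmax, max(t1, t2))   # earliest exit across all slabs
    return tmin <= tmax

# Ray along (1, 1, 1): hits a box spanning [2, 3] on every axis...
print(ray_aabb((0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3)))  # True
# ...but misses one shifted out of its path on the y axis.
print(ray_aabb((0, 0, 0), (1, 1, 1), (2, 5, 2), (3, 6, 3)))  # False
```

Running millions of these per frame on general-purpose shader cores is the "brute force" path; the RT cores just do the same arithmetic in dedicated silicon.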

Areaz32 said:

It is the EXACT same principle with console games.

If you cannot understand this, then I think we are done arguing.

If that is the way you feel. Have a nice day.

Intrinsic said:

No. If you take a "PS4" game (1.8TF GPU) then yes, you will need around 8TF (in truth probably a little less, because not everything in the render pipeline scales up with resolution). But a PS5 game will be very different from a PS4 game. Geometry will be more complex, shaders will be more complex, shadows, lighting... everything. That's what makes it "next gen". 

To put this into context: the PS4 GPU was over 9 times more powerful than the PS3 GPU from a flops perspective. Yet it still runs games at 30fps and 1080p. But its 1080p/30fps looks a lot better than 1080p/30fps on the PS3.

 

No.
And the PlayStation 4 isn't running all games at 1080p, either.



Pemalite said: 
Areaz32 said:

60fps is a CPU problem, not a GPU problem.

False. It is both a CPU and a GPU problem.
If the GPU doesn't have the capability to output at 60fps, it's not going to output at 60fps.
If the CPU doesn't have the capability to output at 60fps, then the GPU is also not going to be able to push out 60fps.

This is the frustrating thing I find with allot of console gamers who think that framerate is all about the CPU... And Flops is everything. It isn't.

This doesn't make sense for this argument. Strange blanket statement. A CPU only provides a bottleneck in severe cases, and there isn't one on the PS4 or the XBO. It's mostly on the GPU to produce frames for a video game's 3D rendering pipeline. This isn't the Source Engine, which is built by a bunch of idiot monkeys. A CPU never provides 60 frames; a CPU is terrible at rendering 3D pipelines. You clearly haven't any idea why a GPU bottleneck happens. The CPU is responsible for real-time actions, physics, audio, and a few other processes. If its throughput can't match that of the GPU, a bottleneck happens and you lose frames that you could actually use. Think of a partially closed dam: all of a sudden the data can't flow fast enough through the dam (the CPU) because of a narrow channel. 

Now, 60 FPS is a GPU issue. That simple. This isn't an E8500 running a 1080 Ti.

PS: Flops ARE everything. They give a good baseline for performance, even between dissimilar architectures. Just not at a 1:1 ratio in that case (say, NVIDIA vs. RADEON).



Intrinsic said: 
EricHiggin said: 

14TF in 2019 seems like too big of a jump for a console that should sell for around $449, give or take. I could see 14TF if they skimp on the CPU again, but I can't see them going 30FPS for PS5, and I can't imagine they would risk later gen games not being able to hold close to 60, especially with possible large frame drops by then. Making the CPU plenty strong, and making the GPU just good enough, makes more sense to me. 6 or 8 strong acceptably clocked Ryzen cores, with around a 10TF GPU, would be good enough for the initial launch model, assuming they plan on making a Pro version again mid gen.

You are looking at it all wrong.

Look at it this way; let's look at what we know. 

Going from 28nm to 16/14nm, Sony were able to double the size of their GPU and up its clock by over 10%. Now if we ignore any architectural advancements and just halve the fab process again (so a similar-sized SoC, slightly smaller in truth, with twice the number of transistors), that means the PS5 will have at the very least 72 CUs. And if taking a page from the XB1X, 80 CUs. At the very least. And if downclocking the GPU too (like they have always done), but still running at a higher clock than what we have in the XB1X, then you are looking at an 80 CU GPU running at around 1300MHz (up about 10% from the XB1X GPU clock).

That right there is already putting you at over 14TF. And all this is if we are basing it off the current console GPU architecture, which we shouldn't, because Navi will be more efficient than Polaris.

As for the CPU, if this gen has taught them anything, it's that they don't need the world's strongest CPU. All they need is a better CPU than Jaguar, and Jaguar is a very, very low bar to clear. They could easily throw a Ryzen 2 based 6 core/12 thread CPU into that SoC and it will be over 5 times more powerful than Jaguar, while running at a clock of around 2.5GHz to 3GHz. 

And lastly, let's talk about "good enough". Whatever they come out of the gate with sets the tone for the next 7-8 years. The Pro version isn't a target; it's just a bonus.

What about before the PS3 to PS4 transition? What about PS2 to PS3 or PS1 to PS2? Could PS4 have been more powerful? Could they have launched at a $499 price point? Could PS4 have been given a larger case and better cooling? Were PS and MS begging AMD to find time to work with them and not price gouge them? It's not as simple as what can be done in terms of just hardware.

Intrinsic said: 
EricHiggin said: 

XB1X has 6.0TF and can run some games in full 4k. Cerny said they feel they need at least 8.0TF for proper 4k. AMD themselves said something like 7.5TF I think. If PS5 has 10TF, that will be enough that they shouldn't have to worry about checkerboarding until later in the gen, which will be good enough if you only spent $399 in 2019, and can upgrade to Pro in say 2022.

No. If you take a "PS4" game (1.8TF GPU) then yes, you will need around 8TF (in truth probably a little less, because not everything in the render pipeline scales up with resolution). But a PS5 game will be very different from a PS4 game. Geometry will be more complex, shaders will be more complex, shadows, lighting... everything. That's what makes it "next gen". 

To put this into context: the PS4 GPU was over 9 times more powerful than the PS3 GPU from a flops perspective. Yet it still runs games at 30fps and 1080p. But its 1080p/30fps looks a lot better than 1080p/30fps on the PS3.

With Nvidia and their RTX line, what are the odds AMD has no idea that was in the works, and isn't aiming for something similar? Didn't Cerny mention ray tracing being the holy grail or something like that? What if they try to partially implement that and use a fair amount of resources to push that, instead of everything else they could? Would that also fit under next gen? We all know they like their buzz words. 4k, HDR, so why not ray tracing?

Another question would be how much more powerful than 4.2TF or 6TF is really needed to make a worthwhile jump, in comparison to previous gens? If 4.2TF was worthwhile after just 3 years, why assume they would jump to 14TF after another 3? 10TF would be another 2.3X jump.

Intrinsic said: 
EricHiggin said: 

It's not the end of the world, but when you have always made a slim console, usually 3 to 4 years in, going without one, or a poor attempt because you can't shrink the power and cooling system, will be a downer. If PS was worried about this, I wouldn't be surprised to see them use 10nm at launch, and then 5nm if it's ready for slim, and if not then 7nm.

There are other ways to cut the cost of building the console besides just shrinking the SoC. And again, Sony or MS wouldn't make a console today with the primary driver being the ability to shrink it in 3 years' time because of tradition... They would always make the best console they can at the price point they are trying to hit. Or at times even make a better console than the price they are selling it for, and bite a loss on each unit sold. They wouldn't use 10nm or 12nm if 7nm chips are available. They pay by the wafer. It literally costs them less using a smaller node, so why would they pay more for a larger node? So they can get to pay less 3 years later instead?

The only reason Sony would go for a 12nm or 14nm chip is if, for some reason, even after 18 months of making them the yields on the 7nm chips are so low that it is actually cheaper to go with a more mature fabrication process. Because remember, they pay by the wafer. If they can only get 30 working chips from a 100-chip wafer, then they are better off going with a more mature node and getting 40 working chips from a 50-chip wafer (mind you, both wafers are the same size; you just get more chips when using a smaller fab process).

High yields are quite important for a cheaper, high volume product like a console. The larger and more complex the chip, the worse the yields. Making sure the fab can fill the demand the product will have is just as important, whether it be yields or capacity. The last thing PS wants is a PS5 flying off shelves with people constantly complaining they can't get one. If you forecast 10 million sales but will only be able to produce 5 million due to the fab, that's a pretty big problem. There are other ways, but the CPU/GPU/APU is the prime factor. It's no coincidence that the Pro and slim came out when they shrunk from 28nm to 16nm. Will PS celebrate their 25th anniversary?

Intrinsic said: 
EricHiggin said: 

The future proofing and cost savings make sense on one hand, but no internal upgrading is going to be a massive step for PS. I know personally, I'd rather pay a little more to have the ability to swap internal storage. External mass storage isn't exactly a deal breaker, but I would much rather have internal. Even if the slot was just empty internal space for additional 2.5" storage.

You and me both. But as you know, it's about costs. Will it cost Sony less to build 750GB to 1TB of super fast storage into the PS5 than to put in a user-upgradeable SSD? Yes. 

If it were mostly about cost then you could almost guarantee a 2TB HDD. It's also about future proofing like you said earlier, and convenience, etc. Solid state to some degree seems necessary, I don't disagree, but as for the total storage configuration, I wouldn't put my money on anything at this point in time. If I had to guess, I would say they stick with a PS4 like approach, and add some solid state, but that's just me. XB1X was a step in that direction already.

Intrinsic said:
EricHiggin said:

Sure, but if Ryzen 2 is on 7nm next year, assuming that the PS5 will end up on 7nm+ by year's end seems like a far stretch. We also don't know what kind of capacity TSMC will have at that time. If they can't guarantee tens of millions of chips early on in the gen on that process, with high yields, then PS won't use it.

Samsung is a different company with their own foundries so that doesn't mean much in terms of TSMC other than competition. How widely available and expensive are those products though?

I have no idea why you guys seem to think the PS5 is releasing next year. I am going to say 2020 at the earliest. There is absolutely no reason for Sony to release next year.

Based on the rumor. Personally I'm 50/50. I see both good and bad to launching either 2019 or 2020.


