
Prediction: PS5 will have the longest lifecycle of any PS console

Pemalite said:

 

CrazyGPU said:

There are not many shrinks left, and each of them is harder and more expensive.

Ironically... NAND/DRAM has sidestepped this issue for the time being; you should look at what they are doing.
Some NAND manufacturers even started producing their NAND at 55nm rather than, say... 14nm. And the chips were smaller and offered more capacity.
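Rough first-order math behind that trade-off (a sketch only: it treats cell area as proportional to feature size squared and ignores real cell layouts, periphery area, and bits-per-cell differences):

```python
# planar cell area scales roughly with feature_size^2, while 3D stacking
# multiplies bit density by the layer count. illustrative numbers only.

def relative_density(feature_nm, layers=1, baseline_nm=14):
    """Bit density relative to a planar baseline_nm process."""
    return (baseline_nm / feature_nm) ** 2 * layers

print(relative_density(14))              # planar 14nm baseline -> 1.0
print(relative_density(55))              # planar 55nm -> ~0.065x, much worse
print(relative_density(55, layers=64))   # 64-layer 3D at 55nm -> ~4x better
```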

CrazyGPU said:

They need new materials, they are studying silicon replacements, they are getting near molecular sizes, they have more quantum mechanics problems, and machinery in fabs is becoming exponentially expensive. They should get around it, but it will take more time than ever to make a justifiable PS6.

Or. If you can't go smaller... You go with bigger geometry sizes and you go taller like NAND/DRAM.

AMD took note of how die shrinks are starting to stall out... And took advantage of that with Ryzen. So instead of making one giant monolithic chip to rule them all... They took smaller chips that are cheaper to manufacture and stitched them together... And because the chips are smaller, they get more workable chips per wafer.

What you are stating isn't intrinsically wrong, but there are tons of ways to get around the problem that manufacturers are looking at.
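To put rough numbers on the "more workable chips per wafer" point: with a fixed defect density, yield falls roughly exponentially with die area. A toy Poisson defect-yield sketch; the wafer area, die sizes, and defect density below are made-up round figures:

```python
import math

# yield ~ exp(-defect_density * die_area): a simple Poisson defect model.
def good_dies(wafer_mm2, die_mm2, defects_per_mm2=0.001):
    dies = wafer_mm2 // die_mm2                       # ignores edge losses
    yield_rate = math.exp(-defects_per_mm2 * die_mm2)
    return int(dies * yield_rate)

WAFER = 70_000  # roughly a 300mm wafer, in mm^2

print(good_dies(WAFER, 700))       # one big 700mm^2 monolithic die: ~49 good
print(good_dies(WAFER, 180) // 4)  # four 180mm^2 chiplets per package: ~81 packages
```

Same wafer, same defect rate, yet the chiplet approach ships noticeably more working parts.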

I didn't say they are not going to do it or get around the problem by going vertical or with different elements or whatever. I'm just saying that it will be expensive and that it will take time. Look at bandwidth, for example. How old is HBM? HBM2 is great, but it is expensive, hard to find, and implemented in only a few cards. New ways of doing things take time and money. That is another reason why the PS5 will have a long lifecycle.



Shadow1980 said:

As with every other system before it, the PS5 will last as long as sales remain healthy. The console cycle is absolutely predicated on sales, not tech or other factors. Sony can attempt to extend the generation through strategic pricing, spacing out price cuts as long as possible and making them relatively small, but sooner or later it will peak and then enter a period of terminal decline, and once it starts to decline, it will be replaced in about 2 or 3 years. Realistically, the PS5 will last at most 7 or 8 years unless Sony can find a way to stretch things out as long as possible.

Do you remember the answer Shu Yoshida, president of Sony Worldwide Studios, gave when asked about the PS5 for the first time? He said that the question was not when but if. Now we know that it is coming. I expect a PS5, a PS5 Pro, a PS5 "vitaminized"... They can make two PS5 upgrades or more if they need more sales before launching a PS6, again making it last 8-9 years. The question would be if there will be a PS6. I think there will, but it will take time.



CrazyGPU said:

OK, I'll go again with the teraflops thing.

I understand that there are other things besides teraflops in the graphics pipeline. In a GPU you have many cores, decoders, buffers, execution units, texture units, etc. Execution units can be 16-bit, 32-bit, 64-bit, SIMD, or other. Then you have to feed the processor: you have different levels of cache, the memory bus bandwidth, the type of memory, its frequency, ROPs, and so on. It's complex, and they try to balance the hierarchy to feed the processor. The processor performs 32-bit floating-point operations, and we call each one a FLOP.

I know exactly what a flop is.

However... By using regular plain-jane flops you are ignoring half precision, quarter precision, double precision floating point, 8-bit integer, 16-bit integer... The list goes on and on.

And you still haven't been able to explain how it relates to the resolution a game is being rendered at; you have only explained what a FLOP is.
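For reference, the headline "teraflops" figure the two posts are arguing about is just theoretical peak ALU throughput, and the other precisions mentioned above scale off it. A minimal sketch with illustrative round numbers, not any specific GPU:

```python
# peak FLOPS = shader ALUs * clock * 2 (a fused multiply-add = two FLOPs).
def peak_tflops(alus, clock_ghz, flops_per_clock=2):
    return alus * clock_ghz * flops_per_clock / 1000.0

fp32 = peak_tflops(2304, 1.2)   # ~5.5 TFLOPS single precision
fp16 = fp32 * 2                 # packed half precision, where hardware supports it
fp64 = fp32 / 16                # double precision often runs at 1/16 rate
print(fp32, fp16, fp64)
```

Note that none of this says anything about resolution; it is a ceiling on arithmetic throughput, nothing more.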

CrazyGPU said:
It's not precise for comparing graphics card performance, and it's worse if you want to compare different brands and architectures, BUT IT GIVES YOU AN IDEA. And speaking about the same architecture, AMD in this case, we can expect that an 11-13 teraflop AMD graphics card would be able to run 4K at 30 fps.

It is only accurate if all other things are equal. Everything.
But because that almost never happens... It's a useless denominator.

You can still take one 2.7 Teraflop AMD GPU and compare it to a 1.7 Teraflop AMD GPU and that 1.7 Teraflop GPU will win, despite having almost a Teraflop deficit in single precision floating point.

CrazyGPU said:
He showed a slide in a DICE 2012 session with a computational analysis predicting what was needed for next-gen consoles. What did he use for that? Teraflops.

 

Obviously dumbed down for the less tech-literate, of course. And it happens often in the video gaming world. (E.g. the oft-used claim that higher resolutions somehow equate to higher development costs for video games, which is false.)

I have already provided an example that basically undermines your position: the PS2 > PS4 comparison.

CrazyGPU said:
And next gen (PS4) didn't get there; many games didn't run at 1080p 30 fps. He predicted it in 2011.

And if the PlayStation 4 had only a 40GB/s memory bus, it would have been a 720p machine.

Funny how that works, huh?
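A rough way to see that bandwidth point: divide bus bandwidth by pixels per second and you get a per-pixel byte budget, which shrinks linearly as resolution rises. PS4's 176 GB/s is the real figure; the 40 GB/s is the hypothetical from the post above, and the "budget" framing is a simplification that lumps all memory traffic together:

```python
# per-pixel byte budget = bus bandwidth / (pixels per second).
def bytes_per_pixel_per_frame(bw_gb_s, width, height, fps):
    return bw_gb_s * 1e9 / (width * height * fps)

print(bytes_per_pixel_per_frame(176, 1920, 1080, 30))  # ~2800 bytes/pixel
print(bytes_per_pixel_per_frame(40, 1920, 1080, 30))   # ~640 bytes/pixel: starved
print(bytes_per_pixel_per_frame(40, 1280, 720, 30))    # ~1400 bytes/pixel at 720p
```

Dropping to 720p roughly doubles the per-pixel budget back, which is the sense in which a narrow bus makes a "720p machine".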


CrazyGPU said:
Older graphics cards didn't do reflections, shadows, or lighting, and neither did they do transform and lighting. So you had 6 ops per pixel in the first case.

Older graphics processors did support reflections... Such as Cube Environment Mapping/Environment Mapping. It's been a staple for decades.

Transform and Lighting has been a graphics feature for decades; heck, even the Nintendo 64 had a T&L engine.
Pretty much every DirectX 7 part (every single GeForce/Radeon card ever made in the history of graphics except for ATI's RV100 chip) has support for transform and lighting.

The difference between then and now is that... Back then it was all done on fixed function hardware.

CrazyGPU said:
Multiply that by 30 fps and by the resolution, and you need 2.5 teraflops for native 1080p. He is not even talking about GPUs or other stuff. Just TFLOPS.

Here is the thing though. Flops is a theoretical number often unachievable in real world scenarios.


CrazyGPU said:
With his formula, keeping 3 bounces of light, for 4K you would need: 3840 × 2160 × 30 fps × 40,000 = 10 teraflops.
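For concreteness, that back-of-envelope estimate is just pixels × frame rate × ops per pixel. The 40,000 ops-per-pixel figure is the big assumption carried over from the slide; real engines vary wildly, as the reply below notes:

```python
# required FLOPS = pixels * frames per second * shading ops per pixel.
def required_tflops(width, height, fps, ops_per_pixel=40_000):
    return width * height * fps * ops_per_pixel / 1e12

print(required_tflops(1920, 1080, 30))  # ~2.5 TFLOPS for native 1080p30
print(required_tflops(3840, 2160, 30))  # ~10 TFLOPS for native 4K30
```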

And yet, despite that, game engines have gotten more efficient.
Having a tile-based deferred renderer has helped tremendously... Frostbite even implemented light culling, meaning it was able to have more lights with bounces than ever before.
http://www.dice.se/wp-content/uploads/2014/12/GDC11_DX11inBF3_Public.pdf
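A toy illustration of that light-culling idea: split the screen into tiles and keep, per tile, only the lights whose bounds overlap it, so each pixel is shaded against a handful of lights instead of all of them. This sketch uses screen-space circles standing in for the real per-tile frustum tests, and the tile size and light count are made-up:

```python
import random

TILE = 16  # tile size in pixels

def cull_lights(screen_w, screen_h, lights):
    """lights: list of (x, y, radius) in screen space."""
    tiles_x = (screen_w + TILE - 1) // TILE
    tiles_y = (screen_h + TILE - 1) // TILE
    per_tile = [[[] for _ in range(tiles_x)] for _ in range(tiles_y)]
    for i, (lx, ly, r) in enumerate(lights):
        # range of tiles the light's bounding circle can touch
        x0 = max(0, int((lx - r) // TILE)); x1 = min(tiles_x - 1, int((lx + r) // TILE))
        y0 = max(0, int((ly - r) // TILE)); y1 = min(tiles_y - 1, int((ly + r) // TILE))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                per_tile[ty][tx].append(i)
    return per_tile

# 100 small lights on a 1080p screen: most tiles keep only a few of them,
# which is where the shading savings come from.
random.seed(1)
lights = [(random.uniform(0, 1920), random.uniform(0, 1080), 60) for _ in range(100)]
tiles = cull_lights(1920, 1080, lights)
avg = sum(len(t) for row in tiles for t in row) / (len(tiles) * len(tiles[0]))
print(f"average lights per tile: {avg:.1f} (vs 100 without culling)")
```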

CrazyGPU said:
Now, do you want a leap from that? 4 bounces of light? Real global illumination? Real next gen? It won't happen with the PS5. 3 years is nothing.

Global illumination is already a thing.
And there are different types of Global Illumination such as Voxel Cone Global Illumination.
Unreal Engine 4 supports Sparse Voxel Octree Global Illumination for instance. (SVOGI)

http://www.icare3d.org/research-cat/publications/interactive-indirect-illumination-using-voxel-cone-tracing.html
https://www.geforce.com/whats-new/articles/stunning-videos-show-unreal-engine-4s-next-gen-gtx-680-powered-real-time-graphics



CrazyGPU said:
PS: Now, if you don't agree and think that Tim Sweeney's approximation is completely wrong, I have nothing else to say to you.

If you think a video from the last console generation is representative of hardware and the technology we have today, I have nothing else to say to you.



--::{PC Gaming Master Race}::--

Get out much? I didn’t read past teraflops. 

I think you’ll find it’s not all about graphical prowess (see the Nintendo Switch).

I’d rather play on my Switch than anything else at the moment. I’m not obsessed with graphics, and I never have been and never will be.

So what if it’s 4K? Character models are still the same, the environments like trees etc. never change; it’s just higher resolution. The Switch is MORE than adequate to satisfy my needs.



Pemalite said:

 

Or. If you can't go smaller... You go with bigger geometry sizes and you go taller like NAND/DRAM.

AMD took note of how die shrinks are starting to stall out... And took advantage of that with Ryzen. So instead of making one giant monolithic chip to rule them all... They took smaller chips that are cheaper to manufacture and stitched them together... And because the chips are smaller, they get more workable chips per wafer.

What you are stating isn't intrinsically wrong, but there are tons of ways to get around the problem that manufacturers are looking at.

Yes, because Ryzen is just facerolling Intel CPUs when it comes to gaming. Nvidia GPU performance gains are 50-60% every second year.

Vega 64 had a 28% performance improvement compared to the Fury X and was released 3 years later. Technology improvement is going so fast I'm completely exhausted from all the gains.

No, Moore's law is dead. It died in 2012, and one-trick ponies like you suggest will do nothing.



6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

LipeJJ said:
Kristof81 said:

It will be, simply because by that time (2025-2030), there won't be any new traditional consoles, just subscription-based streaming services. The PS5 very likely won't have a physical disc drive, and it will be used as the default device for PlayStation Now until Sony decides to go the multiplatform way.

So you think PS5 will come out only in 2025 and that it will feature only digital games/streaming when physical is still the strongest form factor in 2018?

Sorry if I wasn't clear. 2025-30 would be the approximate release window of a PS6, but it won't happen. I believe that the PS5 will be the very last traditional PlayStation console, whose lifespan might be extended by exclusive streaming services.



Trumpstyle said:

Yes cause Ryzen is just facerolling intel Cpus when it comes to gaming.

Intel has a fabrication process advantage. On top of billions of extra R&D.
I think AMD has done fucking well, all things considered.

Trumpstyle said:

Vega 64 had 28% improvement in performance compared to fury x and was released 3 years later. The technology improvement is going so fast I'm completely exhausted from all the gains.

Vega doesn't have all its functionality enabled.
Draw Stream Binning and primitive shaders still aren't a thing.

That's efficiency AMD is losing out on.
In regards to primitive shaders though... AMD has cancelled that outright for implementation within their driver framework.
Rather, they are building a separate API that developers, if they so choose, can implement. (Not going to happen to any great degree.)

AMD simply doesn't have the resources of either nVidia or Intel, in regards to profit, cash flow, industry connections, OEM contracts, R&D, and so on.
Yet it is trying to do the job of both.

Vega 64 could have been a far better GPU than it was. And hopefully AMD implements Draw Stream Binning sooner rather than later, as Vega sorely requires it; if not... Navi is likely the GPU to bring it to the forefront.

With that said, however, Vega is actually pretty efficient at lower voltages and clocks; AMD drove the GPU too hard, which blew out its efficiency.
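The reason backing off voltage and clocks helps so much: dynamic power scales roughly with frequency × voltage² (P ≈ C·V²·f). The ratios below are illustrative, not measured Vega numbers:

```python
# dynamic power ~ frequency * voltage^2, so a small step down the
# voltage/frequency curve buys a disproportionately large power saving.
def relative_power(clock_ratio, voltage_ratio):
    return clock_ratio * voltage_ratio ** 2

# ~10% lower clock at ~15% lower voltage:
print(relative_power(0.90, 0.85))  # ~0.65 -> roughly a third less power
```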




--::{PC Gaming Master Race}::--