PS5 Coming at the End of 2020 According to Analyst: High-Spec Hardware for Under $500


Poll: Price, SKUs, specs?

Only Base Model, $399, 9-10 TF GPU, 16 GB RAM: 18 votes (26.87%)
Only Base Model, $449, 10-12 TF GPU, 16 GB RAM: 10 votes (14.93%)
Only Base Model, $499, 12-14 TF GPU, 24 GB RAM: 18 votes (26.87%)
Base Model $399 and Premium $499 (specs as in answer 3): 10 votes (14.93%)
Base Model $399 / Premium $549, >14 TF, 24 GB RAM: 5 votes (7.46%)
Base Model $449 / Premium $599, the absolute elite: 6 votes (8.96%)

Total: 67 votes
CrazyGPU said:
CGI-Quality said:

You don’t need raytracing to have substantially better looking games than current ones and geometry will see a sizeable upgrade. I know firsthand that the ‘nothing to write home about’ talk is bollocks.

The PS5 and next Xbox should (and will) have notably better looking games than any current gen console can muster. Won’t need some massive crazy hardware over what’s currently available for that. 

I'm not saying games aren't going to look better; of course they will. What I'm saying is that, in my opinion, it's not going to be revolutionary. It will be an evolutionary step from what we have now.

I expect most games, especially AAA, to be evolutionary steps of current techniques. Yet I fully expect the beginnings of a paradigm shift with polygon/voxel hybrid engines, or even fully voxel ones (the inevitable volumetric future), and that is an actual revolution down the line.



Pemalite said: 
BraLoD said:

Thanks.

I've been looking, and it seems chroma subsampling won't make me lose any quality going from 4:4:4 to 4:2:2 (which should allow 4K 60 fps with HDR on, using HDMI 2.0 full bandwidth), except for some very minor text edges.

My advice... and this is the advice I have given myself for the last several decades... buy the best you can afford today. There is always something better around the corner, so don't waste your time waiting; life is too short.

Yeah, I just don't want to miss any major benefits from next gen, but it looks like I won't.

I saw that advice on some sites when people asked similar questions too, btw, lol.

Thank you, I appreciate it.



CrazyGPU said:

You don't do an exact comparison either.

I have done so in the past.

CrazyGPU said:

You don't know if the techniques are going to save 20% of bandwidth, 40%, or only 10%.

Yes I do.

CrazyGPU said:

You don't know how AMD will implement it inside the new hardware.

There are only so many ways you can skin a cat, especially as AMD is still fumbling around with Graphics Core Next and not an entirely new architecture.

CrazyGPU said:

So you don't know if an uncompressed 512 GB/s stream of data can be compressed down to 480, 384, or 256 GB/s.

Yes I do. AMD and nVidia have the associated whitepapers to back up their implementations... and various outlets have run compression benchmarks.

CrazyGPU said:

So even if you take those techniques into account, you are inaccurate too. It's like comparing Nvidia teraflops to AMD teraflops. The teraflops can be the same amount, but the Nvidia implementation makes use of those theoretical maximum teraflops much better than AMD's in practice right now, so you can't compare different architectures and be accurate. But as you don't have anything else for a proper comparison, you have to go with something. So we compare with what we have: teraflops, GB/s, and so on. And the comparison is better if we compare similar architectures from the same brand.

False. Your understanding of teraflops is the issue here.
An AMD teraflop is identical to an nVidia one... identical.

A flop represents the theoretical single-precision floating point performance of a part.
The reason why nVidia's GPUs perform better than an AMD alternative is simple... it's because of all the parts that have nothing to do with FLOPS.

In tasks that leverage AMD's compute strengths, AMD's GPUs will often beat nVidia's; asynchronous compute is a primary example, although nVidia is bridging the gap there.
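To put a number on it, this is how the theoretical single-precision figure is derived for any GPU, AMD or nVidia alike (a quick sketch; the PS4 figures, 1152 shaders at 800 MHz, are the commonly cited ones):

def theoretical_tflops(shader_cores, clock_ghz, ops_per_cycle=2):
    # One fused multiply-add = 2 floating point operations per cycle
    # per shader core, counted the same way by AMD and nVidia.
    return shader_cores * ops_per_cycle * clock_ghz / 1000.0

# Playstation 4: 1152 shaders at 0.8 GHz.
print(theoretical_tflops(1152, 0.8))  # 1.8432 TF

The number says nothing about ROPs, geometry throughput, bandwidth or scheduling, which is where the real-world differences come from.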

CrazyGPU said:

With your numbers, nearly 0.2 teraflops for the PS3 vs. a little more than 1.8 TF for the PS4 is 9 times more. No way the PS5 will have 9 times the teraflops of the PS4.

Those are your numbers; I never once stated the Playstation 3 was 0.2 teraflops. Nor did I say that the Playstation 4 had 9x the teraflops, and nor did I state the Playstation 5 will have 9x the teraflops either.

CrazyGPU said:

Also, techniques considered or not, the jump from the standard PS4's 176 GB/s to, let's say, 512 GB/s (equivalent to 800 GB/s uncompressed, just to put a number on it) is far smaller than going from the 22.4 GB/s of the PS3 to the 176 GB/s of the PS4. And there is no way a PS5 will have 8 times more bandwidth to feed the processor.

Take note of the resolution a console with 22.4GB/s-25.6GB/s of bandwidth operates at and the one with 176GB/s operates at.

The Playstation 5 will implement Delta Colour Compression.
AMD's Tonga, for instance (first-gen Delta), increased potential bandwidth by 40%... which is why the Radeon R9 285 was able to compete with the Radeon R9 280 despite a 36.36% decrease in memory bandwidth.

nVidia has been improving Delta Colour Compression for years...
The jump from Kepler to Maxwell was a 25% increase in compression (varies from 20-44% depending on patterning).
And from Maxwell to Pascal it was another 20%.

And nVidia has made further improvements since then.

AMD also implemented Draw Stream Binning Rasterization on Vega (although not fully functional yet; with Navi it should be).
And the Primitive Discard Accelerator was a thing starting with Polaris, which discards polygons that are too small before they are rendered.

These are ways that bandwidth and computational capability are conserved.
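As a rough illustration, those compression percentages translate into effective bandwidth like this (a sketch using the best-case figures above; real gains depend on how compressible the frame data is):

def effective_bandwidth(raw_gbs, compression_gain):
    # A "40% increase in potential bandwidth" means the GPU behaves
    # as if it had 1.4x its raw memory bandwidth on compressible data.
    return raw_gbs * (1.0 + compression_gain)

# Tonga-class first-gen Delta Colour Compression on a 176 GB/s bus
# (the Radeon R9 285's raw bandwidth):
print(effective_bandwidth(176, 0.40))  # 246.4 GB/s best case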

CrazyGPU said:

So, the two things that are really important to improve performance and have a balanced graphics architecture, the calculation (teraflops) and the feeding of that calculation (cache, memory bandwidth, theoretical or with techniques), will improve less than they did before, and the improvement will feel less important than before too, even if it were the same.

Teraflops are pretty irrelevant; you can have a GPU with fewer teraflops beat a GPU with more teraflops.
I am pretty sure we have had this "debate" in the past and I provided irrefutable evidence to substantiate my position... but I am more than happy to go down that path again.

CrazyGPU said:

Software is not going to solve that. PS4 performance was always similar to a Radeon HD 7850-7870 on PC, and no exclusive programming changed the graphics capability of the console. And even if it did for you, it never became a GeForce GTX 1060 because of that.

I never made a claim to the contrary. The Playstation 4 and Xbox One provide an experience I would fully expect from a Radeon 7850/7770-class graphics processor, maybe a little better, but not substantially so.

That said, playing a high-end game on high-end hardware is getting to the point of being a generational gap on PC.

CrazyGPU said:

With a 10-12 teraflop PS5 machine, we would have a 5.4-6.5x improvement in theoretical teraflops

And real-world flops? ;)

CrazyGPU said:

and with 800 GB/s of uncompressed bandwidth (if you consider that the PS4 did not compress anything), the improvement will be 4.5 times.

I doubt we will be seeing 800GB/s of uncompressed bandwidth; 512GB/s is probably a more balanced and cost-effective target.

CrazyGPU said:

So again, you will have 4K at 30 fps, 60 in some games, with PS4 graphics and a little more when devs get used to it, but nothing to write home about.

I would expect better than Playstation 4 graphics; efficiency has come a long way since the GCN 1.0 parts... Navi should take it another step farther... I mean, I am not going out and saying Navi is going to usher in a new era of graphics, far from it... it is still Graphics Core Next with all its limitations.

But it's going to be a stupidly large step up over the Radeon 7850-derived parts in almost every aspect.

CrazyGPU said:

A great CPU, hard disk, or anything else is not going to change that. It's not going to be the ray tracing beast with new lighting and geometry many of us would wish for.

The CPU is going to be a massive boon... The hard drive is probably going to be a bit faster, but we are on the cusp of next-generation mechanical disks, which the consoles might not take advantage of initially... otherwise, caching with NAND is a possibility.

And as for ray tracing... games have been implementing ray tracing since the 7th console generation with various deferred renderers... We will be continuing down that path next gen; it will be a slow transition to a fully ray-traced world, and next-gen will just be another stepping stone.

CrazyGPU said:

I'm not saying games aren't going to look better; of course they will. What I'm saying is that, in my opinion, it's not going to be revolutionary. It will be an evolutionary step from what we have now.

We haven't had a "revolutionary" jump since the start of the 6th gen consoles... it's all been progressive, iterative refinement.
I mean, Call of Duty 3 on the Xbox 360 wasn't a massive overhaul over Call of Duty 3 on the Original Xbox.

But the increase in fidelity when you compare the best looking games of each generation is a substantial one.

Halo 4 on the Xbox 360 is a night-and-day difference from Halo: Combat Evolved on the Original Xbox, and Halo Infinite on 8th gen hardware (if the Slipspace demo is any indication) is a night-and-day difference over Halo 4... it has a ton more dynamic effects going on.



CrazyGPU said:
CGI-Quality said:

You don’t need raytracing to have substantially better looking games than current ones and geometry will see a sizeable upgrade. I know firsthand that the ‘nothing to write home about’ talk is bollocks.

The PS5 and next Xbox should (and will) have notably better looking games than any current gen console can muster. Won’t need some massive crazy hardware over what’s currently available for that. 

I'm not saying games aren't going to look better; of course they will. What I'm saying is that, in my opinion, it's not going to be revolutionary. It will be an evolutionary step from what we have now.

Graphics are not going to take the baby steps that some expect. They will be evolutionary at the beginning of next gen (though quite noticeable), but with the pipelines about to make some significant moves, they will be revolutionary by the conclusion of the 9th gen and the beginning of the 10th.




Pemalite said:

…

1- You don't know how much bandwidth the PS5 GPU is going to save using techniques compared to the PS4. The only way you can know that right now is if you work for Sony or AMD. You are guessing.

2- My understanding of teraflops is correct. Teraflops are what you said, a tera number of floating point operations, and you are right, they are identical. Companies make hardware that is capable of X theoretical teraflops, but in practice (because of "the other things", as you said) you can't reach those theoretical teraflops and you end up with a lower effective number. That's why I said the Nvidia teraflops you read on paper can't be compared to AMD's. An Nvidia card that says 8 teraflops is actually faster in practice than an AMD card with 10 in recent implementations. But that's because of how the hardware is implemented and taken advantage of.

It's like constructing a highway that is capable of transporting 100 cars per minute past a fixed point, but because of the bumps, traffic lights and so on, it's never full and ends up transporting 50 cars per minute. A road capable of 70 cars per minute, without bumps and traffic lights, will be faster. Same with Nvidia and AMD. So I guess it's pretty much the same thing you are saying: there are bottlenecks that don't allow the graphics calculation engine to reach its peak. But we have to compare with something anyway, especially if it is the same company. The comparison will not be scientific, just a number for getting an idea and speculating (see the quick arithmetic after point 5 below).

3- Yes, they were your numbers: you said the PS3 had 192 GFLOPS, and that is (rounding) 0.2 teraflops, or 0.192 to be exact. And the Playstation 4 has 1.84 TF. Sony's numbers, and nobody disputes them. Roughly 9x. We are not going 9x from PS4 to PS5.

And resolution: yes, the PS3 had 22.4 GB/s of bandwidth for an HD machine, compared to the 176 GB/s of the PS4, a Full HD machine; that's doubling the pixels (1 megapixel to 2). The PS5 will quadruple the pixels to 4K (8.3 Mpix), so it should need an even bigger bandwidth difference, and it's not going to have it. Techniques are going to help, but the jump will be smaller.

As for how effective a 10-12 TF machine is going to be, how would I know? I don't know how it's going to be fed. I don't know the cache amounts, the speed of the GDDR6 memory, the number of schedulers, the ROPs, the texture units, etc. We have to wait to see the Navi architecture at least and see how it compares to older ones.

4- Compression or not, the jump will be smaller this gen. And I said 800 GB/s of uncompressed bandwidth, calculated from around 512 GB/s of compressed bandwidth with the new techniques. So what I meant was around 512 GB/s, but as effective as 800 GB/s would have been at PS4 launch. The jump will be smaller than older jumps.

5- Well yes, as I said many times, the CPU will be much better and the evolution in graphics will continue to be progressive. I don't agree about the 6th gen though. I think the jump from PS2 to PS3 was considerable, and more impactful than from PS3 to PS4.
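For what it's worth, the arithmetic behind points 2-4 is easy to sketch (the highway utilisation figures are the illustrative numbers from the analogy above, not measurements):

# Effective vs. theoretical throughput, with the highway analogy's
# own numbers: a 100-car/min highway running half full moves fewer
# cars than a 70-car/min road running full.
print(100 * 0.50)  # 50.0 cars/min effective
print(70 * 1.00)   # 70.0 cars/min effective

# Generation-over-generation ratios from points 3 and 4:
print(1.84 / 0.192)  # ~9.6x  PS3 -> PS4 theoretical teraflops
print(176 / 22.4)    # ~7.9x  PS3 -> PS4 memory bandwidth
print(800 / 176)     # ~4.5x  PS4 -> a speculated "800 GB/s effective" PS5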



CrazyGPU said:

1- You don't know how much bandwidth the PS5 GPU is going to save using techniques compared to the PS4. The only way you can know that right now is if you work for Sony or AMD. You are guessing.

I have a good idea. It is a minimum of 40%, due to the improvements that Tonga introduced... which I pointed to earlier.
Navi is still Graphics Core Next, remember, so there aren't likely to be any dramatic movements in terms of architecture.

CrazyGPU said:

2- My understanding of teraflops is correct. […] The comparison will not be scientific, just a number for getting an idea and speculating.

AMD's hardware can almost reach its theoretical floating point limits, just not in gaming.

But even when comparing AMD's hardware against AMD's hardware, flops are a pretty useless metric.

CrazyGPU said:

3- Yes, they were your numbers: you said the PS3 had 192 GFLOPS, and that is (rounding) 0.2 teraflops, or 0.192 to be exact. And the Playstation 4 has 1.84 TF. Sony's numbers, and nobody disputes them. Roughly 9x. We are not going 9x from PS4 to PS5.

You rounding it to 0.2 teraflops makes it your number. Mine is 192 GFLOPS.
I never once mentioned the multiples of performance increase we are going to see.

CrazyGPU said:

And resolution: yes, the PS3 had 22.4 GB/s of bandwidth for an HD machine, compared to the 176 GB/s of the PS4, a Full HD machine; that's doubling the pixels (1 megapixel to 2). The PS5 will quadruple the pixels to 4K (8.3 Mpix), so it should need an even bigger bandwidth difference, and it's not going to have it. Techniques are going to help, but the jump will be smaller.

720P is 921,600 pixels.
1080P is 2,073,600 pixels.

That is an increase of 2.25x. Not a strict doubling.
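Spelled out (assuming 3840x2160 for 4K UHD):

pixels_720p = 1280 * 720    # 921,600
pixels_1080p = 1920 * 1080  # 2,073,600
pixels_4k = 3840 * 2160     # 8,294,400

print(pixels_1080p / pixels_720p)  # 2.25x: 720p -> 1080p
print(pixels_4k / pixels_1080p)    # 4.0x:  1080p -> 4K
print(pixels_4k / pixels_720p)     # 9.0x:  720p -> 4K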

The Playstation 4 also doesn't implement Delta Colour Compression. Some aspects, like alpha effects, are pretty bandwidth-heavy, which is why resolution and bandwidth don't generally share a linear relationship as they increase.

In short though, 25GB/s-50GB/s tends to be the general ballpark for 720P gaming, and 150GB/s-200GB/s for 1080P gaming (often you can push it to 1440P too).

Parts with roughly 500GB/s of bandwidth, such as Vega 64 (483.8GB/s), GeForce GTX 1080 Ti (484GB/s), Titan X (480GB/s), Titan Xp (547.7GB/s), and the RTX 2080 Ti (616GB/s), all seem to be capable 4K parts on the PC... and that is on top of dramatic increases in general fidelity too.

But if you take the Xbox One X... it has 326GB/s of bandwidth... but thanks to the ROP/memory crossbar mismatch, that can potentially drop to 256GB/s in real-world scenarios.
But it also implements... you guessed it, Delta Colour Compression, which potentially brings its bandwidth up to 456GB/s... And because it's not pushing High or Ultra PC settings, it doesn't need to spend as much fillrate on alpha effects, so it can drive up resolution instead.
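Plugging the Xbox One X figures into the same first-gen-Delta assumption used earlier (a best-case sketch; actual gains depend on the content):

raw_gbs = 326    # Xbox One X raw memory bandwidth
dcc_gain = 0.40  # Tonga-class Delta Colour Compression assumption
# (The ROP/crossbar mismatch can pull the real-world floor down to ~256 GB/s.)

print(raw_gbs * (1 + dcc_gain))  # ~456 GB/s best case, the figure above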

CrazyGPU said:

As for how effective a 10-12 TF machine is going to be, how would I know? I don't know how it's going to be fed. I don't know the cache amounts, the speed of the GDDR6 memory, the number of schedulers, the ROPs, the texture units, etc. We have to wait to see the Navi architecture at least and see how it compares to older ones.

Let's get this out of the way.
Graphics Core Next generally is not compute-limited. It is often ROP-starved, geometry-starved, bandwidth-starved... so we might as well ignore the teraflop issue entirely.

As for the rest... Navi is Graphics Core Next. It is part of the same graphics generation as the Xbox One and Playstation 4... and it is Polaris's successor, not Vega's.

Thus we can surmise what the architecture is likely to have...
It will not exceed 64 CUs.
It will not exceed 64 ROPs.
It will have a 4:1 TMU-to-CU ratio.
It will have 1x command processor.
It will have 4x geometry processors, with a substantial increase in throughput thanks to NGG.
It will be built at 7nm.
It will be backed by moderately clocked GDDR6 (cost is the factor).
It will have Delta Colour Compression.
It will have Primitive Shaders.
It will have Draw Stream Binning Rasterization.
It will have primitive discarding capability.

And I could go on... So whilst we don't have confirmation on the final numbers of what the hardware entails, we can still make a very educated guess on what we can expect.

CrazyGPU said:

4- Compression or not, the jump will be smaller this gen. And I said 800 GB/s of uncompressed bandwidth, calculated from around 512 GB/s of compressed bandwidth with the new techniques. So what I meant was around 512 GB/s, but as effective as 800 GB/s would have been at PS4 launch. The jump will be smaller than older jumps.

Well, the PC is doing fine with 500-600GB/s of bandwidth for 4K gaming before compression comes into play... and that is on top of dramatic increases in visual fidelity over the consoles.
I think engines will continue to rely on dynamic resolution implementations next gen... and various forms of frame reconstruction, to get the best bang-for-buck visual presentation.

CrazyGPU said:

5- Well yes, as I said many times, the CPU will be much better and the evolution in graphics will continue to be progressive. I don't agree about the 6th gen though. I think the jump from PS2 to PS3 was considerable, and more impactful than from PS3 to PS4.

Initially, the jump from Playstation 2 to Playstation 3 was pretty basic. Heck, most of the Playstation 3's initial E3 showing was essentially up-rezzed PS2-style games.

The jump from Xbox to Xbox 360 was even smaller, as the Original Xbox was already pushing out games with full pixel shader effects, and some titles were in high definition.

At the end of the day though, I doubt we will agree about next gen being a substantial jump. Efficiency has come a long way since the consoles launched in 2013... and those techniques will come into their own at some point next generation.



Does anyone else think that pushing further graphical fidelity is a waste?

This console generation has taught me more than any other that, while nice, pixel counting doesn't exactly equate to better games. It doesn't push sales like it once did, either. So many of today's top sellers are not graphical powerhouses.

I say this as someone who greatly enjoys cinematic, story-driven games. The likes of Horizon, Uncharted, The Last of Us, and God of War are my top games of the gen. Yet I think we have reached a point where further fidelity only puts your game farther from being able to make a profit. Minecraft, Fortnite, everything from Nintendo... these games are huge and didn't need it. This isn't to say every game needs to be the same. There are just certain things that probably need to be pushed more with whatever added power the PS5 will bring.

-Performance: this finally needs to be more of a focus. 60fps and steady. If we can achieve this at 4K, so be it, but I no longer care about pixel counts.


-Features: this to the extreme! One of the best advances of this gen is the now-standard video/pic streaming capability. Far better use of all that extra RAM. I would like to see more quality-of-life additions like this, such as the ability to run programs side by side, like a game and an internet browser at the same time (think the new Samsung phone method). Maybe a Playstation Store that is speedy and runs like a dream.

-Return of game features: this is the most important. What if we used that extra power not to go full 4K, but to bring back quality features such as split-screen co-op?! Before the pixel wars, it was things like this that really stood out. Nintendo in particular is finding much success with this. Maybe the return of NPC AI in traditionally multiplayer games, as the blind focus on courting people for multiplayer has studios missing those who still want to play their game, but alone.

-VR: further progress is a given. I am very interested to see what the PS5 will do to make VR even better. With stronger tech, maybe this will mean fewer demo-like games and we can get full experiences. No more floating hands. Give me the full Mirror's Edge experience in VR. Imagine games like Cyberpunk in VR...

I just really feel we need a good gen where we don't try to push any further past 4K and just concentrate on making games that can more easily take advantage of the hardware, akin to the era of the PS2 before devs started going broke over graphics.




Greatness Awaits

PSN:Forevercloud (looking for Soul Sacrifice Partners!!!)

Graphics next gen will be night and day compared to the base X1 and PS4, but the most significant jump will be in gameplay dynamics: AI, physics, collision systems, interaction between characters, interaction between characters and environments, and better animations, which also have a great graphical impact when you see a game in action/motion. Graphics are not only higher pixel counts, more geometry, and prettier textures and effects. Many other factors determine the overall spectacle of a game, and it's always the CPU/GPU combo; it can't be only one component, or a comparison of GPU teraflops.

Question for Pemalite: what about TSMC's Wafer-on-Wafer 3D stacking technology? Any chance it can be implemented for future GPUs in 2020/2021?



”Every great dream begins with a dreamer. Always remember, you have within you the strength, the patience, and the passion to reach for the stars to change the world.”

Harriet Tubman.

BraLoD said:
Pemalite said: 

My advice... and this is the advice I have given myself for the last several decades... buy the best you can afford today. There is always something better around the corner, so don't waste your time waiting; life is too short.

Yeah, I just don't want to miss any major benefits from next gen, but it looks like I won't.

I saw that advice on some sites when people asked similar questions too, btw, lol.

Thank you, I appreciate it.

As it stands, I am currently using a 4K HDR TV with HDMI 2.0. I switched from using projectors back in 2017. I will be upgrading to a 75" or larger TV around 2020. I would have done that this year, but my current setup is good enough for at least another 18 months.

So I'll be buying my new TV right along with my PS5 if it comes around November 2020. On the plus side, I will be able to get a 75"-85" TV for less than what I would pay this year. Oh, and electronics are generally cheaper around that time of the year.

If you ask me... I would say just wait... but if you don't have a 4K TV now and want to get in on the 4K bandwagon... then find the best cheap 4K TV out there... preferably from TCL, and call it a day. You could get a good set for under $400.



Intrinsic said:
…

Actually they get more expensive here during Black Friday and Christmas time, really, lol.

I could get a cheaper 4K TV but lose all the benefits from its best attribute, the HDR.

What good is getting a 4K TV with very little brightness and edge-lit backlighting? I'd lose all the contrast from the HDR and get only the resolution bump.

My fear with getting a TV now is that it won't have HDMI 2.1 and that's about it.

As it stands, the TV I want has HDMI 2.0 full bandwidth with Dolby Vision, which means I only lose a bit of the new tech that will be available with HDMI 2.1's much larger data capacity. I lose VRR, 4K 60 fps 10-bit HDR with 4:4:4 chroma, and truly dynamic HDR... but looking closely, I lose basically nothing.
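A back-of-the-envelope check of why full-bandwidth HDMI 2.0 handles 4K 60 fps HDR with 4:2:2 but not with 10-bit 4:4:4 (a sketch: 594 MHz is the standard 4K60 pixel clock including blanking, and HDMI carries 4:2:2 at 24 bits per pixel even for 10/12-bit video):

# HDMI 2.0: 3 lanes x 6 Gbit/s = 18 Gbit/s raw; 8b/10b coding leaves
# about 14.4 Gbit/s for actual pixel data.
usable_gbps = 18.0 * 8 / 10

pixel_clock_hz = 594e6  # 4K60 timing, blanking included

def needed_gbps(bits_per_pixel):
    # Data rate required for a given pixel format.
    return pixel_clock_hz * bits_per_pixel / 1e9

print(needed_gbps(30), usable_gbps)  # 17.82 > 14.4: 10-bit 4:4:4 won't fit
print(needed_gbps(24), usable_gbps)  # 14.26 <= 14.4: 4:2:2 (or 8-bit 4:4:4) fits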

If I wait for the 2020 models, I'll likely pay WAY more, and go without a TV for two and a half years (no TV since September 2018, and no way I can afford those before March 2021).

Here in Brazil new tech is ridiculously expensive; this TV I want is one of the few exceptions. It's normally crazy expensive (BRL 6000 / USD 1600), but it's going on sale for a lot less now (I found it for BRL 3800 / USD 1000). I don't even want to tell you how expensive the bigger models or the other options are (this is for 55").

If Sony had already released HDMI 2.1 on their TVs in this range this year, I could wait, but 2020 screens mean 2021 at the earliest for me to be able to buy, and likely the phasing out of the 55" models, leaving only sizes too big for my room...

I really wanted this TV to have 2.1 ports, that would be perfect, but sadly waiting longer could be worse for me.