
PS5 Coming at the End of 2020 According to Analyst: High-Spec Hardware for Under $500

 

Poll: Price, SKUs, specs?

Only Base Model, $399, 9-10TF GPU, 16GB RAM - 24 votes (30.00%)
Only Base Model, $449, 10-12TF GPU, 16GB RAM - 13 votes (16.25%)
Only Base Model, $499, 12-14TF GPU, 24GB RAM - 21 votes (26.25%)
Base Model $399 and PREMIUM $499 - 10 votes (12.50%)
Base Model $399 / PREMIUM $549, >14TF 24GB RAM - 5 votes (6.25%)
Base Model $449 / PREMIUM $599, the absolute Elite - 7 votes (8.75%)

Total: 80 votes
CrazyGPU said:
CGI-Quality said:

You don't need raytracing to have substantially better looking games than current ones, and geometry will see a sizeable upgrade. I know firsthand that the 'nothing to write home about' talk is bollocks.

The PS5 and next Xbox should (and will) have notably better looking games than any current gen console can muster. Won’t need some massive crazy hardware over what’s currently available for that. 

I'm not saying games are not going to look better; of course they will. What I'm saying is that, in my opinion, it's not going to be revolutionary. It will be an evolutionary step from what we have now.

I expect most games, especially AAA, to be evolutionary steps of current techniques. Yet I fully expect the beginnings of a paradigm shift toward polygon/voxel hybrid engines, or even fully voxel ones - the inevitable volumetric future - and that is an actual revolution down the line.



CrazyGPU said:

You don't do an exact comparison either.

I have done so in the past.

CrazyGPU said:

You don't know if the techniques are going to save 20% of bandwidth, 40%, or only 10%.

Yes I do.

CrazyGPU said:

You don't know how AMD will implement it inside the new hardware.

There are only so many ways you can skin a cat, especially as AMD is still iterating on Graphics Core Next rather than an entirely new architecture.

CrazyGPU said:

So you don't know whether an uncompressed 512 GB/s stream of data can be compressed down to 480, 384, or 256 GB/s.

Yes I do. AMD and nVidia have the associated whitepapers to back up their implementations... And various outlets have run compression benchmarks.

CrazyGPU said:

So even if you take those techniques into account, you are inaccurate too. It's like comparing Nvidia Teraflops to AMD Teraflops. The Teraflops can be the same amount, but the Nvidia implementation makes use of that theoretical maximum much better than AMD's does in practice, so you can't compare different architectures and be accurate. But as you don't have anything else for a proper comparison, you have to go with something. So we compare with what we have: teraflops, GB/s, and so on. And the comparison is better if we compare similar architectures from the same brand.

False. Your understanding of Teraflops is the issue here.
An AMD Teraflop is identical to an nVidia one... Identical.

A flop represents the theoretical single-precision floating point performance of a part.
The reason why nVidia's GPUs perform better than an AMD alternative is simple... It's because of all the parts that have nothing to do with FLOPS.

In tasks that leverage AMD's compute strengths, AMD's GPUs will often beat nVidia's; asynchronous compute is a primary example, although nVidia is bridging the gap there.
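To illustrate (a rough sketch, using published shader counts and boost clocks; the formula is the same for both vendors):

```python
# Theoretical single-precision FLOPs are computed identically for AMD and
# nVidia: shaders x 2 FLOPs per clock (fused multiply-add) x clock speed.

def tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 teraflops from shader count and clock speed."""
    return shaders * 2 * clock_ghz / 1000.0

# Real-world performance still differs because of everything that is NOT
# flops: ROPs, geometry throughput, bandwidth, drivers, scheduling.
print(f"Vega 64:  {tflops(4096, 1.546):.2f} TFLOPs")  # ~12.66
print(f"GTX 1080: {tflops(2560, 1.733):.2f} TFLOPs")  # ~8.87
```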

CrazyGPU said:

With your numbers, the nearly 0.2 Teraflop PS3 vs. the little over 1.8 Tf PS4 is 9 times more. No way the PS5 will have 9 times the teraflops of the PS4.

Those are your numbers; I never once stated the Playstation 3 was 0.2 Teraflops. Nor did I say that the Playstation 4 had 9x the Teraflops, and nor did I state the Playstation 5 will have 9x the Teraflops either.

CrazyGPU said:

Also, techniques considered or not, the jump from the standard PS4's 176 GB/s to, let's say, 512 GB/s (equivalent to 800 GB/s uncompressed, just to put a number on it) is far smaller than going from the 22.4 GB/s of the PS3 to the 176 GB/s of the PS4. And there is no way a PS5 will have 8 times more bandwidth to feed the processor.

Take note of the resolution a console with 22.4GB/s-25.6GB/s of bandwidth operates at and the one with 176GB/s operates at.

The Playstation 5 will implement Delta Colour Compression.
AMD's Tonga, for instance (first-gen Delta), increased effective bandwidth by as much as 40%... which is why the Radeon 285 was able to compete with the Radeon 280 despite the 280 having 36.36% more raw memory bandwidth.

nVidia has been improving Delta Colour Compression for years...
The jump from Kepler to Maxwell was a 25% increase in compression. (Varies from 20-44% depending on patterning.)
And from Maxwell to Pascal it was another 20%.

And nVidia has made further improvements since then.

AMD also implemented Draw Stream Binning Rasterization on Vega (although not fully functional yet; with Navi it should be).
And the Primitive Discard Accelerator has been a thing since Polaris, discarding polygons that are too small before they are rendered.

These are ways that bandwidth and computational capability are conserved.
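To put rough numbers on the bandwidth savings (a sketch using the figures quoted above; real ratios vary with how compressible the frame data is):

```python
# Effective bandwidth after delta colour compression (DCC), using the
# generation-to-generation gains quoted above.

def effective_bandwidth(raw_gbps: float, dcc_gain: float) -> float:
    """Raw bandwidth scaled by a DCC gain factor (0.40 means +40%)."""
    return raw_gbps * (1.0 + dcc_gain)

print(effective_bandwidth(176.0, 0.40))  # Tonga-class DCC on PS4-like raw b/w: ~246 GB/s
print(effective_bandwidth(512.0, 0.40))  # a hypothetical 512 GB/s next-gen bus: ~717 GB/s

# nVidia's gains compound across generations:
kepler_to_maxwell = 1.25  # ~25% (20-44% depending on patterning)
maxwell_to_pascal = 1.20  # ~20%
print(kepler_to_maxwell * maxwell_to_pascal)  # 1.5x compounded, Kepler -> Pascal
```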

CrazyGPU said:

So, the two things that are really important to improve performance and have a balanced graphics architecture - the calculation (teraflops) and the feeding of that calculation (cache and memory bandwidth, theoretical or with techniques) - will improve less than they did before, and the improvement will feel less important than before, even if it were the same.

Teraflops are pretty irrelevant; a GPU with fewer Teraflops can beat a GPU with more.
I am pretty sure we have had this "debate" in the past and I provided irrefutable evidence to substantiate my position... But I'm more than happy to go down that path again.

CrazyGPU said:

Software is not going to solve that. PS4 performance was always similar to a Radeon HD 7850-7870 on PC, and no exclusive programming changed the graphics capability of the console. And even if it did for you, it never became a GeForce GTX 1060 because of that.

I never made a claim to the contrary. The Playstation 4 and Xbox One provide an experience I would fully expect from a Radeon 7850/7770 class graphics processor, maybe a little better, but not substantially so.

That said, playing a high-end game on high-end PC hardware is getting to the point of being a generational gap over the consoles.

CrazyGPU said:

With a 10-12 Teraflop PS5 machine, we would have a 5.4-6.5x improvement in theoretical Teraflops

And real-world flops? ;)

CrazyGPU said:

and with 800 GB/s of equivalent uncompressed bandwidth (considering that the PS4 did not compress anything), the improvement will be 4.5 times.

Doubt we will be seeing 800 GB/s of uncompressed bandwidth; 512 GB/s is probably a more balanced and cost-effective target.

CrazyGPU said:

So again, you will have 4K at 30 fps, 60 in some games, with PS4 graphics and a little more once devs get used to it, but nothing to write home about.

I would expect better than Playstation 4 graphics; efficiency has come a long way since the GCN 1.0 parts... Navi should take it another step further... I mean, I am not going out and saying Navi is going to usher in a new era of graphics, far from it... It is still Graphics Core Next with all its limitations.

But it's going to be a stupidly large step up over the Radeon 7850-derived parts in almost every aspect.

CrazyGPU said:

A great CPU, hard disk, or anything else is not going to change that. It's not going to be the ray tracing beast with new lighting and geometry many of us would wish for.

The CPU is going to be a massive boon... The hard drive is probably going to be a bit faster, but we are on the cusp of next-generation mechanical disks, which the consoles might not take advantage of initially... Otherwise, caching with NAND is a possibility.

And as for Ray Tracing... Games have been implementing Ray Tracing since the 7th console generation with various deferred renderers... We will be continuing down that path next gen, it will be a slow transition to a fully ray-traced world, next-gen will just be another stepping stone.

CrazyGPU said:

I'm not saying games are not going to look better; of course they will. What I'm saying is that, in my opinion, it's not going to be revolutionary. It will be an evolutionary step from what we have now.

We haven't had a "revolutionary" jump since the start of the 6th gen consoles... It's all been progressive, iterative refinements.
I mean Call of Duty 3 on the Xbox 360 wasn't a massive overhaul over Call of Duty 3 on the Original Xbox.

But the increase in fidelity when you compare the best-looking games of each generation is substantial.

Halo 4 on the Xbox 360 is a night-and-day difference from Halo: Combat Evolved on the Original Xbox, and Halo Infinite on 8th gen hardware (if the Slipspace demo is representative) is a night-and-day difference over Halo 4... It has a ton more dynamic effects going on.



--::{PC Gaming Master Race}::--

Pemalite said:

[...]

1- You don't know how much bandwidth the PS5 GPU is going to save using techniques compared to the PS4. The only way you can know that right now is if you work for Sony or AMD. You are guessing.

2- My understanding of Teraflops is correct. Teraflops are, as you said, a tera number of floating point operations, and you are right, they are identical. Companies make hardware that is capable of X theoretical teraflops, but in practice (because of "the other things", as you said), you can't reach that theoretical figure and you end up with a lower effective number. That's why I said the Nvidia teraflops you read on paper can't be compared to AMD's. An Nvidia card that says 8 teraflops is actually faster in practice than an AMD card with 10 in recent implementations. But that's because of how the hardware is implemented and taken advantage of. It's like constructing a highway capable of moving 100 cars per minute past a fixed point, but because of the bumps, traffic lights and so on, it's never full and ends up moving 50 cars per minute. A road capable of 70 cars per minute without bumps and traffic lights will be faster. Same with Nvidia and AMD. So I guess it's pretty much what you are saying: there are bottlenecks that don't allow the graphics calculation engine to reach its peak. But we have to compare with something anyway, especially within the same company. The comparison will not be scientific, just a number for having an idea and speculating.

3- Yes, they were your numbers: you said the PS3 had 192 Gflops, and that is (rounding) 0.2 Teraflops, or 0.192 to be exact. And the Playstation 4 has 1.84 Tf. Sony's numbers, and nobody disputes that. 9x. We are not going 9x from PS4 to PS5.

And resolution: yes, the PS3 has 22.4 GB/s of bandwidth for an HD machine, compared to the 176 GB/s of the PS4, a Full HD machine; that's doubling the pixels (1 megapixel to 2). The PS5 will quadruple the pixels to 4K (8.3 Mpix), so it should need an even bigger bandwidth difference, and it's not going to have it. Techniques are going to help, but the jump will be smaller.

As for how effective a 10-12 Tf machine is going to be, how would I know? I don't know how it's going to be fed. I don't know the cache amount, the speed of the GDDR6 memory, the number of schedulers, the ROPs, texture units, etc. We have to wait to see the Navi architecture at least and see how it compares to older ones.

4- Compression or not, the jump will be smaller this gen. And I said 800 GB/s of uncompressed bandwidth, calculated from around 512 GB/s of compressed bandwidth with the new techniques. So what I meant was around 512 GB/s, but as effective as 800 GB/s by PS4-launch standards. The jump will be smaller than the older jumps.

5- Well yes, as I said many times, the CPU will be much better and the evolution in graphics will continue to be progressive. I don't agree about the 6th gen, though. I think the jump from PS2 to PS3 was considerable, and more impactful than from PS3 to PS4.



CrazyGPU said:

1- You don't know how much bandwidth the PS5 GPU is going to save using techniques compared to the PS4. The only way you can know that right now is if you work for Sony or AMD. You are guessing.

I have a good idea. It is a minimum of 40%, due to the improvements that Tonga introduced... which I pointed to earlier.
Navi is still Graphics Core Next, remember, so there aren't likely to be any dramatic movements in terms of architecture.

CrazyGPU said:

2- My understanding of Teraflops is correct. [...] There are bottlenecks that don't allow the graphics calculation engine to reach its peak. But we have to compare with something anyway, especially within the same company.

AMD's hardware can almost reach its theoretical floating point limits, just not in gaming.

But even when comparing AMD's hardware against AMD's hardware, flops is a pretty useless metric.
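One way to see why (a sketch of the standard roofline model, not anyone's exact numbers): achieved flops are capped by whatever the memory system can feed, so a bandwidth-starved part never approaches its paper figure in game-like workloads.

```python
# Roofline estimate: achieved FLOPs are limited either by raw compute or by
# memory bandwidth times the workload's arithmetic intensity (FLOPs/byte).

def achieved_tflops(peak_tflops: float, bandwidth_tbps: float,
                    flops_per_byte: float) -> float:
    """min(compute roof, bandwidth roof) for a given arithmetic intensity."""
    return min(peak_tflops, bandwidth_tbps * flops_per_byte)

peak = 12.66    # Vega 64 paper TFLOPs
bw = 0.4838     # Vega 64 bandwidth in TB/s

# Dense compute (high FLOPs per byte) gets close to the paper number...
print(achieved_tflops(peak, bw, flops_per_byte=40.0))  # 12.66 (compute bound)
# ...bandwidth-hungry, game-like work does not.
print(achieved_tflops(peak, bw, flops_per_byte=10.0))  # ~4.84 (bandwidth bound)
```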

CrazyGPU said:

3- Yes, they were your numbers: you said the PS3 had 192 Gflops, and that is (rounding) 0.2 Teraflops, or 0.192 to be exact. And the Playstation 4 has 1.84 Tf. Sony's numbers, and nobody disputes that. 9x. We are not going 9x from PS4 to PS5.

Your rounding it to 0.2 teraflops makes it your number; mine is 192 Gflop.
I never once mentioned the multiples of performance increases we are going to have.

CrazyGPU said:

And resolution: yes, the PS3 has 22.4 GB/s of bandwidth for an HD machine, compared to the 176 GB/s of the PS4, a Full HD machine; that's doubling the pixels (1 megapixel to 2). The PS5 will quadruple the pixels to 4K (8.3 Mpix), so it should need an even bigger bandwidth difference, and it's not going to have it. Techniques are going to help, but the jump will be smaller.

720P is 921,600 pixels.
1080P is 2,073,600 pixels.

That is an increase of 2.25x. Not a strict doubling.

The Playstation 4 also doesn't implement Delta Colour Compression, and some aspects like alpha effects are pretty bandwidth-heavy, which is why resolution and bandwidth don't generally share a linear relationship as they increase.

In short though, 25-50 GB/s tends to be the general ballpark for 720P gaming, and 150-200 GB/s for 1080P (often you can push it to 1440P too).

Roughly 512 GB/s of bandwidth, as per Vega 64 (483.8 GB/s), GeForce 1080 Ti (484 GB/s), Titan X (480 GB/s), Titan Xp (547.7 GB/s), and RTX 2080 Ti (616 GB/s), all seem to make for capable 4K parts on the PC... And that is on top of dramatic increases in general fidelity too.

But take the Xbox One X... It has 326 GB/s of bandwidth, yet thanks to the ROP/memory crossbar mismatch it can potentially drop to 256 GB/s in real-world scenarios.
But it also implements... you guessed it, Delta Colour Compression, which potentially brings its effective bandwidth up to 456 GB/s... And because it's not pushing High or Ultra PC settings, it doesn't need to spend as much fillrate on alpha effects, so it can drive resolution up instead.
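Those figures check out with quick arithmetic (a sketch; the 40% gain is the Tonga-class number mentioned earlier):

```python
# Pixel counts per resolution, plus the Xbox One X effective-bandwidth
# figures quoted above.

RES = {"720p": 1280 * 720, "1080p": 1920 * 1080, "4K": 3840 * 2160}

print(RES["1080p"] / RES["720p"])  # 2.25x - not a strict doubling
print(RES["4K"] / RES["1080p"])    # 4.0x

raw = 326.0        # Xbox One X raw bandwidth in GB/s
print(raw * 1.40)  # ~456 GB/s effective with a Tonga-class DCC gain
```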

CrazyGPU said:

As for how effective a 10-12 Tf machine is going to be, how would I know? I don't know how it's going to be fed. I don't know the cache amount, the speed of the GDDR6 memory, the number of schedulers, the ROPs, texture units, etc. We have to wait to see the Navi architecture at least and see how it compares to older ones.

Let's get this out of the way:
Graphics Core Next is generally not compute-limited; it is often ROP-starved, geometry-starved, and bandwidth-starved... So we might as well ignore the Teraflop issue entirely.

As for the rest... Navi is Graphics Core Next. It is part of the same graphics generation as the Xbox One and Playstation 4... and it is Polaris's successor, not Vega's.

Thus we can surmise what the architecture is likely to have...
It will not exceed 64 CUs.
It will not exceed 64 ROPs.
It will have a 4:1 TMU-to-CU ratio.
It will have 1x Command Processor.
It will have 4x Geometry Processors with a substantial increase in throughput thanks to NGG.
It will be built at 7nm.
It will be backed by moderately clocked GDDR6. (Cost is the factor.)
It will have Delta Colour Compression.
It will have Primitive Shaders.
It will have Draw Stream Binning Rasterization.
It will have Primitive Discard capability.

And I could go on... So whilst we don't have confirmation on the final numbers of what the hardware entails, we can still make a very educated guess on what we can expect.
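Collected as data (a sketch; the clock is a hypothetical placeholder, everything else is the surmised list above), those constraints pin down the compute ceiling:

```python
# The surmised Navi ceiling from the list above, expressed as data.

navi_guess = {
    "max_cus": 64,
    "max_rops": 64,
    "tmus_per_cu": 4,
    "command_processors": 1,
    "geometry_processors": 4,
    "process_node_nm": 7,
    "memory": "GDDR6",
    "features": ["Delta Colour Compression", "Primitive Shaders",
                 "Draw Stream Binning Rasterization", "Primitive Discard"],
}

clock_ghz = 1.8  # hypothetical placeholder, not a known figure
shaders = navi_guess["max_cus"] * 64            # 64 shaders per GCN CU
peak_tflops = shaders * 2 * clock_ghz / 1000.0  # FMA = 2 FLOPs per clock
print(f"~{peak_tflops:.1f} TFLOPs ceiling at {clock_ghz} GHz")  # ~14.7
```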

CrazyGPU said:

4- Compression or not, the jump will be smaller this gen. And I said 800 GB/s of uncompressed bandwidth, calculated from around 512 GB/s of compressed bandwidth with the new techniques. So what I meant was around 512 GB/s, but as effective as 800 GB/s by PS4-launch standards. The jump will be smaller than the older jumps.

Well, the PC is doing fine with 500-600 GB/s of bandwidth for 4K gaming before compression even comes into play... and that is on top of dramatic increases in visual fidelity over the consoles.
I think engines will continue to rely on dynamic resolution next gen... and on various forms of frame reconstruction, to get the best bang-for-buck visual presentation.
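For anyone unfamiliar with how dynamic resolution works, here is a minimal sketch of the feedback loop (an illustration, not any specific engine's implementation): the render resolution is scaled each frame to chase a GPU frame-time budget.

```python
# Minimal dynamic-resolution loop: scale render resolution each frame so
# GPU frame time converges on the budget (16.6 ms for 60 fps). Real engines
# filter over many frames and step in coarser increments.

TARGET_MS = 16.6
MIN_SCALE, MAX_SCALE = 0.5, 1.0

def update_scale(scale: float, gpu_frame_ms: float) -> float:
    """Nudge the resolution scale toward the frame-time budget."""
    error = TARGET_MS / gpu_frame_ms   # > 1 means we have headroom
    new_scale = scale * error ** 0.5   # damped correction
    return max(MIN_SCALE, min(MAX_SCALE, new_scale))

scale = 1.0
for frame_ms in [20.0, 19.0, 17.5, 16.4, 15.9]:  # GPU times from a heavy scene
    scale = update_scale(scale, frame_ms)
    print(f"render at {int(3840 * scale)}x{int(2160 * scale)}")
```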

CrazyGPU said:

5- Well yes, as I said many times, the CPU will be much better and the evolution in graphics will continue to be progressive. I don't agree about the 6th gen, though. I think the jump from PS2 to PS3 was considerable, and more impactful than from PS3 to PS4.

Initially the jump from Playstation 2 to Playstation 3 was pretty modest; heck, the Playstation 3's initial E3 was essentially full of up-rezzed PS2 games.

The jump from Xbox to Xbox 360 was even smaller as the Original Xbox was already pushing out games with full pixel shader effects and some titles were in High Definition.

At the end of the day though, I doubt we will agree about next gen being a substantial jump; efficiency has come a long way since the consoles launched in 2013... And those techniques will come into their own at some point next generation.



--::{PC Gaming Master Race}::--

Does anyone else think that pushing further graphical fidelity is a waste?

This console generation has taught me more than others that, while nice, pixel counting doesn't exactly equate to better games. It doesn't push sales like it once did either. So many of today's top sellers are not graphical powerhouses.

I say this as someone who greatly enjoys cinematic, story-driven games. The likes of Horizon, Uncharted, The Last of Us, and God of War are my top games of the gen. Yet..... I think we have reached a point where further fidelity is only putting your game further from being able to make a profit. Minecraft, Fortnite, everything from Nintendo..... these games are huge and didn't need it. This isn't to say every game needs to be the same. There are just certain things that probably need to be pushed more with whatever added power the PS5 will bring.

-Performance: this finally needs to be more of a focus. 60fps and steady. If we can achieve this at 4K, so be it, but I no longer care about pixel counts.


-Features: This to the extreme! One of the best advances of this gen is the now-standard video/pic streaming capability. Far better use of all that extra RAM. I would like to see more quality-of-life additions such as this, like the ability to run programs in parallel, such as a game and an internet browser at the same time. Think the new Samsung phone method. Maybe a Playstation Store that is speedy and runs like a dream.

-Return of Game Features: this is the most important. What if we used that extra power not to go full 4K, but to bring back quality features such as split-screen co-op?! Before the pixel wars it was things like this that really stood out; in particular, Nintendo is finding much success with this. Maybe the return of NPC AI in traditional multiplayer games, as the blind focus on courting people for multiplayer has developers missing those who still want to play their games, but alone.

-VR continuing to progress is a given. I am very interested to see what the PS5 will do to make VR even better. With stronger tech, maybe this will mean fewer demo-like games and we can get full experiences. No more floating hands. Give me the full Mirror's Edge experience in VR. Imagine games like Cyberpunk in VR.....

I just really feel we need a good gen where we don't try to push any more past 4K and just concentrate on making games that can more easily take advantage of the hardware. Akin to the era of PS2 before devs started going broke over graphics.




Greatness Awaits

PSN:Forevercloud (looking for Soul Sacrifice Partners!!!)


Graphics next gen will be night and day compared to the base X1 and PS4, but the most significant jump will be in gameplay dynamics: AI, physics, collision systems, interaction between characters, interaction between characters and environments, and better animations, which also have a great graphical impact when you see a game in action/motion. Graphics are not only a higher pixel count, more geometry, and prettier textures and effects; many other factors determine the overall spectacle of a game, and it's always the CPU/GPU combo - it can't be only one component, nor a comparison of GPU teraflops alone.

Question for Pemalite: what about TSMC's Wafer-on-Wafer 3D stacking technology? Any chance it can be implemented in future GPUs in 2020/2021?



”Every great dream begins with a dreamer. Always remember, you have within you the strength, the patience, and the passion to reach for the stars to change the world.”

Harriet Tubman.

BraLoD said:
Pemalite said: 

My advice... and this is the advice I have given myself for the last several decades... buy the best you can afford today; there is always something better around the corner, so don't waste your time waiting, life is too short.

Yeah, I just don't want to miss any major benefits from next gen, but it looks like I won't.

I saw that advice on some sites when people asked similar questions too, btw, lol.

Thank you, I appreciate it.

As it stands I am currently using a 4K HDR TV with HDMI 2.0. Switched from using projectors back in 2017. I will be upgrading to a 75" or larger TV around 2020. I would have done that this year, but my current setup is good enough for at least another 18 months.

So I'll be buying my new TV right along with my PS5 if it comes around November 2020. On the plus side, I will be able to get a 75" - 85" TV for less than what I would pay this year. Oh, and electronics are generally cheaper around that time of the year.

If you ask me... I would say just wait... but if you don't have a 4K TV now and want to get in on the 4K bandwagon... then find the cheapest good 4K TV out there... preferably from TCL, and call it a day. You could get a good set for under $400.



Nate4Drake said:
Question for Pemalite: what about TSMC's Wafer-on-Wafer 3D stacking technology? Any chance it can be implemented in future GPUs in 2020/2021?

I think for Next-Gen consoles... They are more than likely to go the monolithic SoC approach... Or the Multi-Chip approach with fabric binding them together.

Wafer on Wafer is still a fairly new concept and could be a bit of a risk... I know Intel is looking into taking on the idea with its future chips to counter AMD's chiplet approach, so it will be interesting to see where things go... But considering the industry hasn't pushed 7nm to its limits yet, there is no rush to jump on that technology just yet.

For a mid-generation console upgrade though, I wouldn't be surprised if they leveraged it.



--::{PC Gaming Master Race}::--

Nate4Drake said:
CGI-Quality said:

There's a problem with using Crysis. Of course it 'shit all over the consoles'; it was a PC exclusive, taking advantage of the latest and greatest technology. Modern machines aren't getting closer, they're simply having their goods taken advantage of, and games made only for PC, particularly of that flavor, are virtually non-existent.

Although the PC version of Metro: Exodus stomps the console versions into the ground, if it were a PC exclusive, there would truly be no discussion to have. But, go look at the beginning of this gen and then look at the Exodus shots I've posted. It isn't even close. And that's one example. Video game graphics are NOWHERE NEAR their pinnacle.

Finally we go straight to the heart of the matter! And thanks ;)

Scalability is not a miraculous solution when you have to serve several SKUs, or an infinite plethora of hardware components with different architectures and specs like on PC; it is just a way to make everybody "somehow happy", without ever using the available resources to the fullest.

  Just imagine if the best developers could develop exclusively for the best PC hardware; the result would blow away anything you have seen on PC, by a very long margin.  

Crysis was the perfect example, and developers could still do even more if they could choose the very best components, put them together, and build a super-powerful box to target as a single SKU.

 

I cringe when people start talking about scalability and how it will save ports.

Yes, engines and games are scalable, but they hardly ever make the best game possible for the highest-end HW and then cut back until they reach the base. They select a floor, make the whole game on that, and then start adding more up to the highest-end HW, usually in a very lazy way.

So the base HW will always hold back what is being made in the game. Also, once a base is decided, going for even lower HW will force severe cuts that hardly pay off.

Nate4Drake said:
Pemalite said:

Wikipedia is basing that on information direct from nVidia. My own math on determining the GPU's floating point capabilities aligns with that too.

It's 192Gflop.

That is just single-precision floating point math; there is far more to a GPU than that. Flops aren't accurate when comparing GPUs of different architectures; they're entirely a theoretical denominator, not a real-world one.

Streaming assets into DRAM on an as-needed basis is going to be significantly better next gen; caching is becoming significantly more important, so they can do more with less memory due to that factor.

But you are right, we don't need 88GB of DRAM, not for a substantial increase in fidelity anyway.

Comparing raw numbers is a little disingenuous.
Modern Delta Colour Compression techniques can add another 50% or more to the bandwidth numbers...
Draw Stream Binning Rasterization severely cuts down the amount of work that needs to be performed to start with, making better use of limited bandwidth...
And I could go on.
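As a toy example of why raw numbers mislead (a sketch with hypothetical parts; the 50% figure is the one quoted above):

```python
# Raw vs effective bandwidth once compression is accounted for: an older
# part with more raw bandwidth but no DCC can trail a newer part with less
# raw bandwidth and a 50% DCC gain.

parts = {
    "older, no DCC":   {"raw_gbps": 320.0, "dcc_gain": 0.0},
    "newer, with DCC": {"raw_gbps": 256.0, "dcc_gain": 0.5},
}

for name, p in parts.items():
    effective = p["raw_gbps"] * (1.0 + p["dcc_gain"])
    print(f"{name}: raw {p['raw_gbps']:.0f} GB/s -> effective {effective:.0f} GB/s")
# older:  raw 320 -> effective 320
# newer:  raw 256 -> effective 384 (ahead despite less raw bandwidth)
```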

I disagree, Metro on PC is a night and day difference from consoles.
And the same goes for most Frostbite powered games.

...And the difference is starting to look generational when comparing PC against the base Xbox One and Playstation 4.

Crysis however was a unique one. It was PC exclusive and it pushed PCs to the limit... And it wasn't the best-optimized title, as Crytek made forward projections about "possible" PC hardware (i.e. dramatic increases in CPU clock rates), and thus it's taken a very long time for it to stop being a hardware killer.

Consoles today (Xbox One X and Playstation 4 Pro) are generally sitting around the PC's medium quality preset. - It's where the bulk of the 7th gen sat when compared against the PC, obviously resolution and framerates are a bit better this time around of course, but the gap still exists and will always exist.

Well sure. Mostly because as far as general rasterization is concerned... The bulk of the low-hanging performance/graphics fruit has been picked.
But that will at some point end.
I think the 10th console generation will be a significant leap in graphics, as graphics are undergoing a paradigm shift that we haven't seen since programmable pixel shaders burst onto the scene with the Original Xbox/GeForce 3... The 9th generation will be a bridge to that.

The Xbox One X and Playstation 4 Pro have yet to impress me... And the main reason is mostly that games are designed with the base Xbox One and Playstation 4 in mind... If games stuck to a lower resolution and framerate on the Xbox One X and pushed fidelity far more strongly, I would probably be more impressed.

As for Ray Tracing, we might not need 20 Teraflops, 32GB of RAM, and 1TB/s of bandwidth; there is a ton of research going on to make it more efficient. I mean, rasterization took years to become efficient; compression and culling were the first big techniques to do so, and the same thing will happen with Ray Tracing.

A better CPU doesn't guarantee 60fps if you are GPU- or DRAM-limited. Simulation quality will be amazing though; it should result in more immersive worlds overall.

With AMD's graphics processors being the weapon of choice for next gen, I think we need to keep expectations in check either way... AMD isn't generally making industry-leading high-end hardware.

Thanks Pemalite for all the explanations and clarification; a common mistake is to compare raw numbers and draw conclusions without understanding the whole architecture.

I hope you will keep posting here.

Yep, after our dear Pema's posts, the most I would use Tflops or other raw numbers for is a ballpark expectation, and mostly when looking at very similar architectures. Otherwise it's much better to evaluate real-world implementations.

forevercloud3000 said:

Does anyone else think that pushing further graphical fidelity is a waste?

[...]

Some think, I don't.

You don't have to use all the power just for photo-realism; you can make fantastic and unreal games as well, it is just how you use that power. We are very far from the apex and I want to keep seeing it improve. When I was 15 and played FF IX for the first time (coming from the Genesis' colorful cartoon games), the CGI blew me away and I thought it couldn't get better. Then Tekken 5 on PS2, with pores and fur in the demo, was unthinkable. Then the Gran Turismo reveal on PS3 (something like "welcome to real life") showed it could still get better. Detroit on PS4 shows what is almost a real human in a game... But after thinking it couldn't get better so many times, now I just want to wait and see how much more it can improve.

Intrinsic said:
[...]

Don't worry too much. There is a possibility the TV is already HDMI 2.1 capable, but with the standard not yet finalized or in much use, they haven't advertised it; a firmware upgrade could then enable it, just like on the PS4. There's also a great chance it will take several years before 2.1 is used enough to be an important factor, and by that time you'll be looking at another TV.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

Pemalite said:
[...]

Thanks!

For a mid-gen upgrade it could be a really efficient solution.



”Every great dream begins with a dreamer. Always remember, you have within you the strength, the patience, and the passion to reach for the stars to change the world.”

Harriet Tubman.