CGI-Quality said:
Pemalite said:

With that said... you reach a point where, every time you double your "graphics" quality, you need hardware that's several times more capable.

This image explains it really well: with each 10x increase in polygon count, you reach a point where the improvement isn't as dramatic anymore.
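(Not the original image, but a rough numeric sketch of the same idea. Modelling perceived detail as the log of polygon count is purely an illustrative assumption here, not measured data.)

```python
import math

def perceived_detail(polys):
    # Illustrative assumption: perceived detail grows roughly with
    # log10 of the polygon count (a rule of thumb, not a measurement).
    return math.log10(polys)

for polys in [1_000, 10_000, 100_000, 1_000_000]:
    print(f"{polys:>9,} polys -> perceived detail {perceived_detail(polys):.1f}")

# Each 10x jump costs 10x the geometry work but only adds a constant
# +1.0 to this "detail" score -- diminishing returns in a nutshell.
```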

This is the point I've been trying to drive home all gen. While the average person sees the PS4/X1 and says "weakest gen ever," they don't understand the ins and outs. As this law applies more and more, the same jump in graphics simply isn't possible anymore.

Then again, take a look at a well-done, fully pre-rendered scene in 2016. Real-time visuals are still nowhere close. When our games start looking like that, then we can write off current-generation efforts as 100% minimal. And this comes from someone who works with this stuff internally.

But I'll also say this: the games coming in the next 5 years, particularly, are about to be real lookers.

I think the major advantage this generation is the "smaller" details.
Everything has real geometry rather than flat surfaces that merely "look" bumpy; particles, smoke, and other effects typically aren't 2D sprites anymore either (except for Dragon Age: Inquisition, but that was cross-platform).
Shadowing, texturing, and shaders all got a bump, which is to be expected... Everything is dynamic, not pre-calculated/baked.
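A minimal sketch of that baked-versus-dynamic trade-off, using only a Lambert diffuse term; the scene and names are made up for illustration:

```python
def lambert(normal, light_dir):
    # Diffuse term: N . L, clamped to zero (assumes unit vectors).
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(dot, 0.0)

# "Baked": computed once offline and stored per surface point.
# Cheap at runtime, but only correct while the light never moves.
static_light = (0.0, 1.0, 0.0)
lightmap = {"floor": lambert((0.0, 1.0, 0.0), static_light)}

# "Dynamic": recomputed every frame, so lights can move freely,
# but the shading cost is paid at runtime instead.
def shade_dynamic(normal, light_dir):
    return lambert(normal, light_dir)

print("baked  :", lightmap["floor"])
print("dynamic:", shade_dynamic((0.0, 1.0, 0.0), (0.6, 0.8, 0.0)))
```

The extra shader throughput this generation is what makes paying that per-frame cost practical.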

To put things into perspective... From the GeForce 5 series through the GeForce 6, 7, 8, 9, and 200 series, geometry performance increased by about 3x, whereas shader/compute performance increased by about 150x.
The series after that, the GeForce 400 series, increased it by a further 8x, roughly in line with the Radeon in the PS4. It will probably take a while for developers to use that capacity in the most optimal way for the largest visual impact, though, which we are starting to see in games coming out over the next few years.
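A quick back-of-the-envelope with those factors, taking the post's numbers at face value:

```python
# Using the factors quoted above (taken at face value, not
# independently verified).
geometry_gain = 3                    # GeForce 5 -> 200 series
shader_gain = 150                    # same span
shader_gain_400 = shader_gain * 8    # GeForce 400 adds a further ~8x

print(f"Shader throughput grew ~{shader_gain / geometry_gain:.0f}x faster "
      f"than geometry over the 5 -> 200 span.")
print(f"Cumulative shader gain through the 400 series: ~{shader_gain_400}x")

# That lopsided ~50x ratio is why modern engines spend their budget on
# shader/compute effects rather than on raw polygon counts.
```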



--::{PC Gaming Master Race}::--