Norion said:
Since about 75% of PS5 users go for performance mode, I dunno about that, but then again it seems my bf didn't notice much difference when he tried out my PC. I could get used to 60Hz again with little issue if I was forced to, but when setting it back to over 100 I'd probably feel amazed at the jump in smoothness all over again.
I'll likely be upgrading my monitor in the next 1-2 years, so I'll make the jump from 1440p 144Hz to 4K 240Hz, which should be plenty for close to a decade. Also I wonder just how much GPU power it would take to run something like maxed-out Cyberpunk at that fps lol.
That's where frame generation comes in. There's really not much point in rendering the entire frame 240 times per second. What you need are motion vectors for the elements on screen, so you can generate the in-between frames. GT7 does that now in VR on PS5 Pro to simulate 120fps from 60fps, and DLSS 3 does it on PC.
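As a toy illustration of interpolating from motion vectors (not how DLSS 3 or GT7 actually implement it; the function name and the numpy forward-warp are mine, and real frame generation also has to fill disocclusion holes and use optical flow for things motion vectors miss, like shadows and reflections):

```python
import numpy as np

def generate_midframe(frame, motion_vectors):
    """Warp `frame` forward by half of each pixel's motion vector
    to approximate the frame halfway to the next rendered one.
    frame: (H, W, 3) floats; motion_vectors: (H, W, 2) in pixels/frame."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Each pixel moves halfway along its (dx, dy) motion vector.
    tx = np.clip(np.rint(xs + 0.5 * motion_vectors[..., 0]), 0, w - 1).astype(int)
    ty = np.clip(np.rint(ys + 0.5 * motion_vectors[..., 1]), 0, h - 1).astype(int)
    mid = np.zeros_like(frame)
    mid[ty, tx] = frame  # forward splat; holes/disocclusions left unfilled here
    return mid
```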
That doesn't reduce input lag further, of course, but the difference between 8.3ms frametimes at 120 fps and 4.2ms at 240 fps won't be noticeable anyway. The human limit is about 13ms, which corresponds to roughly 77 fps.
https://www.pubnub.com/blog/how-fast-is-realtime-human-perception-and-technology/
Humans can respond to incoming visual stimuli at a maximum rate of about 13 milliseconds. Any data stream faster than this would surpass our perceptual limits.
Anything over 90fps only looks more stable while tracking/following moving objects with your eyes.
(Human response time is 150-200 ms, so the whole round trip between perceiving something and pressing shoot/jump is already mostly on the human side.)
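The arithmetic behind those numbers is just frametime = 1000 / fps:

```python
for fps in (60, 77, 120, 240):
    print(f"{fps:>3} fps -> {1000 / fps:.1f} ms per frame")
# 60 -> 16.7 ms, 77 -> 13.0 ms, 120 -> 8.3 ms, 240 -> 4.2 ms
# Against a 150-200 ms human reaction time, the ~4 ms saved going
# from 120 to 240 fps is only ~2-3% of the perceive-to-press loop.
```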
I also ran into this article; the human brain does all sorts of forward prediction:
https://www.sciencealert.com/to-help-us-see-a-stable-world-our-brains-keep-us-15-seconds-in-the-past#:~:text=Instead%20of%20seeing%20the%20latest,time%20can%20help%20stabilize%20perception.
While you can quickly respond to changes in the center of your vision, the total picture you live in is an amalgamation of the last 15 seconds!
From this you can also see why eye-tracked foveated rendering is the big cost saver for VR rendering. It would help with monitors as well, of course. It's pretty wasteful to render the entire screen at 4K resolution when you only see sharp detail in a little dot. Perhaps VR improvements will come back to flat-screen monitors: a large 27" 4K monitor with eye-tracking sensors could save tons of GPU power.
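To make the savings concrete, here's a minimal sketch of the compositing idea, with a dummy render function standing in for the engine's shading pass (real implementations, like PSVR2's, use variable rate shading in a single pass and blend the boundary rather than pasting a hard-edged inset):

```python
import numpy as np

def render(width, height):
    """Stand-in for the engine's shading pass at a given resolution."""
    return np.random.rand(height, width, 3)

def foveated_frame(out_w, out_h, gaze_x, gaze_y, fovea=256, scale=4):
    """Shade a cheap 1/scale-resolution background, then overwrite a
    full-resolution inset around the tracked gaze point."""
    low = render(out_w // scale, out_h // scale)
    # Nearest-neighbour upscale of the low-res background to output size.
    frame = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)
    # Full-res pixels only in the small region the fovea actually sees.
    x0 = max(0, min(out_w - fovea, gaze_x - fovea // 2))
    y0 = max(0, min(out_h - fovea, gaze_y - fovea // 2))
    frame[y0:y0 + fovea, x0:x0 + fovea] = render(fovea, fovea)
    return frame

frame = foveated_frame(3840, 2160, gaze_x=1900, gaze_y=1000)
# Shades 960*540 + 256*256 ≈ 584k pixels instead of 8.3M (~93% fewer).
```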
This is how your visual acuity drops off away from the center:
[chart: relative visual acuity vs. angle from the center of the visual field]
Sitting right in front of a PC monitor, it fills up to 60 degrees of your field of view. At 30 degrees from the center of your gaze you're down to about 10% acuity, or 9 pixels per degree (6 if you use the 20/20 baseline of 60 ppd instead; 90 ppd is roughly the limit at which people can still perceive a difference on screen).
9 pixels per degree over 60 degrees corresponds to only 540x303 for the whole screen. Yet game engines put as much effort into rendering where you aren't looking as where you are.
With eye tracking that can tell how far away you are sitting and where you are looking (plus the size of the screen), you can optimize the render resolution and add foveated rendering to save on pixel pushing and increase fps instead. Resolutions in games could then be defined in pixels per degree rather than fixed values that look different depending on screen size and viewing distance.
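A quick sketch of that math (the function names and the 27"/50 cm example figures are just illustrative assumptions):

```python
import math

def ppd(h_res, screen_width_cm, distance_cm):
    """Average pixels per degree across the screen width."""
    h_fov = 2 * math.degrees(math.atan(screen_width_cm / (2 * distance_cm)))
    return h_res / h_fov

def res_for_ppd(target_ppd, h_fov_deg, aspect=16 / 9):
    """Render resolution needed to hit a pixels-per-degree target."""
    w = target_ppd * h_fov_deg
    return int(w), int(w / aspect)

# A 27" 4K monitor is ~59.7 cm wide; at 50 cm it fills ~62 degrees:
print(round(ppd(3840, 59.7, 50)))  # ~62 ppd, right around the 20/20 baseline
print(res_for_ppd(9, 60))          # (540, 303), the peripheral floor from above
```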
Render smarter, not harder :)