jonathanalis said:
Oh, thank you.
Seems pretty simple, can run in linear time.
So simple that I think I'm going to try it out in MATLAB: take a 4K image, keep the red and orange pixels, calculate the green pixels from them, and form an image. Then I can compare it with the original 4K image using an image quality metric. I could also compare against the 1080p and 1440p versions of the original.
(I'm more of an image processing guy than a computer graphics guy, so I think that's all I can do.)
I think I will present it in a new topic, but I'll let you know when it is ready.
But for this Horizon shot, I had to zoom in a lot to see any jaggies, and I don't know whether some of them come from JPG compression or from the checkerboard rendering. I don't think I can perceive them from more than 1.5 metres from the TV.
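A minimal sketch of the experiment described above, in Python/NumPy rather than MATLAB: keep half the pixels of a full-resolution image in a checkerboard pattern, fill the missing pixels from their horizontal neighbours, and score the reconstruction with PSNR as the quality metric. The image here is a synthetic gradient, not an actual 4K frame, and the filter is plain averaging, so this only illustrates the pipeline, not any console's real reconstruction.

```python
import numpy as np

def checkerboard_mask(h, w):
    """True where a pixel is rendered this frame (half of all pixels)."""
    yy, xx = np.mgrid[0:h, 0:w]
    return (yy + xx) % 2 == 0

def reconstruct(img, mask):
    """Fill unrendered pixels by averaging their left/right rendered neighbours."""
    out = img.astype(np.float64).copy()
    padded = np.pad(out, ((0, 0), (1, 1)), mode='edge')
    left, right = padded[:, :-2], padded[:, 2:]
    out[~mask] = ((left + right) / 2)[~mask]
    return out

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

h, w = 64, 64
img = np.linspace(0, 255, h * w).reshape(h, w)  # smooth synthetic "frame"
mask = checkerboard_mask(h, w)
rec = reconstruct(img, mask)
print(round(psnr(img, rec), 1))
```

Swapping the synthetic gradient for a real 4K capture (and PSNR for SSIM or similar) would give the comparison against the 1080p and 1440p versions.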
It's a bit more sophisticated than that gif suggests.
The PS4 Pro automatically generates an ID buffer while rendering, which helps with anti-aliasing:
www.eurogamer.net/articles/digitalfoundry-2016-inside-playstation-4-pro-how-sony-made-a-4k-games-machine
Right now, post-process anti-aliasing techniques like FXAA or SMAA have their limits. Edge detection accuracy varies dramatically. Searches based on high contrast differentials, depth or normal maps - or a combination - all have limitations. Sony had fashioned its own, highly innovative solution.
"We'd really like to know where the object and triangle boundaries are when performing spatial anti-aliasing, but contrast, Z [depth] and normal are all imperfect solutions," Cerny says. "We'd also like to track the information from frame to frame because we're performing temporal anti-aliasing. It would be great to know the relationship between the previous frame and the current frame better. Our solution to this long-standing problem in computer graphics is the ID buffer. It's like a super-stencil. It's a separate buffer written by custom hardware that contains the object ID."
It's all hardware-based, written at the same time as the Z buffer with no pixel shader invocation required, and it operates at the same resolution as the Z buffer. For the first time, objects and their coordinates in world-space can be tracked, and even individual triangles can be identified. Modern GPUs don't offer this access to triangle data without a huge impact on performance.
"As a result of the ID buffer, you can now know where the edges of objects and triangles are and track them from frame to frame, because you can use the same ID from frame to frame," Cerny explains. "So it's a new tool to the developer toolbox that's pretty transformative in terms of the techniques it enables. And I'm going to explain two different techniques that use the buffer - one simpler that's geometry rendering and one more complex, the checkerboard."
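To see why an ID buffer makes edge detection so much simpler than the contrast/depth/normal heuristics mentioned above, here is a toy sketch (my own illustration, not Sony's implementation): with per-pixel object IDs, a pixel lies on a geometry edge exactly when a 4-connected neighbour carries a different ID, with no thresholds to tune.

```python
import numpy as np

def id_edges(id_buf):
    """Boolean mask of pixels whose object ID differs from any 4-neighbour."""
    edges = np.zeros(id_buf.shape, dtype=bool)
    edges[:, :-1] |= id_buf[:, :-1] != id_buf[:, 1:]   # differs from right neighbour
    edges[:, 1:]  |= id_buf[:, 1:]  != id_buf[:, :-1]  # differs from left neighbour
    edges[:-1, :] |= id_buf[:-1, :] != id_buf[1:, :]   # differs from neighbour below
    edges[1:, :]  |= id_buf[1:, :]  != id_buf[:-1, :]  # differs from neighbour above
    return edges

# Toy ID buffer: object 1 occupies a 4x4 square on a background of object 0.
ids = np.zeros((8, 8), dtype=np.int32)
ids[2:6, 2:6] = 1
print(id_edges(ids).sum())
```

An exact, unambiguous edge mask like this is what contrast-based searches (FXAA/SMAA) can only approximate.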
<snip>
Checkerboarding up to full 4K is more demanding and requires half the basic resolution - a 1920x2160 buffer - but with access to the triangle and object data in the ID buffer, beautiful things can happen as technique upon technique layers over the base checkerboard output.
"First, we can do the same ID-based colour propagation that we did for geometry rendering, so we can get some excellent spatial anti-aliasing before we even get into temporal, even without paying attention to the previous frame, we can create images of a higher quality than if our 4m colour samples were arranged in a rectangular grid... In other words, image quality is immediately better than 1530p," Cerny explains earnestly.
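A speculative illustration of the "ID-based colour propagation" idea: since the ID buffer exists at full resolution while colour is only half-sampled, an unrendered checkerboard pixel can be filled only from neighbours that share its object ID, which keeps colours from bleeding across object boundaries. The function names and details here are mine, not taken from Sony's technique.

```python
import numpy as np

def propagate(color, ids, mask):
    """Fill pixels where mask is False using rendered 4-neighbours with the same ID."""
    h, w = ids.shape
    out = color.astype(np.float64).copy()
    for y, x in zip(*np.where(~mask)):
        samples = []
        for dy, dx in ((0, -1), (0, 1), (-1, 0), (1, 0)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and ids[ny, nx] == ids[y, x]:
                samples.append(out[ny, nx])
        if samples:
            out[y, x] = np.mean(samples)
    return out

ids = np.zeros((4, 4), dtype=np.int32)
ids[:, 2:] = 1                                               # two objects, split vertically
mask = (np.add.outer(np.arange(4), np.arange(4)) % 2) == 0   # rendered checkerboard pixels
color = np.where(ids == 0, 10.0, 200.0)                      # true surface colours
color[~mask] = 0.0                                           # unrendered pixels start empty
filled = propagate(color, ids, mask)
print(filled[0, 1], filled[1, 2])
```

Naive averaging at the object boundary would blend 10 and 200; the ID check keeps each filled pixel on its own side.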
"Second, we can use the colours and the IDs from the previous frame, which is to say that we can do some pretty darn good temporal anti-aliasing. Clearly if the camera isn't moving we can insert the previous frame's colours and essentially get perfect 4K imagery. But even if the camera is moving or parts of the scene are moving, we can use the IDs - both object ID and triangle ID - to hunt for an appropriate part of the previous frame and use that. So the IDs give us some certainty about how to use the previous frame."
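The temporal side of that quote can be sketched the same way (again my own simplification, assuming no camera motion and omitting motion vectors, triangle IDs, and real reprojection): reuse the previous frame's colour at a pixel only when the object ID there matches the current frame's ID, and reject the history where it doesn't.

```python
import numpy as np

def temporal_resolve(curr_color, curr_id, prev_color, prev_id):
    """Blend with history where IDs agree; keep only the current colour where they don't."""
    same_surface = curr_id == prev_id
    out = curr_color.astype(np.float64).copy()
    out[same_surface] = 0.5 * curr_color[same_surface] + 0.5 * prev_color[same_surface]
    return out

curr_id = np.array([[0, 0], [0, 1]])   # object 1 newly appeared at one pixel
prev_id = np.zeros((2, 2), dtype=int)
curr = np.full((2, 2), 100.0)
prev = np.full((2, 2), 200.0)
out = temporal_resolve(curr, curr_id, prev, prev_id)
print(out)
```

The ID match is what gives the "certainty" Cerny mentions: without it, a disocclusion or fast-moving object would smear stale history colours into the new frame (ghosting).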