trasharmdsister12 said:
There seems to be some confusion here, so I'll try to explain what this is and what it means for any future games that may use it, as well as the performance and image quality changes it may bring.

In layman's terms, anti-aliasing is the process of smoothing out the jagged edges in an image (in this case a frame rendered by the game engine) to improve how it looks. It has less to do with graphical fidelity and effects than with image quality (IQ), which describes how clear, defined, and smooth an image appears. Morphological anti-aliasing (MLAA) is just another method of doing this, and depending on the source image it will produce different results than multi-sample anti-aliasing (MSAA); sometimes better, sometimes worse.

To answer your first question: it's a software thing. It's a set of mathematical operations run on an image to locate the jaggies and smooth them out one way or another. The PS3 uses the Cell's heavily parallel architecture and its specialized processing units (the SPUs) to do this very efficiently. Until recently, the way the process was implemented wasn't a good fit for more standard hardware like the PC and 360. What these folks have done is "port" the technique so it runs on the GPU (which is massively parallel these days) instead of the CPU.

The PS3's architecture and GPU didn't allow for a simple implementation of standard MSAA, so developers opted for MLAA, which IMO has produced mixed results depending on how it's used (absolutely perfect in God of War 3, but it really mucked things up in Killzone 3). Again, that all depends on the source image (things like the art, contrast, colours in use, and the shapes of edges), and any technique can outperform the others under certain conditions or with certain edge patterns. The 360 never really needed MLAA because it could *potentially* get "free" MSAA from its eDRAM (that's getting off topic, so I won't elaborate right now) and the results were generally good enough.

With MLAA requiring ~2.5 ms of the GPU's full attention (assuming a standard-sized framebuffer for the system), a game running at 30 FPS would have ~30.8 ms of its ~33.3 ms frame left for everything else the engine needs the GPU for. That isn't a huge hit, but it's a hit nonetheless, so I'd assume developers will opt for the "free" MSAA where they can. If they can't, I'll be interested to see how they choose to implement this technique and work its benefits into their engine's workflow.
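Quick back-of-the-envelope on that frame budget, using the ~2.5 ms figure from the article:

```python
# Rough GPU frame budget at 30 FPS with a ~2.5 ms MLAA pass.
frame_budget_ms = 1000 / 30                # one frame at 30 FPS: ~33.3 ms
mlaa_cost_ms = 2.5                         # reported cost of the MLAA pass
remaining_ms = frame_budget_ms - mlaa_cost_ms
print(f"{remaining_ms:.1f} ms left for everything else")  # -> 30.8 ms
```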
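And if the "morphological" part sounds mysterious, here's a toy sketch of the basic idea. To be clear, this is just the edge-detect-and-blend concept, not the actual SPU or GPU implementation; the function name, the luma threshold, and the simple neighbour-averaging are all my own simplifications. Real MLAA classifies the detected edges into L/Z/U shapes and blends with coverage-based weights:

```python
import numpy as np

def luma(rgb):
    # Standard Rec. 601 luminance weights; edge detection runs on luma, not raw RGB.
    return rgb @ np.array([0.299, 0.587, 0.114])

def mlaa_toy(img, threshold=0.1):
    """Toy morphological AA on an (H, W, 3) float image in [0, 1]."""
    lum = luma(img)
    out = img.copy()
    # Step 1: find discontinuities (the "jaggies") via luma differences.
    h_edge = np.abs(lum[1:, :] - lum[:-1, :]) > threshold   # pixel vs the one below it
    v_edge = np.abs(lum[:, 1:] - lum[:, :-1]) > threshold   # pixel vs the one to its right
    # Step 2: blend across each detected edge to soften the staircase.
    ys, xs = np.nonzero(h_edge)
    out[ys, xs] = (img[ys, xs] + img[ys + 1, xs]) / 2
    ys, xs = np.nonzero(v_edge)
    out[ys, xs] = (img[ys, xs] + img[ys, xs + 1]) / 2
    return out
```

Feed it a hard black/white diagonal and you'll see grey transition pixels appear along the staircase, which is exactly the smoothing effect at work.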
Also, this is the third time in the last month that I've seen an article on this on VGChartz.
Thanks for sharing. This was a good explanation of what MLAA actually does.