fatslob-:O said:
Good read and all, but from what I heard, hasn't AMD always used fixed-function logic for video encoding via its UVD and VCE blocks? GCN was a good reason for physics and all, but I'd be willing to bet it was more meant for tiled Forward+ rendering, going by that AMD Leo demo they showcased.
UVD is a decoder that works in conjunction with the GPU pipelines when doing post-processing work, and it's the reason I used Vista even though Vista was a terrible OS on so many fronts (needed that EVR; CPUs used to be terrible at decoding HD video. These days though, who gives a shit lol, CPU decoding is just fine and often has better quality thanks to software). The encoding side of Avivo, on the other hand, runs on the stream processors, since it's done via all that parallel processing, but it was really limited by the hardware implementation when it came to OpenCL (one reason GCN was introduced).

The point is, both of those eat GPU power in some way, whereas Intel's way puts no load on the GPU or CPU if you use the QuickSync block embedded in their iGPU for encoding. Think of it like the Wii U transferring images to its gamepad: it has a dedicated encoder that sends the stream over 5 GHz wireless straight to the gamepad to get decoded. Only in the case of the X1 and PS4 it'd be used for live streaming or YouTube, and the quality will suck unless they put in a chip that can do crazy quality at good bitrates.
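To make that QuickSync contrast concrete, here's a minimal sketch (assuming an ffmpeg build with the h264_qsv encoder and an Intel iGPU; the filename and bitrate are made-up examples, not anything from the article) encoding the same clip once on the fixed-function block and once in software:

```python
# Rough sketch: hardware vs. software encode of the same clip via ffmpeg.
# Assumes an ffmpeg build compiled with h264_qsv (Intel QuickSync) support;
# "gameplay.mp4" and the 6 Mbps bitrate are illustrative assumptions.
import subprocess

def encode(codec: str, out: str) -> None:
    # -c:v picks the encoder: h264_qsv runs on the fixed-function block in
    # the iGPU, libx264 runs entirely on the CPU cores.
    subprocess.run(
        ["ffmpeg", "-y", "-i", "gameplay.mp4",
         "-c:v", codec, "-b:v", "6M", out],
        check=True,
    )

encode("h264_qsv", "hw_encode.mp4")  # offloaded to the dedicated encoder
encode("libx264", "sw_encode.mp4")   # software: eats CPU, better quality per bit
```

The hardware path barely touches the CPU or the shader cores, while the software encoder usually wins on quality at the same bitrate, which is exactly the trade-off at play here.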
ps: I'm not asking them to do crazy-ass physics; even dedicating a little bit to it can do wonders. If we're working with consoles, we can tax the hardware in different ways (GPU+CPU in conjunction; for calculation purposes, the bandwidth requirements really aren't super high, as the back-of-envelope below shows) since the resources won't change, whereas PC can just brute-force everything.
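On that bandwidth claim, a quick back-of-envelope (all the counts and sizes here are my own assumptions, not measured figures):

```python
# Back-of-envelope: cost of shuttling rigid-body state between CPU and GPU
# every frame. Body count, state size, and frame rate are assumed values.
bodies = 10_000        # active rigid bodies
state_bytes = 64       # position, orientation, linear/angular velocity
fps = 60
traffic = bodies * state_bytes * fps   # bytes per second
print(f"{traffic / 1e6:.1f} MB/s")     # 38.4 MB/s
```

That's roughly 38 MB/s against the PS4's ~176 GB/s of GDDR5 bandwidth, so moving physics state around really isn't the bottleneck; it's the compute time you have to budget for.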