dahuman said:
fatslob-:O said:
dahuman said:
fatslob-:O said:
dahuman said:
Anyways, to put this thread back on topic, end the discussion about why MS is not so stupid as to use the GPU this way, and explain why the article is a pile of dumb-ass shit, all while staying in layman's terms: GPU-accelerated video encoding is a terrible idea because it takes power away from games, since it runs the encode on the programmable pipelines (the GPU's actual horsepower, in layman's terms) through whatever API does the job. By contrast, Intel's way of doing it is much better, and that's most likely what the X1 and PS4 are both doing, which would be either a.) putting a fixed-function pipeline (low power consumption, whose sole purpose would be to encode the video output for live streaming, or to save it to the HDD at a small, reasonable file size after compression for uploading to services like YouTube) right in a little corner of the GPU, or b.) using a separate chip on the side for the same purpose. Using the actual GPU rendering pipelines for video encoding in this case would be absolutely boneheaded (I'd rather they use them for good physics if nothing else; they'd be wasting the idea behind GCN otherwise), and I don't want to believe that MS is that stupid, nor do I think they are.
So that article is shit, going by sound logic, but I've seen some pretty bad logic from corporations as well, so you never know...
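To make the fixed-function-vs-shaders point concrete, here's a rough Python sketch of handing a gameplay capture to ffmpeg and picking either a software encode (libx264, which burns CPU) or Intel's fixed-function Quick Sync block (h264_qsv). The file names and bitrate are made-up examples, and it assumes an ffmpeg build that actually exposes those encoders:

```python
# Rough sketch: encoding a gameplay capture with ffmpeg, either on the CPU
# (libx264) or on the fixed-function media block (Intel Quick Sync, h264_qsv).
# File names and bitrates below are illustrative assumptions only.
import subprocess

def encode(input_file: str, output_file: str, use_fixed_function: bool = True) -> None:
    # h264_qsv hands the H.264 encode to the dedicated media block in the iGPU,
    # leaving the shader cores (and the CPU) free for the game itself.
    codec = "h264_qsv" if use_fixed_function else "libx264"
    cmd = [
        "ffmpeg", "-y",
        "-i", input_file,            # raw or lightly compressed game capture
        "-c:v", codec,
        "-b:v", "5M",                # ~5 Mbit/s, a plausible live-stream bitrate
        "-c:a", "aac", "-b:a", "128k",
        output_file,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    encode("gameplay_capture.mkv", "stream_ready.mp4", use_fixed_function=True)
```

Either path produces the same kind of H.264 output; the difference is whose silicon pays for it while the game is running.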
|
Good read and all, but from what I've heard, hasn't AMD always used fixed-function logic for video encoding via UVD and VCE?
GCN is good for physics and all, but I'd be willing to bet it was more meant for Tiled Forward Plus rendering, going by that AMD Leo demo they showcased.
|
UVD is a decoder that works in conjunction with the GPU pipelines when doing post-processing work, and it's the reason I used Vista even though Vista was a terrible OS on so many fronts (needed that EVR; CPUs used to be terrible at decoding HD video. These days, though, who gives a shit lol, CPU decoding is just fine and often has better quality thanks to software). The encoding side of Avivo also runs on the stream processors, since it's done via all that parallel processing, but it's really limited by the hardware implementation of OpenCL (the reason GCN was introduced).

The point is that both of them eat GPU power in some way, whereas Intel's way puts no load on the GPU or CPU if you use the Quick Sync block embedded in their iGPU for encoding. Think of it like the Wii U transferring images to the Wii U GamePad: it has a dedicated encoder that sends the stream over 5 GHz wireless straight to the GamePad to get decoded. In the case of the X1 and PS4, it'd be used for live streaming or YouTube, and the quality will suck unless they put in a chip that can do crazy quality at good bitrates.
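Just to put numbers on the "good bitrates" part, here's a quick back-of-the-envelope sketch; the bitrates and clip length are pure assumptions for illustration, not anything MS or Sony have said:

```python
# Back-of-the-envelope sketch of why the encoder block's bitrate matters:
# rough file sizes for a recorded gameplay clip at a few H.264 bitrates.
# All numbers are illustrative assumptions, not console specs.
def clip_size_mb(bitrate_mbps: float, minutes: float) -> float:
    """Approximate file size in megabytes at a constant video bitrate."""
    megabits = bitrate_mbps * minutes * 60   # total megabits of video
    return megabits / 8                      # 8 bits per byte

if __name__ == "__main__":
    for mbps in (2.5, 5.0, 10.0):            # low / typical / generous stream bitrates
        print(f"{mbps:>4} Mbit/s for 15 min -> ~{clip_size_mb(mbps, 15):.0f} MB")
# ~281 MB at 2.5 Mbit/s, ~563 MB at 5 Mbit/s, ~1125 MB at 10 Mbit/s
```

So a weak encoder that needs high bitrates to look decent would bloat every saved clip and upload, which is exactly why the quality of that little fixed-function chip matters.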
|
You'd be surprised by Intel Quick Sync, though: it's the fastest fixed-function solution while also having better quality than those dedicated GPU encoders lol. I guess that's an epic fail on Nvidia's and AMD's part.
|
That will change once x265 is in full working order; it's still young, but it has beastly potential. Also read my edit lol.
|
Oh, my post wasn't questioning whether GCN can do crazy physics; that's where next-generation APUs like Kaveri, and maybe the PS4's, could come in handy. I was just saying that AMD's purpose in releasing GCN, on the graphics front rather than from a compute standpoint, was to show off forward rendering instead of deferred rendering, where AMD and Nvidia are pretty much on equal footing. The point was to gain an advantage in games such as DiRT Showdown, whose advanced lighting system moves back towards forward rendering, in order to pull the benchmarks in favor of AMD Radeon cards.
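For anyone wondering what Tiled Forward Plus actually does differently: it splits the screen into tiles and culls the lights per tile before a normal forward shading pass, so each pixel only loops over the lights that can actually touch it. Here's a minimal Python sketch of just that culling step; real engines do it in a compute shader with depth-aware tile frustums, and all the names and numbers here are made up for illustration:

```python
# Minimal sketch of the per-tile light culling step behind Forward+ (tiled
# forward) rendering: each screen tile keeps a list of only the lights whose
# screen-space radius overlaps it, so the forward shading pass stays cheap.
from __future__ import annotations
from dataclasses import dataclass

TILE = 16  # tile size in pixels, a common choice

@dataclass
class Light:
    x: float       # screen-space centre (pixels)
    y: float
    radius: float  # screen-space radius of influence (pixels)

def cull_lights(width: int, height: int, lights: list[Light]) -> dict[tuple[int, int], list[int]]:
    tiles_x = (width + TILE - 1) // TILE
    tiles_y = (height + TILE - 1) // TILE
    bins: dict[tuple[int, int], list[int]] = {}
    for i, light in enumerate(lights):
        # Range of tiles covered by the light's screen-space bounding square.
        x0 = max(int((light.x - light.radius) // TILE), 0)
        x1 = min(int((light.x + light.radius) // TILE), tiles_x - 1)
        y0 = max(int((light.y - light.radius) // TILE), 0)
        y1 = min(int((light.y + light.radius) // TILE), tiles_y - 1)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

if __name__ == "__main__":
    lights = [Light(100, 80, 40), Light(500, 300, 120)]
    bins = cull_lights(640, 360, lights)
    print(f"{len(bins)} tiles touched by {len(lights)} lights")
```

That per-tile binning is the compute-heavy part GCN is good at, which is why games built around it tend to benchmark well on Radeon cards.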