
Tech Talk Part 1: PS4 GPU Compute Explained, and some other stuff (AC: Unity CPU load, XB1 ESRAM 204GB/s, etc.)

TheAdjustmentBureau said:
Intrinsic said:
TheAdjustmentBureau said:
No offense. But I trust a developer over a forum poster.

There are a lot of know-it-all articles on the net written by people with agendas.

None taken... You could also, you know, try to educate yourself about these things. By that logic, if a firefighter tells you fire isn't gonna burn you when you put your hand in it, you should trust him just because he is a firefighter.

What you are saying is basically that a dev/producer cannot lie about the game they are making. And sorry, what exactly is my agenda here?

Ah well, never mind.


A lot of gamers, and articles written by supposed tech sites, are always contradicting developers. At the end of the day, people like to hate and go against the grain. People often think they know more than those with a multitude of experience using a product from the inside.

 

Ubisoft isn't the first developer to talk about the console CPUs being a limiting factor, and they won't be the last. The internet has recently imploded again over why people demand, and think, the PS4 should be much further ahead of the Xbox One. Because frankly we aren't seeing what Sony promised. Same as what Sony promised last gen: leagues ahead of the 360 in power, etc.

 

I expect this debate to still be going in 2017, with people still asking why the PS4 isn't leagues ahead, or why a game isn't 1080p. Then cue the parity clause BS, when parity has never referred to visuals, only release timeframes.

 

So I trust Ubisoft, CD Projekt and the others who have said the same thing so far. But don't waste your time on me. Like I said, many online will agree with your opinion, and will find tech articles which apparently know more than the developers.

 

 

Everyone has agendas, whether that is to inform or to make money.

Guess which one the Developers generally have.



In this day and age, with the Internet, ignorance is a choice! And they're still choosing Ignorance! - Dr. Filthy Frank


GPGPU is already a standard in game development for creating great-looking visual effects. Offloading CPU-heavy tasks is common too, just not as much. Some tasks can't be parallelized that easily, or are not always that much faster when calculated on the GPU (that is, if the task isn't well suited to being computed in parallel and has a lot of synchronization work to be done).
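To illustrate the parallelization point, here's a toy sketch in Python (the function names and workload are made up; a real GPGPU task would be a shader, not Python). An element-wise operation has no cross-element dependencies, so it maps cleanly onto parallel compute lanes; a running sum forces each step to wait for the previous one.

```python
from multiprocessing.dummy import Pool  # thread-backed pool, stdlib only

def parallel_friendly(data):
    """Each element is independent -- the kind of work a GPU eats up."""
    with Pool(4) as pool:
        return pool.map(lambda x: x * x, data)

def serial_bound(data):
    """Each step depends on the previous result -- little to gain from a GPU."""
    acc = 0
    out = []
    for x in data:
        acc = acc + x  # dependency chain: step i needs step i-1
        out.append(acc)
    return out

print(parallel_friendly([1, 2, 3, 4]))  # [1, 4, 9, 16]
print(serial_bound([1, 2, 3, 4]))       # [1, 3, 6, 10]
```

The second function can still be parallelized with clever tricks (parallel prefix sums), but only with the extra synchronization work the post above mentions.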

If cloud-based gaming becomes the standard in the future, the PS4 may get the short end of the stick. Offloading CPU tasks to the cloud would free up a lot of other resources. Offloading CPU tasks to the GPU is limited in one way or another and not always the best solution.

Anyway, this is just talk atm. Imo cloud-based gaming still has a long way to go.



Intrinsic said:

I posted a reply in another thread trying to explain this and figured that I should make a thread so we could discuss this stuff. I'm considering making a series of threads, all starting with "Tech Talk", where we could discuss certain hardware, software, rendering and middleware-related topics, in the hope that those who find stuff like this interesting will learn from them. So this is the first of the Tech Talk series. And please, anyone is free to make a Tech Talk thread and talk about anything tech-related in relation to the PS4, XB1, WiiU or PC.

This talk is primarily focused on PS4 GPU compute, but I touch on the Xbox ESRAM (lightly, cause it deserves its own thread) and the Assassin's Creed Unity 900p thing. Enjoy.

First off, yes, using compute means that you will have to do fewer typical GPU tasks, cause it runs on the same GPU pipeline. But that's just half the story. There are two ways that compute can be used.

 

  1. Render-pipeline-assisted compute. In this case you would be using just a little of the available 64 compute lanes, and thus a little of the GPU, to handle tasks that would in turn boost the overall yield and performance of the GPU. In an example Cerny gave:

     Say you are running a game that is extremely geometry-heavy on the GPU. You can add code that makes a compute pass on the render pipeline to identify all the front-facing polygons (the ones the gamer can see) of all geometry objects in the scene, and the GPU will then render only those polygons, omitting the rest, even though it is getting the complete geometry render instruction set from the CPU. This way it would be possible to render 3-5 times more polygons (not sure exactly how much more, but basically more) than your engine would typically have handled, or the GPU will spend less time carrying out that specific task, giving it more frame time for other tasks.

    This could be used in more elaborate ways, like marking out certain objects in a scene for AA passes. The small amount of the GPU used to pick out aspects of a frame that wouldn't need an AA pass, shadow detail, etc. ends up freeing GPU power to do more than it otherwise would have been able to, or to do something else entirely.

  2. Idle-time compute. Contrary to what some may think, the GPU is not 100% active all the time. There are times when only somewhere between 60-80% of the GPU is being used. To put this in perspective, you need to understand exactly what is being referred to here.

    Take a game running at 30fps. The CPU/GPU basically has 33ms to render each individual frame. That doesn't mean the GPU spent 33ms rendering the frame. Don't forget the GPU is just one part of the equation: it had to wait for the updated scene render instructions from the CPU. So within those 33ms, the GPU could very well be idle for anything between 10ms and 20ms. That idle time could be used for GPU compute to take some of the workload off the CPU, letting the CPU finish its work faster and cutting the GPU idle time down from 10-20ms to, say, 5-10ms. This is how compute on the PS4 can assist the CPU, which in turn can lead to either higher or more stable framerates in games, or just more physics/AI/lighting calculations all around.

    AC: UNITY SECTION (if you understand the above point about CPU/GPU render times, then you will also understand why what Ubisoft said, about the CPU workload being responsible for AC: Unity running at 900p on both consoles, is just bullshit. A CPU load just means that the GPU has less time to do its thing, which is more likely to affect framerate than anything else because of how GPUs work. This is where it gets interesting. The only way what Ubisoft is saying is true is if the XB1 CPU is better than the PS4's, or if the game is better optimized for the XB1 [which brings us back to the whole parity nonsense]. It's interesting because if, say, the CPU-heavy load takes 25ms to complete its task for the next frame and then passes the render instructions off to the GPU, then theoretically limiting the resolution means they wanted to give the GPU less work, so it spits out the frame on time, still hitting that 33ms limit. But this is where there is a problem with the story. If each GPU has only 8ms to complete its task and the XB1 completes that with a 900p frame, what happened to the 40% more GPU that the PS4 has? The PS4 should be able to complete the exact same task about 40% faster than the XB1. So if the time is constant, it means they could simply have allowed the PS4 to do more work and it would still have met that render time limit. Unless, of course, they want you to believe that the XB1 CPU completed its work much, much faster than the PS4's CPU, so that the PS4's more powerful GPU had less time to render the frame than the XB1's, and the PS4's extra power went into merely matching the XB1. Which simply isn't the case.)
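The arithmetic behind that argument is easy to check. A rough sketch (the 8ms GPU slice is the hypothetical figure from the example above, and the "more GPU" figure here uses the 18-vs-12 compute unit counts):

```python
# Pixel counts for the two resolutions in question
px_900p = 1600 * 900     # 1,440,000 pixels
px_1080p = 1920 * 1080   # 2,073,600 pixels

# 1080p pushes 44% more pixels per frame than 900p...
print(px_1080p / px_900p)  # 1.44

# ...which is roughly the PS4's raw GPU advantage (18 CUs vs 12 CUs)
print(18 / 12)             # 1.5

# At 30fps each frame has ~33ms of total budget
frame_budget_ms = 1000 / 30
print(round(frame_budget_ms, 1))  # 33.3

# If the XB1 GPU fills a 900p frame in a hypothetical 8ms slice,
# a ~1.5x-faster GPU could fill a 1080p frame in about the same slice:
print(round(8 * 1.44 / 1.5, 2))   # 7.68
```

So under the post's own assumptions, the extra pixel load of 1080p roughly cancels against the extra GPU throughput, which is the core of the objection to Ubisoft's explanation.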
I hope that clears it up; it's not an "ok, the GPU is doing compute so everything must suffer" type thing. A lot of work was put into the PS4 architecture to make this assisted GPGPU stuff possible and effective, and there is a reason a lot of devs are saying it's going to be a big deal down the road. But everything I just explained here would take more work to do, and will require that the game code is written specifically to take advantage of stuff like this. That's why Cerny doesn't expect them to be using it for another 2-3 years.
I could explain the whole ESRAM thing too... and how the whole 204GB/s two-way bandwidth is just BS, but that would make this post much longer than it is. I would put it this way: realistically, you will never need to read and write data to memory simultaneously, especially when the memory in question is used to store the frame buffer (graphics-based functions). When data gets into the frame buffer, the next stop is your display, and then the next frame gets loaded into the frame buffer. If reading and writing simultaneously were such a big deal, why does anyone think MS is making such a big deal about new APIs in their SDKs allowing devs to tile data in ESRAM? Also remember that the CPU does not have access to ESRAM.
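To put the ESRAM point in numbers, a back-of-the-envelope sketch (the render-target counts and formats are illustrative assumptions, not any actual game's layout):

```python
ESRAM_MB = 32  # size of the XB1's fast on-die memory

def target_mb(width, height, bytes_per_pixel):
    """Size of a single render target, in MB."""
    return width * height * bytes_per_pixel / (1024 * 1024)

# One 32-bit (4 bytes/pixel) colour buffer fits easily either way:
print(round(target_mb(1920, 1080, 4), 1))  # 7.9 MB at 1080p
print(round(target_mb(1600, 900, 4), 1))   # 5.5 MB at 900p

# But a deferred renderer writes a G-buffer: several targets plus depth.
# Assume five 4-byte targets in total (illustrative, not an actual layout):
print(round(5 * target_mb(1920, 1080, 4), 1))  # 39.6 MB -- over the 32MB limit
print(round(5 * target_mb(1600, 900, 4), 1))   # 27.5 MB -- squeezes in at 900p
```

Which is why tiling APIs matter: without them, devs either shrink the resolution until the working set fits in the 32MB, or spill targets out to the much slower DDR3.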


Love to hear your explanation of ESRAM. I believe there is more to ESRAM than just a frame buffer.



KreshnikHalili said:
GPGPU is already a standard in game development for creating great-looking visual effects. Offloading CPU-heavy tasks is common too, just not as much. Some tasks can't be parallelized that easily, or are not always that much faster when calculated on the GPU (that is, if the task isn't well suited to being computed in parallel and has a lot of synchronization work to be done).

If cloud-based gaming becomes the standard in the future, the PS4 may get the short end of the stick. Offloading CPU tasks to the cloud would free up a lot of other resources. Offloading CPU tasks to the GPU is limited in one way or another and not always the best solution.

Anyway, this is just talk atm. Imo cloud-based gaming still has a long way to go.

GPUs are a lot faster than CPUs; there is just less specialized work that a GPU can do.

That being said, tasks that can be offloaded from the CPU to the cloud can also be offloaded to the GPU, freeing up cloud compute options.

Thus even if cloud-based gaming becomes the standard, the PS4 would be able to do both, which is still an advantage over only being able to do one. The disadvantage being that MS's network of servers will probably be better adapted than a modified Gaikai.

The limitations of the cloud far exceed the limitations of the GPU, especially latency and the whole speed-of-light thing.
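For anyone who wants the speed-of-light point in numbers, a rough sketch (the 1000km distance and the fiber slowdown factor are illustrative assumptions; real routing and queueing only add to this):

```python
# Best-case round trip to a cloud server -- pure physics, no routing overhead.
LIGHT_KM_PER_MS = 300    # light travels ~300 km per millisecond in vacuum
FIBER_SLOWDOWN = 1.5     # signals in fiber travel ~1.5x slower than in vacuum

def round_trip_ms(distance_km):
    """Minimum physically possible round-trip time to a server."""
    return 2 * distance_km * FIBER_SLOWDOWN / LIGHT_KM_PER_MS

frame_budget_ms = 1000 / 30   # ~33.3 ms per frame at 30fps
print(round_trip_ms(1000))    # 10.0 -- a server 1000 km away costs 10ms minimum
print(round(round_trip_ms(1000) / frame_budget_ms, 2))  # 0.3 of the frame budget
```

A GPU compute job, by contrast, turns around within the same frame, which is why latency-sensitive work stays local no matter how much compute the cloud offers.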




TheAdjustmentBureau said:
the-pi-guy said:
TheAdjustmentBureau said:

A lot of gamers, and articles written by supposed tech sites, are always contradicting developers. At the end of the day, people like to hate and go against the grain. People often think they know more than those with a multitude of experience using a product from the inside.

Ubisoft isn't the first developer to talk about the console CPUs being a limiting factor, and they won't be the last. The internet has recently imploded again over why people demand, and think, the PS4 should be much further ahead of the Xbox One. Because frankly we aren't seeing what Sony promised. Same as what Sony promised last gen: leagues ahead of the 360 in power, etc.

I expect this debate to still be going in 2017, with people still asking why the PS4 isn't leagues ahead, or why a game isn't 1080p. Then cue the parity clause BS, when parity has never referred to visuals, only release timeframes.

So I trust Ubisoft, CD Projekt and the others who have said the same thing so far. But don't waste your time on me. Like I said, many online will agree with your opinion, and will find tech articles which apparently know more than the developers.

Developers often contradict themselves.

The CPU is definitely a limiting factor, but on the frame rate. What has Sony promised this gen?

 

Most of the time they confirm what the developers have said, but if you just want to listen to the ones that share your viewpoint, even if a developer has said different in the past/present, then that's your choice.

I listen to the final product. Infamous used this extra sauce on PS4, and it doesn't look notably leagues ahead of AC Unity, Horizon 2, etc. Halo 2 and Half-Life 2 used all of the Xbox's extra sauce over the PS2, but weren't notably superior to Shadow of the Colossus or God of War 2. The Xbox One is powerful enough: over 10x more power than the 360. Quantum Break looks incredible, especially in the detail and effects department, and is said by Remedy to be native 1080p.

 

Until I see something on PS4 that notably blows away the Xbox One, all this is a lot of hot air.

 

Let the games do the talking. It's the same hot air as with cloud computing. In 2018, cloud computing could really help Xbox One visuals. But until I physically see much better games, it's just blah blah blah.

 

I'm with you here. Don't forget Infamous was pretty empty, with a max of 24 NPCs on screen due to a bottleneck, I think I read from the dev. And let's not get started on the Driveclub shambles.

Regarding the XB1, it seems it doesn't work well with deferred-rendering game engines (which is most of them), but give it a modern Forward+ engine, like FH2 and Ryse to some degree, and it performs much better.



TheAdjustmentBureau said:
No offense. But I trust a developer over a forum poster.

There are a lot of know-it-all articles on the net written by people with agendas.


I trust educated people more than the guys who are trying to sell me the game.



Intrinsic said:

I posted a reply in another thread trying to explain this and figured that I should make a thread so we could discuss this stuff. I'm considering making a series of threads, all starting with "Tech Talk", where we could discuss certain hardware, software, rendering and middleware-related topics, in the hope that those who find stuff like this interesting will learn from them. So this is the first of the Tech Talk series. And please, anyone is free to make a Tech Talk thread and talk about anything tech-related in relation to the PS4, XB1, WiiU or PC.

This talk is primarily focused on PS4 GPU compute, but I touch on the Xbox ESRAM (lightly, cause it deserves its own thread) and the Assassin's Creed Unity 900p thing. Enjoy.

First off, yes, using compute means that you will have to do fewer typical GPU tasks, cause it runs on the same GPU pipeline. But that's just half the story. There are two ways that compute can be used.

 

  1. Render-pipeline-assisted compute. In this case you would be using just a little of the available 64 compute lanes, and thus a little of the GPU, to handle tasks that would in turn boost the overall yield and performance of the GPU. In an example Cerny gave:

     Say you are running a game that is extremely geometry-heavy on the GPU. You can add code that makes a compute pass on the render pipeline to identify all the front-facing polygons (the ones the gamer can see) of all geometry objects in the scene, and the GPU will then render only those polygons, omitting the rest, even though it is getting the complete geometry render instruction set from the CPU. This way it would be possible to render 3-5 times more polygons (not sure exactly how much more, but basically more) than your engine would typically have handled, or the GPU will spend less time carrying out that specific task, giving it more frame time for other tasks.

    This could be used in more elaborate ways, like marking out certain objects in a scene for AA passes. The small amount of the GPU used to pick out aspects of a frame that wouldn't need an AA pass, shadow detail, etc. ends up freeing GPU power to do more than it otherwise would have been able to, or to do something else entirely.

  2. Idle-time compute. Contrary to what some may think, the GPU is not 100% active all the time. There are times when only somewhere between 60-80% of the GPU is being used. To put this in perspective, you need to understand exactly what is being referred to here.

    Take a game running at 30fps. The CPU/GPU basically has 33ms to render each individual frame. That doesn't mean the GPU spent 33ms rendering the frame. Don't forget the GPU is just one part of the equation: it had to wait for the updated scene render instructions from the CPU. So within those 33ms, the GPU could very well be idle for anything between 10ms and 20ms. That idle time could be used for GPU compute to take some of the workload off the CPU, letting the CPU finish its work faster and cutting the GPU idle time down from 10-20ms to, say, 5-10ms. This is how compute on the PS4 can assist the CPU, which in turn can lead to either higher or more stable framerates in games, or just more physics/AI/lighting calculations all around.

    AC: UNITY SECTION (if you understand the above point about CPU/GPU render times, then you will also understand why what Ubisoft said, about the CPU workload being responsible for AC: Unity running at 900p on both consoles, is just bullshit. A CPU load just means that the GPU has less time to do its thing, which is more likely to affect framerate than anything else because of how GPUs work. This is where it gets interesting. The only way what Ubisoft is saying is true is if the XB1 CPU is better than the PS4's, or if the game is better optimized for the XB1 [which brings us back to the whole parity nonsense]. It's interesting because if, say, the CPU-heavy load takes 25ms to complete its task for the next frame and then passes the render instructions off to the GPU, then theoretically limiting the resolution means they wanted to give the GPU less work, so it spits out the frame on time, still hitting that 33ms limit. But this is where there is a problem with the story. If each GPU has only 8ms to complete its task and the XB1 completes that with a 900p frame, what happened to the 40% more GPU that the PS4 has? The PS4 should be able to complete the exact same task about 40% faster than the XB1. So if the time is constant, it means they could simply have allowed the PS4 to do more work and it would still have met that render time limit. Unless, of course, they want you to believe that the XB1 CPU completed its work much, much faster than the PS4's CPU, so that the PS4's more powerful GPU had less time to render the frame than the XB1's, and the PS4's extra power went into merely matching the XB1. Which simply isn't the case.)
I hope that clears it up; it's not an "ok, the GPU is doing compute so everything must suffer" type thing. A lot of work was put into the PS4 architecture to make this assisted GPGPU stuff possible and effective, and there is a reason a lot of devs are saying it's going to be a big deal down the road. But everything I just explained here would take more work to do, and will require that the game code is written specifically to take advantage of stuff like this. That's why Cerny doesn't expect them to be using it for another 2-3 years.
I could explain the whole ESRAM thing too... and how the whole 204GB/s two-way bandwidth is just BS, but that would make this post much longer than it is. I would put it this way: realistically, you will never need to read and write data to memory simultaneously, especially when the memory in question is used to store the frame buffer (graphics-based functions). When data gets into the frame buffer, the next stop is your display, and then the next frame gets loaded into the frame buffer. If reading and writing simultaneously were such a big deal, why does anyone think MS is making such a big deal about new APIs in their SDKs allowing devs to tile data in ESRAM? Also remember that the CPU does not have access to ESRAM.

I mean, yeah, you pretty much summed all of this up. People need to understand that the PS4's advantage doesn't just come from having 50% more SPs. It also has:

-Over double the practical bandwidth

-A lot of optimizations for compute (Hence Ubisoft's latest tests showing the PS4 is twice as good at compute as the X1)

-No ESRAM bottleneck limiting how much detail can be squeezed into each frame.
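The bandwidth claim can be sanity-checked against the commonly quoted peak figures (these are nominal peaks, not measured "practical" numbers, so treat this as a sketch):

```python
# Commonly quoted peak bandwidth figures, in GB/s
PS4_GDDR5 = 176   # unified 8GB GDDR5 pool
XB1_DDR3 = 68     # 8GB DDR3 main memory
XB1_ESRAM = 109   # 32MB ESRAM, one direction (204 assumes simultaneous read+write)

print(round(PS4_GDDR5 / XB1_DDR3, 2))   # 2.59 -- vs main memory alone
print(round(PS4_GDDR5 / XB1_ESRAM, 2))  # 1.61 -- even vs ESRAM's one-way peak
```

The "over double" figure holds against DDR3, and ESRAM only closes the gap for whatever working set fits in its 32MB.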

 

The difference between the two consoles isn't slowing down much at all.  Just prepare for UC4 to make Halo 5 look half a generation behind.



Dark_Feanor said:
Intrinsic said:

Exactly. I have quickly learnt that people who are too quick to dismiss anything are usually the ones with agendas, especially when it's so easy to get valid and accurate information these days.

What's really funny is that everything I have said are things that developers have said themselves, or that anyone with a snippet of computer knowledge can verify for themselves with just a little research.

I would really have loved for him to point out whatever it was I said that he felt was not true. But he didn't, so I didn't bother explaining.


I think I'm missing your point again now.

Are we talking about games two years in the future, or the ones shipping right now?

You still sound as if there is some conspiracy behind AC:U's resolution parity, and you basically ignore all the other games that run at the same resolution with very close performance.


Nope. Not talking about that at all. Another guy just said something about only believing devs... so we showed him what devs were saying that corresponds with what I am saying here too. I think I have explained to you why the resolution is the same on AC: Unity. It has nothing to do with a conspiracy and it has nothing to do with performance. That reasoning also applies to every other game that falls into that category.



Callum_Alexander said:
Thanks for this thread, it's really informative! From the little I know of game development/programming, I knew Ubisoft were lying, but it's good to see it explained in detail by somebody who obviously knows their stuff. On a side note, is there anybody who can confirm that what the OP says is true? I always prefer to get a second opinion to see how they compare. Don't take that as an insult, OP, I just like to be certain!

It's all good. It's a forum after all; my views/explanations are meant to be challenged. But all the info I put here can be confirmed from numerous articles, interviews with devs, dev PDF documents, and my own personal experience and computer knowledge. All I have done is gather that info up and present it in an easy-to-understand and hopefully fun-to-read way.

I just did part two by the way.



TheAdjustmentBureau said:

I listen to the final product. Infamous used this extra sauce on PS4, and it doesn't look notably leagues ahead of AC Unity, Horizon 2, etc. Halo 2 and Half-Life 2 used all of the Xbox's extra sauce over the PS2, but weren't notably superior to Shadow of the Colossus or God of War 2. The Xbox One is powerful enough: over 10x more power than the 360. Quantum Break looks incredible, especially in the detail and effects department, and is said by Remedy to be native 1080p.

 

Until I see something on PS4 that notably blows away the Xbox One, all this is a lot of hot air.

 

Let the games do the talking. It's the same hot air as with cloud computing. In 2018, cloud computing could really help Xbox One visuals. But until I physically see much better games, it's just blah blah blah.

 

You are derailing this thread and turning it into some sort of fanboy war. All I have done here is talk about the tech of GPGPU and how it can be used in PS4 games. I did not once say it will make PS4 games leagues better than XB1 games. I even admitted that third parties will probably never use it. And then I talked a bit about AC: Unity and exactly why the reasons they gave for parity were stupid. I never said a game cannot be CPU-bound. I just cited examples that I had already used when explaining how GPGPU compute works to prove a point.

Do not make this a PS4 vs XB1 thread please.

Techmaster said:


Love to hear your explanation of ESRAM. I believe there is more to ESRAM than just a frame buffer.

Funny you should say that... I just made part two, and it's based on ESRAM.