Ascended_Saiyan3 said:
You need to read my post at the bottom of page 16. The one talking about the GDC (Game Developers Conference) 2009. The PhyreEngine is available for ALL PS3 developers. You are talking about the past. Remember, games are in development for quite some time (2 to 3 years). I guess you could blame Sony for coming out with a new concept to push gaming forward. New concepts ALWAYS require a learning period. A LOT of these developers just didn't want to learn how to parallelize their code. Now they are starting to understand that learning the Cell helps them on PC with multi-core programming. Bad code is their worst enemy.
I read that post and the slides. From what I read, it does not provide an API that hides the need to plan your data structures and execution path around parallel processing; it provides an API that reduces the grunt work of delegating some rendering tasks to the SPUs. You will still need to rework existing engines to break down a process that is monolithic now and reassemble it before final rendering.
To get concrete, they give an example of delegating depth-of-field preprocessing to an SPU, then using the data from that to apply an appropriate blur on the GPU. But to do this you will need to coordinate a split in your execution path after the depth buffer is populated, then find something for the CPU/SPUs and GPU to be doing while that is going on that will not affect the depth buffer and is not dependent on the depth-of-field data. Once the preprocessing is done, you need to merge your execution paths, apply the focus data returned, and forward the complete information to the GPU for rendering. Figuring out what tasks can be done in parallel is the hard part, not handing a task off to another processor.

If the GPU is not still rendering the last frame, there is really not much it can do: you cannot have it modify vertex data, because that would potentially change the depth buffer, and you cannot have it apply any pixel effects, because it will just need to do them again once it has the depth-of-field data. Obviously you can be doing AI, IO, triangle-level lighting, game mechanics, etc. on the SPUs/CPU, but you could be doing that if the GPU was doing this calculation too. There are lots of ways you could take advantage of this, but they still require you to understand the underlying architecture and to design your application accordingly.
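The split/merge I'm describing is basically a fork/join, and you can sketch it in plain C++ with std::async standing in for an SPU job. Everything here is hypothetical (the function names, the depth values, treating the blur as a per-pixel radius); nothing is PhyreEngine API, it just shows the shape of the coordination problem:

```cpp
#include <cmath>
#include <future>
#include <numeric>
#include <vector>

// Stand-in for the SPU-side depth-of-field preprocessing: blur radius
// grows with distance from the focal plane.
std::vector<float> compute_blur_radii(const std::vector<float>& depth, float focal) {
    std::vector<float> blur(depth.size());
    for (std::size_t i = 0; i < depth.size(); ++i)
        blur[i] = std::fabs(depth[i] - focal);
    return blur;
}

float simulate_frame() {
    std::vector<float> depth_buffer = {1.0f, 2.0f, 3.0f, 4.0f};
    float focal_plane = 2.5f;

    // Fork: once the depth buffer is populated, hand the preprocessing
    // to another core, the way the slides hand it to an SPU.
    auto dof_job = std::async(std::launch::async,
                              compute_blur_radii, depth_buffer, focal_plane);

    // ...meanwhile the calling thread may only run work that neither
    // writes the depth buffer nor reads the blur data (AI, IO, game
    // mechanics, and so on)...

    // Join: block until the blur data is ready, then "apply" it -- here
    // just summed, as a stand-in for forwarding it to the GPU pass.
    std::vector<float> blur = dof_job.get();
    return std::accumulate(blur.begin(), blur.end(), 0.0f);
}
```

The hard part is not the two std::async lines; it is proving that whatever you put in the "meanwhile" gap really is independent of both buffers.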
What developers need is an engine that provides them a model where they do not need to make hard decisions often, where they can continue to think at a high level of abstraction about their game. To give a real-world, non-technical example, think of McDonald's, or any local fast food restaurant. You place your order at a high level of abstraction: you say "I want a combo meal #1." The order is placed, and the relevant parts are routed in parallel to the grill cook, the fry cook, the drink machine, and the order assembler. All the parts arrive at the assembly area, are put in a bag, and are handed to you quicker than they would be if one person did the entire process. A central shared team did the complex thinking about the problem and figured out the optimal way to process a menu order in parallel, so the people cranking out the orders do not need to think hard.
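In code terms, the fast-food model is an engine whose API takes one high-level request and internally decomposes it into parallel jobs; the caller never sees the fork or the join. A toy sketch, with all names made up for the analogy and nothing taken from a real engine:

```cpp
#include <future>
#include <string>
#include <vector>

// The "kitchen stations"; each could run on its own core.
std::string make_burger() { return "burger"; }
std::string make_fries()  { return "fries"; }
std::string pour_drink()  { return "drink"; }

// The caller's entire interface: order combo meal #1.
std::vector<std::string> order_combo_meal_1() {
    // The "central shared team" baked the parallel plan in here once...
    auto burger = std::async(std::launch::async, make_burger);
    auto fries  = std::async(std::launch::async, make_fries);
    auto drink  = std::async(std::launch::async, pour_drink);
    // ...and the order assembler joins the parts into one bag.
    return {burger.get(), fries.get(), drink.get()};
}
```

The point is who carries the cognitive load: the decomposition lives inside order_combo_meal_1(), written once by the engine team, not rediscovered by every caller.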
My impression of where the game development world is going is toward this model. Not because developers are money hungry, but because they cannot afford to build these engines just for their own use. Yes, the PS3 dev kit continues to make progress toward this goal, as do third parties like Unreal Engine, but it does not appear that we are there yet in terms of fully leveraging the PS3 hardware.
And before anyone goes and makes the upscale-restaurant counterexample: I will believe that when I see high-end games selling for ten times what the basic games do and selling to smaller audiences, because that is the real parallel (no pun intended :).







