Ascended_Saiyan3 said:
billsalias said:
Two things many people seem to forget in these threads:

First, developers need to make money to stay in business, and a major change in how you develop software is a significant cost. Yes, the PS3 has the potential to do some amazing things with its architecture, but it requires training and effort to tap that no other system demands. If the PS3 had been released with an SDK that hid a lot of the complexity and allowed developers to tap this power without significantly changing how they work, then the PS3 would have been an undeniable technical success. But it was not, so developers are left with a difficult choice: invest in getting up to speed on a new way of doing things, or not leverage that power in an optimal way.

Second, even if you want to invest in becoming a "real" PS3 developer, finding engineers who can handle the concepts involved in developing for this architecture is difficult; hell, it is hard just finding engineers who are decent at multithreading. I have worked in software development for 20 years building multi-tiered client-server applications on parallel architectures, and the hardest part of my job has been finding people who understand the concepts involved well enough to do the work at an acceptable level. This is not just a matter of going to Amazon, buying a book on parallel processing, and spending a week playing around with it; there are difficult concepts involved that are beyond most people's grasp without significant education. Even then, a lot of people will simply never be able to think in the way required to be good at this sort of thing.

If you combine these factors with how the market share breaks down, it is not a good business decision for anyone but a first-party developer to become good at PS3 development. Just learn enough to get performance to match the 360 version.

So the point of all this is: don't blame the developers for not investing in learning the PS3; blame Sony for making them choose between staying in business and taking advantage of the PS3's potential.

You need to read my post at the bottom of page 16, the one talking about GDC (Game Developers Conference) 2009. The PhyreEngine is available to ALL PS3 developers. You are talking about the past. Remember, games are in development for quite some time (2 to 3 years). I guess you could blame Sony for coming out with a new concept to push gaming forward, but new concepts ALWAYS require a learning period. A LOT of these developers just didn't want to learn how to parallelize their code. Now they are starting to understand that learning the Cell helps them with multi-core programming on the PC. Bad code is their worst enemy.


I read that post and the slides. From what I read, it does not provide an API that hides the need to plan your data structures and execution path around parallel processing; it provides an API that reduces the grunt work of delegating some rendering tasks to the SPUs. You will still need to rework existing engines to break down a process that is monolithic today and reassemble it before final rendering.

To get concrete, they give an example of delegating depth-of-field preprocessing to an SPU, then using the data from that to apply an appropriate blur on the GPU. But to do this you need to coordinate a split in your execution path after the depth buffer is populated, then find something for the CPU/SPUs and GPU to do while that is going on that will not affect the depth buffer and is not dependent on the depth-of-field data. Once the preprocessing is done, you need to merge your execution paths, apply the focus data returned, and forward the complete information to the GPU for rendering. Figuring out which tasks can be done in parallel is the hard part, not handing a task off to another processor. If the GPU is not still rendering the last frame, there is really not much it can do: you cannot have it modify vertex data, because that would potentially change the depth buffer, and you cannot have it apply any pixel effects, because it will just need to redo them once it has the depth-of-field data. Obviously you can be doing AI, IO, triangle-level lighting, game mechanics, etc. on the SPUs/CPU, but you could be doing that if the GPU were doing this calculation too. There are lots of ways you could take advantage of this, but they still require you to understand the underlying architecture and design your application accordingly.
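To make the shape of that concrete, here is a rough sketch of the fork/join pattern I'm describing, in plain C++ with std::async standing in for a real SPU job queue. Every name in it (DepthBuffer, ComputeDepthOfField, and so on) is invented for illustration; this is not the PhyreEngine API, just the pattern.

```cpp
// Rough sketch only: std::async stands in for a real SPU job queue,
// and every type/function name here is invented for illustration.
#include <cstdio>
#include <future>
#include <vector>

struct DepthBuffer { std::vector<float> samples; };   // written by the first pass
struct FocusData   { std::vector<float> blurRadius; };

// Stand-in for the depth-of-field preprocessing you would push to an SPU.
FocusData ComputeDepthOfField(DepthBuffer depth) {
    FocusData out;
    out.blurRadius.reserve(depth.samples.size());
    for (float z : depth.samples)
        out.blurRadius.push_back(z * 0.1f);           // toy circle-of-confusion math
    return out;
}

// Work that neither touches the depth buffer nor needs the focus data.
void RunIndependentWork() { /* AI, IO, game mechanics, ... */ }

int main() {
    DepthBuffer depth{{1.0f, 2.5f, 4.0f}};            // depth buffer is populated

    // Fork: hand the preprocessing off to "another processor".
    auto dofJob = std::async(std::launch::async, ComputeDepthOfField, depth);

    // Meanwhile, keep busy with tasks that do not touch either side of the split.
    RunIndependentWork();

    // Join: merge execution paths; the GPU blur pass would consume this data.
    FocusData focus = dofJob.get();
    std::printf("blur radii computed: %zu\n", focus.blurRadius.size());
    return 0;
}
```

Note that the hard part is everything around the two async lines: proving that RunIndependentWork really is independent of both the depth buffer and the focus data. No API does that analysis for you.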

What developers need is an engine that gives them a model where they do not have to make hard decisions often, where they can continue to think about their game at a high level of abstraction. For a real-world, non-technical example, think of McDonald's, or any local fast food restaurant. You place your order at a high level of abstraction: you say "I want a combo meal #1." The order is placed, and the relevant parts are routed in parallel to the grill cook, the fry cook, the drink machine, and the order assembler. All the parts arrive at the assembly area, are put in a bag, and are handed to you quicker than they would be if one person did the entire process. A central, shared team did the complex thinking about the problem and figured out the optimal way to process a menu order in parallel, so the people cranking out the orders do not need to think hard.
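To put the analogy in code terms, here is a toy sketch in the same plain-C++ vein as above: the caller states a high-level request, and a central function fans the independent parts out in parallel and reassembles them. All names are made up for the example.

```cpp
// Toy sketch: a "combo meal #1" request fanned out in parallel and
// reassembled. All names invented; std::async again plays the job system.
#include <cstdio>
#include <future>
#include <string>

std::string Grill() { return "burger"; }
std::string Fry()   { return "fries"; }
std::string Pour()  { return "drink"; }

// The "central shared team" did the hard thinking once: these three
// stations are independent of each other, so they can run concurrently.
std::string PlaceComboOrder() {
    auto burger = std::async(std::launch::async, Grill);
    auto fries  = std::async(std::launch::async, Fry);
    auto drink  = std::async(std::launch::async, Pour);
    // The order assembler waits on every station and bags the result.
    return burger.get() + ", " + fries.get() + ", " + drink.get();
}

int main() {
    // The caller thinks at a high level of abstraction: just the order.
    std::printf("order ready: %s\n", PlaceComboOrder().c_str());
    return 0;
}
```

The point is that whoever wrote PlaceComboOrder did the dependency analysis once; the caller never thinks about threads at all.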

My impression is that the game development world is heading toward this model, not because developers are money hungry but because they cannot afford to build these engines just for their own use. Yes, the PS3 dev kit continues to make progress toward this goal, as do third parties like Unreal Engine, but it does not appear that we are there yet in terms of fully leveraging the PS3 hardware.

And before anyone makes the upscale-restaurant counterexample: I will believe that when I see high-end games selling for ten times what the basic games do, and selling to smaller audiences, because that is the real parallel (no pun intended :)).