
deneidez

Some engines can still carry the same code from the '90s or early 2000s because it just works, and the PS3's way isn't the same as the homogeneous-platform way. Sure, there are more cores, but you need to use them differently.

...

Or are they just more farsighted and realistic by not learning a type of platform that might last only one generation? I mean, there is no guarantee that the PS4 will use Cell or anything like it. I can remember all the fuss about the Emotion Engine and how incredible it was, how it was going to be used forever, how even Saddam supposedly tried to get hold of them, etc. Now we have another generation, with Cell, and the same show has started again. What can you do with the knowledge of optimizing the EE now? Use it to make another PS2 game... :)


You have pointed out the challenge vs. the promise of the PS3 and Amiga architectures. In the mid-'80s I was a huge Amiga fan and remember the period with a smile. That machine was definitely ahead of its time, capability-wise.

The problem was that the architecture wasn't easily scalable the way the PC/Mac architectures were. What the Amiga did with finesse via a very elegant architecture (Jay Miner was a hardware genius), the PC and Mac eventually accomplished via brute force. Had a company other than Commodore owned the technology, the Amiga would likely have lived much longer. Even so, I have no doubt that it would eventually have faded in prominence. The Amiga's unique and well-orchestrated hardware allowed it to do what was then considered amazing but would now be considered mundane. By the '90s, a middle-of-the-road PC could emulate in software all of the Amiga's customized hardware several times over, without breaking a sweat.

The Cell processor seems to follow a philosophy similar to the Amiga design's: provide a significant leap in potential capability by taking a very customized, elegant approach. But it trades ease of programming for raw performance that must be hard won by programmers. The Amiga was fairly easy to program if you wanted to write a VT100 emulator or a very simple game using built-in objects such as sprites, BOBs, etc. (I created both kinds of programs for fun). But the most compelling games on the Amiga required a programmer to dig very deeply into the lower levels of the APIs. That took quite a bit of work, particularly to get the blitter, copper, etc. to do everything a programmer wanted and to get it all to sync up properly. The PS3 is the same way from what I can tell: you can create a simple game fairly easily, but really tapping the hardware can take a lot of work. Using 4 cores on a modern PC isn't much more difficult than using 2 cores, and in the future, using 8 cores probably won't be much more difficult than using 4. Those approaches tend to be scalable.
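
To put that scalability point in concrete terms, here is a minimal, modern-C++ sketch (not PS3/SPE code, and not from the original posts): because the thread count is discovered at runtime, the same loop spreads its work across 2, 4, or 8 homogeneous cores without any redesign.

```cpp
// Minimal sketch: the same code scales across however many homogeneous
// cores the machine exposes, because the thread count is discovered at
// runtime rather than baked into the program.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<float> frame(1000000, 1.0f);   // stand-in for per-frame work
    std::vector<double> partial(cores, 0.0);   // one partial result per worker

    std::vector<std::thread> workers;
    const std::size_t chunk = frame.size() / cores;
    for (unsigned i = 0; i < cores; ++i) {
        const std::size_t begin = i * chunk;
        const std::size_t end = (i + 1 == cores) ? frame.size() : begin + chunk;
        workers.emplace_back([&, i, begin, end] {
            // Each worker sums its own slice of the data.
            partial[i] = std::accumulate(frame.begin() + begin,
                                         frame.begin() + end, 0.0);
        });
    }
    for (auto& t : workers) t.join();

    std::cout << "cores used: " << cores << ", total: "
              << std::accumulate(partial.begin(), partial.end(), 0.0) << "\n";
}
```

Nothing in that code cares whether there are 2 cores or 8; on a heterogeneous design like Cell, the equivalent work split would have to be rewritten around the SPEs' local stores and DMA, which is exactly the extra effort being described.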

If IBM can find a way to scale the Cell architecture so that each successive generation of CPU can be used by programmers without a lot of code redesign, then they might have a shot at making the Cell mainstream. Video cards already do this through very sophisticated APIs and drivers that, to a large degree, hide the inner workings of the GPUs from the application. If a game programmer didn't have DirectX or OpenGL as an abstraction layer, he would have to code part of his app to handle each of the many video cards on the market, and the PC gaming market would suffer as a result. IBM/Sony need to find a way to do the same with the Cell so that the power of the processor can be tapped without having to worry about which flavor/generation of Cell is installed in a device.
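
As a rough illustration of the kind of abstraction layer described above (purely hypothetical names and a trivial stand-in workload, not an actual Cell, DirectX, or OpenGL API): the application codes against one interface, and a runtime-selected backend hides which flavor or generation of hardware actually does the work.

```cpp
// Hypothetical abstraction layer in the spirit of DirectX/OpenGL drivers:
// the game programs against one interface, and the "driver" picks the
// backend, so the app never asks which chip revision is installed.
#include <cstdio>
#include <memory>
#include <vector>

// Interface the application codes against; names are illustrative only.
struct ComputeBackend {
    virtual ~ComputeBackend() = default;
    virtual void transformVertices(std::vector<float>& v) = 0;
};

// Plain-CPU fallback backend.
struct ScalarBackend : ComputeBackend {
    void transformVertices(std::vector<float>& v) override {
        for (auto& x : v) x *= 2.0f;   // trivial stand-in transform
    }
};

// Stand-in for a backend that would offload to SPEs or a newer chip revision.
struct AcceleratedBackend : ComputeBackend {
    void transformVertices(std::vector<float>& v) override {
        for (auto& x : v) x *= 2.0f;   // same result, different execution path
    }
};

// The "driver" decides which implementation to hand back.
std::unique_ptr<ComputeBackend> createBackend(bool acceleratorPresent) {
    if (acceleratorPresent) return std::make_unique<AcceleratedBackend>();
    return std::make_unique<ScalarBackend>();
}

int main() {
    auto backend = createBackend(/*acceleratorPresent=*/false);
    std::vector<float> verts(8, 1.0f);
    backend->transformVertices(verts);
    std::printf("first vertex after transform: %.1f\n", verts[0]);
}
```

The point of the sketch is the shape, not the math: as long as every generation of hardware ships with a backend that honors the same interface, application code keeps working unchanged, which is what the GPU driver model already buys PC games.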