
Developer explains what it’s like developing for each console from PSOne to X360 - PS3 being the hardest

Cory Bloyd, who works at Munkeyfun studio, has revealed how hard or easy it was to develop for different consoles, ranging from the N64 to the PS3. And you guessed it right: the PS3 was the hardest.
“I’ll add that even though I give Sony a hard time, I really do enjoy pounding on their machines. Sony consoles have always been a challenge. But, if you are willing to work with them instead of against them, they love you back tenfold,” he said.
He revealed these things on Reddit, where he goes by the handle Corysama. You can check out his entire explanation below.
How can something so elegant be so hard to develop for?

PlayStation 1

Everything is simple and straightforward. With a few years of dedication, one person could understand the entire PS1 down to the bit level. Compared to what you could do on PCs of the time, it was amazing. But, every step of the way you said “Really? I gotta do it that way? God damn. OK, I guess… Give me a couple weeks.” There was effectively no debugger. You launched your build and watched what happened.

N64

Everything just kinda works. For the most part, it was fast and flexible. You never felt like you were utilizing it well. But, it was OK because your half-assed efforts usually looked better than most PS1 games. Each megabyte on the cartridge cost serious money. There was a debugger, but it would sometimes have completely random bugs, such as off-by-one errors in the type determination of the watch window (displaying your variables by reinterpreting the bits as the type that was declared just prior to the actual type of the variable; true story).
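
To picture that watch-window bug, here is a tiny, contrived C illustration (plain PC code, not actual N64 code, and the variable names are made up): an off-by-one in type determination means the debugger shows a variable’s bits reinterpreted as the type declared just before it.

#include <stdio.h>
#include <string.h>

int main(void) {
    int   frame_count = 7;     /* declared just before the variable you actually watch */
    float speed       = 1.5f;  /* the variable you add to the watch window */

    printf("frame_count: %d\n", frame_count);

    /* What a correct debugger would display for speed: */
    printf("speed shown as float: %f\n", speed);

    /* What the buggy watch window effectively did: take speed's bits and
       display them using the previously declared type (int). */
    int misread;
    memcpy(&misread, &speed, sizeof misread);
    printf("speed misread as int: %d\n", misread);  /* prints 1069547520 */

    return 0;
}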


Dreamcast

The CPU was weird (Hitachi SH-4). The GPU was weird (a predecessor to the PowerVR chips in modern iPhones). There were a bunch of features you didn’t know how to use. Microsoft kinda, almost talked about setting it up as a PC-like DirectX box, but didn’t follow through. That wouldn’t have worked out anyway. It seemed like it could be really cool. But man, the PS2 is gonna be so much better!

PS2

You are handed a 10-inch-thick stack of manuals written by Japanese hardware engineers. The first time you read the stack, nothing makes any sense at all. The second time you read the stack, the 3rd book makes a bit more sense because of what you learned in the 8th book. The machine has 10 different processors (IOP, SPU1&2, MDEC, R5900, VU0&1, GIF, VIF, GS) and 6 different memory spaces (IOP, SPU, CPU, GS, VU0&1) that all work in completely different ways. There are so many amazing things you can do, but everything requires backflips through invisible blades of segfault. Getting the first triangle to appear on the screen took some teams over a month because it involved routing commands through R5900->VIF->VU1->GIF->GS oddities with no feedback about what you were doing wrong until you got every step along the way to be correct. If you were willing to twist your game to fit the machine, you could get awesome results. There was a debugger for the main CPU (R5900). It worked pretty OK. For the rest of the processors, you just had to write code without bugs.

GameCube

I didn’t work with the GC much. It seems really flexible. Like you could do anything, but nothing would be terribly bad or great. The GPU wasn’t very fast, but its features were tragically underutilized compared to the Xbox. The CPU had incredibly low-latency RAM. Any messy, pointer-chasing, complicated data structure you could imagine should be just fine (in theory). Just do it. But, more than half of the RAM was split off behind an amazingly high-latency barrier. So, you had to manually organize your data into active vs. bulk. It had a half-assed SIMD that would do 2 floats at a time instead of 1 or 4.
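
A minimal sketch of that “active vs. bulk” split, assuming nothing about the real GameCube SDK (the pool names, sizes, and the staging function are invented for illustration): keep hot, per-frame data in the fast pool, park big, rarely touched assets in the slow pool, and copy them into the fast pool before the code that chases pointers through them runs.

#include <stdio.h>
#include <string.h>

/* Stand-ins for the two memory regions; names and sizes are made up. */
static unsigned char active_pool[16 * 1024];  /* "active": low-latency RAM */
static unsigned char bulk_pool[64 * 1024];    /* "bulk": high-latency RAM  */

/* Copy a chunk of bulk data into the fast pool before it gets used heavily. */
static void *stage_into_active(size_t bulk_offset, size_t size) {
    memcpy(active_pool, &bulk_pool[bulk_offset], size);
    return active_pool;
}

int main(void) {
    /* Pretend a 4 KB animation table was parked in the bulk pool at load time. */
    memset(&bulk_pool[8 * 1024], 0xAB, 4 * 1024);

    /* Stage it into the fast pool, then work on it from there. */
    unsigned char *anim = stage_into_active(8 * 1024, 4 * 1024);
    printf("first byte of the staged table: 0x%02X\n", (unsigned)anim[0]);
    return 0;
}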

PSP

Didn’t do much here either. It was played up as a trimmed-down PS2, but from the inside it felt more like a bulked-up PS1. They tried to bolt on some parts to make it less of a pain to work with, but those parts felt clumsy compared to the original design. Having pretty much the full-speed PS2 rasterizer for a smaller-resolution display meant you didn’t worry about blending pixels.

Xbox

Smells like a PC. There were a few tricks you could dig into to push the machine. But, for the most part it was enough of a blessing to have a single, consistent PC spec to develop against. The debugger worked! It really, really worked! PIX was hand-delivered by angels.

Xbox 360

Other than the big-endian thing, it really smells like a PC —until you dug into it. The GPU is great —except that the limited EDRAM means that you have to draw your scene twice to comply with the anti-aliasing requirement? WTF! Holy Crap there are a lot of SIMD registers! 4 floats x 128 registers x 6 register banks = 12K of registers! You are handed DX9 and everything works out of the box. But, if you dig in, you find better ways to do things. Deeper and deeper. Eventually, your code looks nothing like PC-DX9 and it works soooo much better than it did before! The debugger is awesome! PIX! PIX! I Kiss You!
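
That 12K figure checks out if you assume 4-byte floats: 4 floats x 4 bytes = 16 bytes per register, 16 bytes x 128 registers = 2 KB per bank, and 2 KB x 6 banks (presumably one bank per hardware thread across the console’s three cores) = 12 KB of register space.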

PS3

A 95 pound box shows up on your desk with a printout of the 24-step instructions for how to turn it on for the first time. Everyone tries, most people fail to turn it on. Eventually, one guy goes around and sets up everyone else’s machine. There’s only one CPU. It seems like it might be able to do everything, but it can’t. The SPUs seem like they should be really awesome, but not for anything you or anyone else is doing. The CPU debugger works pretty OK. There is no SPU debugger. There was nothing like PIX at first. Eventually some Sony 1st-party devs got fed up and made their own PIX-like GPU debugger. The GPU is very, very disappointing… Most people try to stick to working with the CPU, but it can’t handle the workload. A few people dig deep into the SPUs and, Dear God, they are fast! Unfortunately, they eventually figure out that the SPUs need to be devoted almost full time making up for the weaknesses of the GPU.


Gilgamesh said:
Sorry for the format it was the only way

wut? your enter key isn't working?



 

Face the future.. Gamecenter ID: nikkom_nl (oh no he didn't!!) 

NiKKoM said:

Sorry for the format it was the only way

wut? your enter key isn't working?


Did that a million times; as soon as I post it, the whole thing goes back together.

Thanks for the copy though, I just copy pasted what you had and it came out right.



spurgeonryan said:
Where is his Wii description?

Read the GameCube part, but at 1.5x the speed. 

 

 

Thank you everybody, thank you, now for my next joke..



spurgeonryan said:
Where is his Wii description?


Here:


GameCube
:
I didn’t work with the GC much. It seems really flexible. Like you could do anything, but nothing would be terribly bad or great. The GPU wasn’t very fast, but it’s features were tragically underutilized compared to the Xbox. The CPU had incredibly low-latency RAM. Any messy, pointer-chasing, complicated data structure you could imagine should be just fine (in theory). Just do it. But, more than half of the RAM was split off behind an amazingly high-latency barrier. So, you had to manually organize your data in to active vs bulk. It had a half-assed SIMD that would do 2 floats at a time instead of 1 or 4.


GameCube
:
I didn’t work with the GC much. It seems really flexible. Like you could do anything, but nothing would be terribly bad or great. The GPU wasn’t very fast, but it’s features were tragically underutilized compared to the Xbox. The CPU had incredibly low-latency RAM. Any messy, pointer-chasing, complicated data structure you could imagine should be just fine (in theory). Just do it. But, more than half of the RAM was split off behind an amazingly high-latency barrier. So, you had to manually organize your data in to active vs bulk. It had a half-assed SIMD that would do 2 floats at a time instead of 1 or 4.




Interesting. Imagine if 360 had more RAM and PS3 had a better GPU with SPU debuggers day 1. Holes would be burnt through floors.

I know originally Sony wanted a dual-Cell CPU and no GPU, but everyone says that wouldn't have worked out. What would it have taken to pull that off? Could a PS4 use a low-end GPU but utilize Cell processors (now that they are smaller and cheaper) to create a beast, or is it not possible? It sounds like with proper debuggers the PS3 isn't that hard to work with, except for the GPU/SPE issue. With the PS3 clocking 3.2 GHz, a PS4 running the same but with more CPUs/SPEs and a GPU that can handle what developers need sounds alright. Of course I don't know what I'm talking about, and perhaps no matter how many Cell cores can be utilized it won't stack up against a POWER7? Is it just diminishing returns on Cell? Should Cell stick to cloud computing?



Before the PS3 everyone was nice to me :(

VicViper said:
spurgeonryan said:
Where is his Wii description?



Ahh, it's funny because it's true.

:|



Before the PS3 everyone was nice to me :(

So Microsoft knows how to give developers what they want, Sony headquarters doesn't (and Worldwide Studios steps in to save the day), Nintendo gets pretty close to being ignored, and Sega is an alien that only Microsoft could have understood.

Does that sum it up? Did I miss anything important?
Good job Microsoft. WTF Sony :(

very interesting read.



correct me if I am wrong
stop me if I am bias
I love a good civilised debate (but only if we can learn something).

 

Very interesting.



Sony has a weird way of doing their save files.

They put them in some weird format that makes it nearly impossible to transfer my Skyrim save file from PS3 to PC.



 Been away for a bit, but sneaking back in.

Gaming on: PS4, PC, 3DS. Got a Switch! Mainly to play Smash