| HappySqurriel said: Being that I'm a fairly good professional software developer with a degree in Pure Mathematics (with a focus on projective geometry and linear algebra) and another degree in Computer Science (with a focus on computer graphics), I think I have a VERY GOOD IDEA of what I am talking about ... The fact of the matter is that most people have unrealistic expectations about the improvements that are possible, given how far games came on the PlayStation and PS2. Back in the early days of 3D graphics, most developers had a very poor understanding of the data structures and algorithms behind a 3D game engine; developers like John Carmack introduced BSP trees, kd-trees, octrees, portals, and countless other more efficient data structures and algorithms at events like the GDC, which drastically improved games. With the PS2, developers finally started to take advantage of licensed game engines and middleware, which (essentially) meant that they began to leverage the skills of far better developers to exploit the processing power of the console. Most developers today are already leveraging licensed middleware produced by some of the most educated and experienced 3D programmers in the world, which means they're already getting very good performance out of these systems. As time goes on and their code becomes more optimized you will see improvements, but it is more likely that the performance increases will be used to produce graphics on the level of Lair at a decent framerate ...
The comment on "running out of time" was simple on purpose ... If you had offered Sony/Microsoft the ability to double the number of cores in their processors, take advantage of a smaller process and increase their clock speeds, or add a physics co-processor (all at no added cost to the system), they would have taken advantage of that technology; this is what I mean when I say they didn't choose the processing power of their systems. |
Congratulations, this post is a lot more intelligent than your previous one. You almost had me fooled there... Moving on, I'm afraid I'm missing the point of the majority of your post. I assume you're saying that the brightest minds are developing middleware, so developers can spend the majority of their resources on the actual development of the game - resulting in only minor performance improvements.
Middleware helps no one if it isn't itself optimized for a particular system. The recent issues developers had with UE3 on the PlayStation 3 are an example of this. Not to mention that even with the same engine, a game could look either like a last-gen title or an impressive one by today's standards. Again, I'm not sure what you were getting at. This could probably be due to my limited comprehension skills, though...
There's nothing indicating that the future visual improvements of the Xbox 360 and PS3 will be minor - especially in the PlayStation 3's case.
Your last paragraph doesn't make much sense though...
Wouldn't any company, including Nintendo? The thing is, both companies were developing the components for their respective systems well in advance, whether it was the Cell processor, FlexIO bus, Xenos, Xenon, RSX, or anything else. Such undertakings are very difficult, as everything has to work together in a closed architecture and meet many criteria that directly relate to price and heat output. I'm sorry, but no matter how you spin this, the statement you made in your original post just doesn't make any sense - at the very least not to me.