soccerdrew17 said:
u dont know moore's law do u? if ninty were to get back on it (basing this on the gamceube, because the wii is not on moore's law) and release in 2010 (yes, thats early), it would about 2-4 times as strong as the ps3 or 360. since im assuming the gc was about 18 months behind the tech curve (xbox level in other words. made for profit not just speed) and it will release in 2013 (i think this will be a long gen). it would be 32x as strong as ps3 or 360. and then it wouldbe stronger since it would have to match the next gen consoles. moore's law: the processor speed will double every 18 months. its held true for a long time and is expected to hold for a minimum of 10 more years, barring a breakthrough. |
Moore's law states that the number of transistors that can be put on a chip inexpensively roughly doubles every two years. You have no idea what you are talking about, do you?
Moore's law is an observation that has proven surprisingly accurate, but it is by no means an actual law. It doesn't deal with the "power" of a CPU, nor does it even deal with clock speed.
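To put numbers on what the law does predict, here's a rough sketch in Python - the starting count and dates are made-up placeholders, not real chip data:

```python
# What Moore's law actually talks about: transistor count doubling roughly
# every two years. Starting figures below are illustrative, not real chips.

def transistors(start_count, start_year, year, doubling_period=2.0):
    """Project transistor count under a strict doubling-every-two-years assumption."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Hypothetical chip with 100 million transistors in 2005.
for y in (2005, 2007, 2009, 2011, 2013):
    print(y, f"{transistors(100e6, 2005, y):,.0f} transistors")
# Nothing in this says anything about clock speed or how "strong" the chip is.
```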
Before you go on some long rant about the power of consoles, please read up on the terms you are referring to. It's nonsensical to talk about processors in terms of speed relative to one another, and even more so when you are trying to compare CPUs of vastly different architectures, instruction sets, and design goals. Saying that processor x must be y times more powerful than processor z because it is newer ignores reality: it ignores the acceptable power envelope, it ignores price, it ignores manufacturer capabilities, it ignores every practical constraint that processor design and manufacturing deal with. Processors are not made to be powerful, they are made to solve the problem at hand. I don't even know how you are judging relative performance (please god tell me you aren't just going by Moore's law), but it is irrelevant in the console market. SPEC marks, FLOPS, MIPS - all of those are close to meaningless when the graphical prowess of a console has so much to do with programming, and the processing power only sets a very wide range of possibilities.
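To show how little a single headline number tells you, here's how a theoretical peak-FLOPS figure gets derived - the core counts and clocks below are generic placeholders, not official specs for either console:

```python
# How a "peak GFLOPS" marketing number is built. Every input below is an
# assumed placeholder, not a real spec sheet.

def peak_gflops(cores, simd_width, flops_per_lane_per_cycle, clock_ghz):
    """Theoretical peak: every SIMD lane retiring its maximum ops every single cycle."""
    return cores * simd_width * flops_per_lane_per_cycle * clock_ghz

# A hypothetical wide-SIMD streaming design vs. a hypothetical general-purpose design.
print(peak_gflops(cores=7, simd_width=4, flops_per_lane_per_cycle=2, clock_ghz=3.2))
print(peak_gflops(cores=3, simd_width=4, flops_per_lane_per_cycle=2, clock_ghz=3.2))
# Both assume perfectly scheduled fused multiply-adds with zero stalls, zero
# branches and zero cache misses - conditions real game code never meets.
```

Real code gets nowhere near either figure, and how close it gets depends almost entirely on the programmer.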
As for Xenon and Cell, they were designed for two completely different workloads. Cell is built around SIMD and is very good at floating point, but pretty ugly at integer work. It thrives on long, linear instruction streams, and therefore is fine with high-latency (but high-bandwidth) RAM such as XDR. Since it is in-order, it can get away with very little L2 cache, as it does little in the way of holding instructions to be reinserted later into the pipeline. That said, cache coherency on Cell is nearly impossible to manage, and if you do need to go back and redo something - you're screwed.
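To make the workload split concrete, here's a toy contrast (plain Python stand-ins, not actual SPE code) between the kind of linear float streaming Cell loves and the kind of branchy integer work it hates:

```python
# Toy illustration only: a long, uniform stream of float math versus
# branchy, data-dependent integer work.
import random

def stream_float_kernel(xs, scale, bias):
    # Same operation on every element, no branches: trivially SIMD-friendly.
    return [x * scale + bias for x in xs]

def branchy_int_kernel(xs):
    # Each step depends on a comparison against the running total:
    # hard to vectorize, hostile to a deep in-order pipeline.
    total = 0
    for x in xs:
        if x > total % 17:
            total += x
        else:
            total -= x
    return total

data_f = [random.random() for _ in range(1000)]
data_i = [random.randrange(100) for _ in range(1000)]
print(stream_float_kernel(data_f, 2.0, 0.5)[:3])
print(branchy_int_kernel(data_i))
```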
Xenon is a pretty simple design, all in all. Take a relatively simple PowerPC core, add SMT, speed it up, shrink it, and hook three of them to a decently sized shared cache over a crossbar for inter-core communication, and voilà - you have Xenon. It's a much more balanced processor, with an advantage in cache coherency (though it's still a bitch), an advantage in integer processing, and SMT to help hide stalls, which makes it better suited to game code, which is not all that predictable.
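Here's a back-of-the-envelope model of why SMT matters on a core like that - the cycle counts and miss rates are invented purely for illustration:

```python
# Toy model: when one hardware thread stalls on a cache miss, the other
# can keep issuing. All numbers below are made up for illustration.

MISS_PENALTY = 50    # stall cycles per cache miss (assumed)
WORK = 1000          # instructions per thread (assumed)
MISS_RATE = 0.02     # fraction of instructions that miss (assumed)

def single_thread_cycles():
    # One instruction per cycle, plus a full stall on every miss.
    return WORK + WORK * MISS_RATE * MISS_PENALTY

def smt_two_thread_cycles():
    # Two threads' worth of work; while one thread is stalled, the other
    # issues, so in the best case stalls hide behind useful work.
    total_work = 2 * WORK
    total_stall = 2 * WORK * MISS_RATE * MISS_PENALTY
    return max(total_work, total_stall)

print("one thread:", single_thread_cycles(), "cycles for", WORK, "instructions")
print("two SMT threads:", smt_two_thread_cycles(), "cycles for", 2 * WORK, "instructions")
```

With these toy numbers, two threads finish twice the work in the same cycle count, because the stall cycles get hidden rather than eliminated.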
As for the comment about the system being better when the processor isn't running - well, no, that's hyperbole. Sure, the XGPU would have more bandwidth and memory management would be easier, but as a whole it's absurd to believe that there would be no need for integer processing (see physics and AI). Besides that, it is a pain in the ass to program a GPU, considering its very limited instruction set, its in-order nature, and the fact that the nonexistent cache means that any piece of work must be redone every time it hits the processor.
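Loose analogy for the no-cache point, with a memoized versus non-memoized function standing in for cached versus recomputed work (this is just an analogy, not how a GPU actually schedules anything):

```python
# Rough analogy only: with a cache, repeated work is paid for once;
# without one, you pay for it on every pass.
from functools import lru_cache

def expensive(x):
    # Stand-in for work that gets redone every time it comes through.
    return sum(i * i for i in range(x))

@lru_cache(maxsize=None)
def expensive_cached(x):
    # Same work, but the result is kept around and reused.
    return sum(i * i for i in range(x))

for _ in range(3):
    expensive(100_000)         # recomputed all three times
    expensive_cached(100_000)  # computed once, then pulled from the cache
```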
As for the comment "count them as even" - that's probably correct, but for none of the reasons you stated. The reality is that whoever the engine designer is will not take the most intricate route and pull out every single corner case they can to gain extra performance out of each architecture, at the expense of easy portability. There is only so much that developers are willing to do, and that would cross the threshold. As it is, in theory the PS3 has more raw horsepower, but drawing it out is damn near impossible, due to the absolutely terrible compilers and the fact that the architecture was not created with gaming in mind. This is important to keep in mind, as raw power is meaningless if it is impossible to tap. The gamecube arguably had the game with the best graphics last gen (RE4), despite a theoretically less powerful processor than the xbox cpu.










