Andir said:
Sqrl said:
@Griffin,
Do you understand the relationship between timings and clock speed for memory? If you do, you should have mentioned that while 3.2GHz is very fast indeed, it only refers to part of the equation. Information about memory timings, especially CAS latency, is crucial to fully understanding the speed of the memory.
Just to explain a bit for folks:
Memory latency is the amount of time that passes between data being requested from the memory and receipt of the first bit. Memory clock speed essentially determines how quickly subsequent bits from the same address are received.
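To make the latency/clock distinction concrete, here is a quick sketch. The timing numbers below are made up purely for illustration; as noted later in the thread, the actual "dirty" timings of the console memory aren't public.

```python
# Two memory configurations can have different clock speeds yet the
# slower-clocked one can win on latency if its timings are tighter.

def first_word_latency_ns(cas_cycles, clock_mhz):
    """Time from request to the first bit arriving, in nanoseconds.

    cas_cycles: CAS latency in memory-clock cycles
    clock_mhz:  memory clock in MHz (one cycle = 1000/clock_mhz ns)
    """
    return cas_cycles * 1000.0 / clock_mhz

# Hypothetical parts: a 400MHz module with CL5 beats a 600MHz module
# with CL9 on first-word latency, despite the lower clock.
print(first_word_latency_ns(5, 400))   # 12.5 ns
print(first_word_latency_ns(9, 600))   # 15.0 ns
```

For workloads dominated by many small, scattered accesses (like the game variables described below), that first-word latency matters more than the raw clock.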
Hopefully it's clear that how the memory will be used matters when deciding what kind of memory you need. In games, a great deal of the memory traffic consists of small variables tracking a great many minor things. There are also a fair number of larger files, to be fair, but they're accessed much less frequently, so their impact is smaller as well.
Now, please realize I am not disagreeing with your assessment that PS3 memory is indeed the faster of the two. I am just trying to make sure that the entire situation is understood before people run off and think they know something based on incomplete information.
And that works both ways. Nobody knows the "dirty" specs for the 360, Wii, or PS3 memory (that I can find). Though with XDR memory, the latency is usually pretty low. The GDDR3 used for the RSX is most likely "run of the mill," comparable to most nVidia-utilized memory. If I were to guess, I'd say the memory in the 360 is probably the same.

The specs we do know:

PS3:
-- 256MB XDR @ 3.2GHz --
<-> to/from Cell @ 25.6GB/s
-- Cell @ 3.2GHz --
~200GB/s EIB that connects the PPU to 7 SPUs
-- Interconnect @ 35GB/s --
<- to Cell @ 20GB/s from RSX
-> from Cell @ 15GB/s to RSX
-- RSX @ 550MHz --
8 vertex shader pipelines
24 pixel shader pipelines
136 shader ops per cycle (24x5 ALU + 8x2 ALU)
100 billion shader operations per second
400-750 million polygons per second *400 triangles (up to 750 using strips, etc.)
4.4 GigaPixel per second fill rate
-- 256MB GDDR3 @ 700MHz --
<-> to/from RSX @ 22.4GB/s

360:
-- 512MB GDDR3 @ 700MHz --
<-> to/from the Xenos GPU @ 22.4GB/s
-- Xenos @ 500MHz --
48 unified shader pipelines
2 shader ops per cycle per pipeline (2 ALU per pipeline)
48 billion shader operations per second
500 million triangles per second *not polygons
4 GigaPixel per second fill rate
<-> on-die dedicated memory "logic" controller @ ~32GB/s
<-> connected to 10MB on-die memory @ 256GB/s *essentially "free" AA up to 720p
-- Interconnect --
<- to Xenos @ 10.8GB/s from CPU
-> from Xenos @ 10.8GB/s to CPU
-- CPU @ 3.2GHz --
3-core general-purpose CPU
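As a sanity check on the quoted bandwidth figures: peak bandwidth is just transfers per second times bus width. The bus widths below (64-bit for XDR, 128-bit for the GDDR3 interfaces) are the commonly cited ones and are assumptions on my part, since the posts above only quote the end results.

```python
# Peak memory bandwidth = transfer rate * bus width.

def peak_gb_per_s(transfer_rate_gt_s, bus_bits):
    """Peak bandwidth in GB/s from transfer rate (GT/s) and bus width (bits)."""
    return transfer_rate_gt_s * bus_bits / 8

# PS3 XDR: 3.2 GT/s on an assumed 64-bit bus
print(peak_gb_per_s(3.2, 64))        # 25.6 GB/s, matching the spec above

# GDDR3 on both consoles: 700MHz clock, double data rate -> 1.4 GT/s,
# on an assumed 128-bit bus
print(peak_gb_per_s(0.7 * 2, 128))   # 22.4 GB/s, matching both consoles
```

So the 25.6GB/s and 22.4GB/s figures are consistent with those assumed bus widths; note these are peak numbers and say nothing about latency.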
I don't think my post was ambiguous at all. It should be clear to anyone capable of understanding the info that I meant all memory. I'm just trying to figure out why you felt it necessary to respond by saying it applies to the 360 as well and then quote specs.
Don't take me the wrong way here; I'm not trying to be condescending or rude. I'm just trying to make sure I understood the point of your post and didn't miss a bigger point you were trying to make.
@MikeB,
While I appreciate, and to a large extent agree, that this guy misstates a number of things, the whole idea that you can simply look at a single game on a console and understand whether or not the console has development issues is just as silly and inaccurate as the things said in this article.
This type of issue isn't clear cut; you can't just point to a single example and say it holds for all scenarios. Different games need to be handled in different ways, and that can completely change how a console's resources are utilized and managed.