joeorc said:
And no, my example is not false. The reason being that IBM created the chips and has done real-world tests. Does that mean engines on the Xbox 360 cannot attain higher performance? No, it does not, but by the same token the same can be said about the PS3. It's the software engine; the technology is just a rough guide to what could be attained. Will they reach their maximums? Probably not. But that does not take away from the fact that IBM stated what they did because of the tests they have done on both systems' processors.

Once again, the APIs were not as well developed in 2005, because the Cell was only unveiled in 2004. That's like saying the engine will stay the same for any processor, yeah, unless you tweak the engine, which we all know Epic and other developers never do. Example: you just stated this:

@selnor "You cannot forget when Cell is doing Graphics it CAN'T be doing normal CPU work. So devs have to be careful how much CPU work time they take away from Cell."

Yes it can, and that's where you are going about it all wrong. "staude" pointed this very same thing out to you in this very same thread. What you relate to PC programming is not the way to look at these types of local-store embedded CPU cores and how they can be developed for. You can program them that way, but your results will be reduced. This is more about memory management than about relying on one large pool of RAM to do everything from. It's a much more in-depth, precise way of development with this type of design, because there are more separate cores to manage, not just in terms of memory but also in terms of what each core will do in any given clock cycle. But that also has problems of its own. Ever heard of "differential signaling"? Example:

The realism of today's games, though, demands far more number crunching than the CPU alone can deliver. That's where the graphics processing unit, or GPU, comes in.
Every time an object has to be rendered on screen, the CPU sends information about that object to the GPU, which then performs more specialized types of calculations to create the volume, motion, and lighting of the object. In the PS3, the Cell and the RSX are connected by a Rambus interface technology which, sure enough, Rambus has given a catchy name: FlexIO. The total bus width between the two chips is 7 bytes: 4 bytes to move data from the Cell to the RSX, 3 to move data in the opposite direction. This setup gives a bandwidth of 20 GB/s outbound from the Cell and 15 GB/s inbound, almost 10 times as fast as PCI Express, an interface standard popular in today's PCs. Thanks to FlexIO, the Cell processor can fling an incredibly large number of triangles to the RSX, and the result is more detail and more objects in 3-D games.

Future gaming consoles will continue to demand ever-faster buses, but how much bandwidth is enough will vary from system to system. For instance, one of the PlayStation 3's main competitors, Microsoft's Xbox 360, released last year, relies on a CPU-GPU bus with a bandwidth of 21.6 GB/s, half in each direction. It's a proprietary interface developed by IBM that runs at 5.4 GHz and relies on differential signaling to maintain signal integrity. It may not be as fast as the PS3's, but Xbox 360 owners don't seem disappointed.

So as you can see, it's not just about the Cell or the Xenon or their respective GPUs; it's about the system as a whole, and your article failed to even go into that part of his take on each system. Now, like I said, it's not about his take, it's about its relevance today.
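The bus-width arithmetic in the quoted article can be sanity-checked in a few lines. A minimal sketch: the 5 GT/s effective transfer rate is an assumption implied by the stated widths and bandwidths, not a figure given in the post.

```python
# Peak bandwidth of an asymmetric link like FlexIO: byte width times transfer rate.
TRANSFER_RATE = 5e9  # transfers per second (assumed effective rate)

def link_bandwidth_gb(width_bytes: int, rate: float = TRANSFER_RATE) -> float:
    """Peak bandwidth in GB/s for a link of the given byte width."""
    return width_bytes * rate / 1e9

outbound = link_bandwidth_gb(4)  # 4 bytes Cell -> RSX
inbound = link_bandwidth_gb(3)   # 3 bytes RSX -> Cell
print(outbound, inbound)  # 20.0 GB/s out, 15.0 GB/s in, matching the article
```

The asymmetry makes sense for graphics: far more data flows from the CPU toward the GPU than back.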
I agree with you 100% here. It is all about the whole system, which is exactly why they are so close in overall ability.
The Cell helps out the RSX a lot as the visuals gain more fidelity. I didn't mean that the Cell can't do anything else while it's doing graphical calculations, but it will have less power available for normal CPU operations.
Whereas the 360 has a more traditional setup, it has a vastly more powerful GPU, so you will never need a CPU doing graphical calculations to get the same game on 360. In fact, the Xenos GPU will take a while to learn to program properly, as it is very new and different from normal GPUs (or certainly was in 2006). Not only does its architecture completely redefine GPUs, but ATI also had a hand in designing the overall memory of the 360.
Once you learn that the Xbox 360 GPU also acts as the system's memory controller, much like the Northbridge in an Intel PC, the picture becomes a bit clearer. ATI has been making and designing chipsets that use GDDR3 RAM for a good while now. Add to this that Joe Macri (go-kart racing fiend extraordinaire), who was a pivotal factor in defining the GDDR3 RAM specification at JEDEC, is also a big fish at ATI, and it only makes sense that ATI could put together one of the best GDDR3 memory controllers in the world. So while it might seem odd that the Xbox 360's PowerPC processor is using "graphics" memory for its main system memory and a "GPU" as the "northbridge," once you see the relationship between the three and the technology being used, it is quite simple. Therefore, we have the 700MHz GDDR3 RAM acting as both system RAM and GPU RAM, connected to the GPU via a traditional GDDR3 bus interface that can channel an amazing 25 gigabytes per second of data.
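The quoted ~25 GB/s figure can be roughly reconstructed with standard double-data-rate arithmetic. A sketch; the 128-bit bus width is an assumption (it is widely cited for the 360 but not stated in the post), and the result lands in the same ballpark as the article's number:

```python
def ddr_bandwidth_gbps(clock_hz: float, bus_bits: int, transfers_per_clock: int = 2) -> float:
    """Peak bandwidth in GB/s = clock rate * transfers per clock * bus width in bytes."""
    return clock_hz * transfers_per_clock * (bus_bits / 8) / 1e9

# 700 MHz GDDR3, double data rate, on an assumed 128-bit bus:
print(ddr_bandwidth_gbps(700e6, 128))  # 22.4 GB/s, close to the article's ~25 GB/s
```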
Smart 3D Memory is the biggest standout and most innovative feature I saw inside the entire Xbox 360. To get an idea of what it looks like, picture any normal modern GPU, something much like a Mobility Radeon X700 chipset, and then imagine that same chipset with a single piece of DRAM sitting off to one side.

Keep in mind, ATI is not a stranger to adding memory to a chipset, but remember that this is “smart” memory.
The Xbox 360's Smart 3D Memory is a relatively small piece of DRAM sitting off to the side of the GPU, yet on the same substrate. The Smart 3D Memory weighs in at only 10MB. Now the first thing that you might think is, "Well what the hell good is 10MB in the world of 512MB frame buffers?" And that would be a good line of questioning. The "small" 10MB of Smart 3D Memory that is currently being built by NEC will have an effective bus rate between it and the GPU of 2GHz. This is of course over 3X faster than what we see on the high end of RAM today.
Inside the Smart 3D Memory is what is referred to as a 3D Logic Unit. This is literally 192 floating-point unit processors inside our 10MB of RAM. This logic unit can exchange data with the 10MB of RAM at an incredible rate of 2 terabits per second. So while we do not have a lot of RAM, we have a memory unit that is extremely capable of handling massive amounts of data extremely quickly. The most incredible feature this Smart 3D Memory delivers is "antialiasing for free," done inside the Smart 3D RAM at high-definition resolutions. Yes, the 10MB of Smart 3D Memory can do 4X multisampling antialiasing at or above 1280x720 resolution without impacting the GPU. Therefore, not only will all of your games on Xbox 360 be in high definition, but they will also have 4XAA applied.
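The bandwidth and capacity figures above can be checked with quick arithmetic. A sketch; the 8-bytes-per-pixel framebuffer format (32-bit color plus 32-bit depth) is an assumption for illustration, not something the article specifies:

```python
# Convert the quoted eDRAM logic bandwidth from bits to bytes.
edram_bits_per_s = 2e12                       # 2 terabits per second (from the article)
edram_gb_per_s = edram_bits_per_s / 8 / 1e9
print(edram_gb_per_s)                         # 250.0 GB/s

# A plain 1280x720 buffer at 4 bytes color + 4 bytes depth per pixel:
frame_bytes = 1280 * 720 * (4 + 4)
print(frame_bytes / 2**20)                    # ~7.03 MiB, which fits inside the 10MB
```

This shows why 10MB is less absurd than it sounds: a single 720p color-plus-depth surface fits, and the multisample resolve work happens at eDRAM speeds rather than over the external memory bus.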
The Smart 3D Memory can also compute Z depths and occlusion culling, and it does a very good job at figuring stencil shadows. Stencil shadows are used in games built on the DOOM 3 engine, such as Quake 4 and Prey.
Now remember that all of these operations are taking place in the Smart 3D Memory, so they will have very little impact on the workload of the GPU itself. You may now be asking yourself what exactly the GPU will be doing.
First off, we reported on page 2 in our chart that the capable "shader performance" of the Xbox 360 GPU is 48 billion shader operations per second. While that is what Microsoft told us, Mr. Feldstein of ATI let us know that the Xbox 360 GPU is capable of doing two of those shaders per cycle. So yes, if programmed correctly, the Xbox 360 GPU is capable of 96 billion shader operations per second. Compare this with ATI's current flagship PC add-in card, and the Xbox 360 more than doubles its abilities.
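The 48-billion and 96-billion figures can be reconstructed from per-cycle throughput. A sketch under stated assumptions: the 500 MHz clock and 48 shader ALUs are widely reported Xenos specs, not numbers given in this thread.

```python
CLOCK_HZ = 500e6    # Xenos core clock (assumed, widely reported)
SHADER_ALUS = 48    # unified shader ALUs (assumed)

def shader_ops_per_s(ops_per_alu_per_cycle: int) -> float:
    """Total shader throughput: clock * ALU count * ops issued per ALU per cycle."""
    return CLOCK_HZ * SHADER_ALUS * ops_per_alu_per_cycle

print(shader_ops_per_s(2) / 1e9)  # 48.0 billion/s, the figure Microsoft quoted
print(shader_ops_per_s(4) / 1e9)  # 96.0 billion/s if each ALU issues twice as much
```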
Now that we see a tremendous amount of raw shader horsepower, we have to take into account that there are two different kinds of shader operations that content developers can program: vertex shaders and pixel shaders. These are really just what they sound like. Vertex shader operations move vertices, which shape the polygons that make up most objects you see in your game, like characters, buildings, or vehicles. Pixel shader operations dictate what groups of pixels do, like bodies of water, clouds in the sky, or maybe a layer of smoke or haze.
In today's world of shader hardware, we have traditionally had one hardware unit to do pixel shaders and one hardware unit to do vertex shaders. The Xbox 360 GPU breaks new ground in that the hardware shader units are intelligent as well. Very simply, the Xbox 360 hardware shader units can do either vertex or pixel shaders quickly and efficiently. Just think of the Xbox 360 shaders as being analogous to SIMD units (Single Instruction, Multiple Data).
The advantage of this would not be a big deal if every game were split 50/50 between pixel and vertex shaders. That is not the case, though. While some games are vertex shader bottlenecked and others are pixel shader bottlenecked, that kind of bottleneck is far harder to hit on the Xbox 360, because the unified units can shift to whichever shader type the game needs most.
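The unified-shader advantage described above can be illustrated with a toy scheduling model. This is a sketch, not actual hardware behavior; the unit counts and workload numbers are made up for illustration.

```python
import math

def cycles_fixed(vertex_ops: int, pixel_ops: int, v_units: int, p_units: int) -> int:
    """Fixed split: vertex units only run vertex work, pixel units only pixel work."""
    return max(math.ceil(vertex_ops / v_units), math.ceil(pixel_ops / p_units))

def cycles_unified(vertex_ops: int, pixel_ops: int, units: int) -> int:
    """Unified: every unit can run either kind of work, so the loads simply add."""
    return math.ceil((vertex_ops + pixel_ops) / units)

# A heavily vertex-bound frame: 900 vertex ops vs 100 pixel ops, 16 units total.
print(cycles_fixed(900, 100, 8, 8))   # 113 cycles; the pixel units sit mostly idle
print(cycles_unified(900, 100, 16))   # 63 cycles; no unit idles while work remains
```

With a fixed split, a skewed frame leaves half the hardware idle; the unified pool finishes in roughly half the time because every unit stays busy.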
So although they both use different methods, you end up with very close overall capability. And that is in large part down to ATI's influence on the memory and the brand-new tech in the GPU.
...come on man, you're reaching.