
What can we expect of console graphic hardware performance in 2012?

There are many ways to measure graphics hardware performance, but the two most easily available and well-documented metrics are texel fill rate and bandwidth. While neither measurement is as relevant as it used to be, they nevertheless remain a significant representation of performance progress. So I compiled a few numbers based on the Radeon series of graphics cards and had my Excel sheet kick out the following two graphs.

In 2005 we had graphics hardware doing about 8 gigatexels/sec. In 2008 we have the top-line dual-GPU ATI card doing about 64 GT/s, an eightfold increase. If future developments hug the trend line well, by 2012 texel fill rate should be in the 150 GT/s range, roughly 19 times the 2005 hardware.
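For anyone who wants to check the math, a minimal linear fit through those two endpoints reproduces the projection (a rough sketch in Python; the actual trend line presumably uses more data points than these two):

# Linear extrapolation of texel fill rate through the two endpoints above:
# ~8 GT/s in 2005 and ~64 GT/s in 2008 (top-line dual-GPU ATI card).
years = [2005, 2008]
fillrates = [8.0, 64.0]  # gigatexels per second

slope = (fillrates[1] - fillrates[0]) / (years[1] - years[0])  # ~18.7 GT/s per year
projected_2012 = fillrates[1] + slope * (2012 - 2008)

print(f"Projected 2012 fill rate: {projected_2012:.0f} GT/s")  # ~139, i.e. the 150 GT/s range
print(f"Versus 2005: {projected_2012 / fillrates[0]:.1f}x")    # ~17x, in the ballpark of the 19x above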

Can anyone working for ATI or NVIDIA tell me if this is about right or overly optimistic?

[Graphs: texel fill rate and memory bandwidth for the Radeon series, 2005-2008, with trend lines extended to 2012]

For reference, does anyone know the texel fill rates of the PS2, Xbox, GameCube, PS3, 360, and Wii, so we can compare where we are?



I wouldn't expect much. The Wii concept has worked so well this gen that I expect all 3 competitors to adopt it for the next gen.



@kvs

The texel fill rate for the 360 is on the graph (the 2nd dot, between 2004 and 2006) at about 8 gigatexels per second. The PS3's texel fill rate is about the same, since the graphics chip NVIDIA designed for the PS3 was of the same generation. (Some may argue that, taken by itself, the PS3's graphics chip is much weaker than what was designed for the 360, but that is another discussion entirely.)



I would say console graphics performance in 2012 is going to be whatever devs can put out on the 360/PS3 at the time. New consoles that quickly would be death to pretty much the whole software side of the industry.

Now if you are talking straight-up GPU specs, then I think you are headed in the right direction, but it is nearly impossible to say what NVIDIA or AMD will be able to accomplish nearly 4 years from now. A new way of processing could be developed, the materials used could change, a new manufacturing method could emerge, any of which could send a shockwave through chip performance. Just think: in 2005 GPU makers were working on 90nm production; now, 3 years later, they are using 55nm. By 2009 they will probably be down to 32 or even 24nm production, and that alone will create a ton of potential.
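To put rough numbers on what a shrink alone buys (a back-of-the-envelope sketch, assuming transistor density scales with the inverse square of the feature size, which real processes only approximate):

# Relative transistor density versus a 90nm baseline, assuming
# density ~ (old_node / new_node)^2. Real-world scaling is messier.
baseline_nm = 90
for node_nm in (90, 55, 32, 24):
    density_gain = (baseline_nm / node_nm) ** 2
    print(f"{node_nm}nm: ~{density_gain:.1f}x the 90nm transistor count per mm^2")
# 90nm: 1.0x, 55nm: ~2.7x, 32nm: ~7.9x, 24nm: ~14.1x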



Stop hate, let others live the life they were given. Everyone has their problems, and no one should have to feel ashamed for the way they were born. Be proud of who you are, encourage others to be proud of themselves. Learn, research, absorb everything around you. Nothing is meaningless, a purpose is placed on everything no matter how you perceive it. Discover how to love, and share that love with everything that you encounter. Help make existence a beautiful thing.

Kevyn B Grams
10/03/2010 

KBG29 on PSN&XBL


PS2 -
16 pipelines at 147 MHz - 2.3 gigapixels (a theoretical figure, completely unrealistic)
GC -
4 pipelines with 1 texture unit at 162 MHz - 644 megapixels/megatexels
Xbox -
4 pipelines with 2 texture units at 233 MHz - 932 megapixels, 1.8 gigatexels (hard to believe IMO)
Wii -
4? pipelines with 1-2? texture units at 243? MHz - roughly 1 gigapixel
Xbox 360 -
24 shader units, 16 texture units and 8 vertex units at 500 MHz - 4 gigapixels / 8 gigatexels
PS3 -
24 shader units, 24 texture units and 8 vertex units at 550 MHz - 4.4 gigapixels / 13.2 gigatexels

Here.
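Those theoretical numbers fall straight out of units x clock. A quick sketch against the list above (the 8-ROP figure for the 360 and PS3 is my inference from the gigapixel numbers, not a spec I've verified):

# Peak fill rates = functional units x clock speed. Specs from the list above.
consoles = {
    # name: (pixel pipelines/ROPs, texture units, core clock in MHz)
    "GC":       (4,  4,  162),
    "Xbox":     (4,  8,  233),
    "Xbox 360": (8,  16, 500),
    "PS3":      (8,  24, 550),
}
for name, (rops, tmus, mhz) in consoles.items():
    print(f"{name}: {rops * mhz / 1000:.2f} Gpix/s, {tmus * mhz / 1000:.2f} Gtex/s")
# Xbox 360: 4.00 Gpix/s, 8.00 Gtex/s; PS3: 4.40 Gpix/s, 13.20 Gtex/s -
# matching the figures above to within rounding.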




Assuming no improvement in shader technology (unlikely), the console hardware that will be available in 2011 will probably be at 32nm (yes, Intel claims 22nm by then, but consoles are never cutting-edge PC chips). Since the current chips were designed at 90nm, that gives roughly 8x the performance from transistor count alone ((90/32)^2 ≈ 7.9). So I think console hardware in 2011 could be up to 10x as powerful in terms of raw shader power.

But I think graphics will be much less important, and MS and Sony will choose to have cheaper consoles out of the gate and incorporate their own forms of 1:1 motion controls. That will severely reduce the budget for graphics. So I think the next generation's graphics hardware will be about 4x better. I don't think the difference will be very visible.

MS's Xenos with the 10MB smart eDRAM is pretty revolutionary, and I think you'll see that in future consoles. It's too bad MS didn't use more than 10, though. But being on a separate bus and on the same package as the GPU, it takes the huge bandwidth load of anti-aliasing and other processing off the main bus.
Next-gen consoles are going to launch with a lot less expensive hardware, I think, so less powerful, but with multi-core GPUs I'd think.



Yet, today, America's leaders are reenacting every folly that brought these great powers [Russia, Germany, and Japan] to ruin -- from arrogance and hubris, to assertions of global hegemony, to imperial overstretch, to trumpeting new 'crusades,' to handing out war guarantees to regions and countries where Americans have never fought before. We are piling up the kind of commitments that produced the greatest disasters of the twentieth century.
 — Pat Buchanan – A Republic, Not an Empire

Tyrannical said:
MS's Xenos with the 10MB smart eDRAM is pretty revolutionary, and I think you'll see that in future consoles. It's too bad MS didn't use more than 10, though. But being on a separate bus and on the same package as the GPU, it takes the huge bandwidth load of anti-aliasing and other processing off the main bus.
Next-gen consoles are going to launch with a lot less expensive hardware, I think, so less powerful, but with multi-core GPUs I'd think.

I think AMD would have used the eDRAM in high-end PC chips if it were a performance enhancer. As it is, I think it's a cost-saving measure, since GDDRx is very expensive and overkill for a low-end GPU like the Xbox 360's.

@bolded: What do you mean? GPUs are already as parallel as possible.

 



@haxxiy

My original post is not meant to say which console GPU is better; rather, it is just a projection of what we could be looking at in the future. But I will try to address your misunderstanding on the technical side of things below.

Now, when I said the 360 GPU was more powerful than the PS3 GPU: it would take at least a 20-page essay to point out the technical intricacies of why this is so. The PS3 as a whole, with the aid of the 7 Cell cores, can on paper do nearly twice the 360's floating-point calculations.

But we are talking about just the GPU alone (Cell excluded).

The 360 GPU, despite being released a year earlier, was actually one generation ahead of the PS3 GPU. Why? Microsoft had asked ATI to spend two years designing a unique, from-the-ground-up GPU for the 360, with many novel features for its time. The result was the first unified-shader GPU on the market. The benefit of unified shaders is that they are more efficient than traditional fixed shader pipelines (up to 30% more so, by some experts' reckoning). Another unique feature that greatly enhanced performance was the 10MB smart memory logic that does 4xAA and other special effects at nearly no loss in performance. The 256GB/s bandwidth associated with this memory is worth mentioning: because of it you will see some people claim that the 360 GPU has an effective bandwidth of 278GB/s.
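The arithmetic behind that 278GB/s claim is simple addition of two separate buses (a sketch using the commonly quoted 360 memory specs; treat the combined number as marketing shorthand, since the two buses carry different traffic):

# Where "278 GB/s effective" comes from: main GDDR3 bus + eDRAM-internal bus.
# Assumes the commonly quoted 360 specs: 128-bit GDDR3 at 1.4 GT/s effective.
main_bus = (128 / 8) * 1.4   # 16 bytes/transfer x 1.4 GT/s = 22.4 GB/s
edram_internal = 256.0       # between the eDRAM and its AA/blend logic
print(f"Main memory: {main_bus:.1f} GB/s")
print(f"Combined 'effective': {main_bus + edram_internal:.1f} GB/s")  # ~278 GB/s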

The importance of this extra bandwidth and the smart memory logic in the 360 GPU cannot be overstated. It is this design decision that helped make nearly every multiplatform game look smoother around the edges on the 360, while PS3 programmers have to employ all kinds of tricks, including reducing resolution, to get any kind of acceptable AA performance.
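To see why AA eats so much bandwidth on a conventional bus, here is an illustrative back-of-the-envelope for 4xMSAA at 720p (all figures assumed for illustration, not measured):

# Rough framebuffer traffic for 4xMSAA at 1280x720, 30fps.
# Assumes 4 bytes color + 4 bytes depth per sample, each sample written
# once, plus a resolve pass that reads every color sample back.
width, height, fps, samples = 1280, 720, 30, 4
bytes_per_sample = 4 + 4                             # 32-bit color + 32-bit Z
writes  = width * height * samples * bytes_per_sample * fps
resolve = width * height * samples * 4 * fps         # color reads for the resolve
print(f"~{(writes + resolve) / 1e9:.1f} GB/s")       # ~1.3 GB/s before overdraw,
# blending, and Z reads multiply it several-fold - a big bite out of a
# 22.4 GB/s bus that also has to feed textures, vertices, and the CPU.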

For whatever reason, not until very late in the system's design did Sony commission NVIDIA to design a GPU for them. And at that time Sony did not think the Blu-ray issue would delay the PS3 by a year. So effectively NVIDIA had less than a year to design the PS3's GPU. That was not enough time to do anything revolutionary, so they just took a GeForce 7800 GTX and molded it into something that would work with the Cell. The lack of design time (ATI had nearly two years to do a from-the-ground-up design) severely limited the potential of the PS3's GPU, which feels like an afterthought rather than a perfect part of the whole.