
My gripe with all console manufacturers using AMD powered graphics ...

 

What do you think?

Agree: 18 (22.50%)
Disagree: 31 (38.75%)
WTF: 31 (38.75%)
Total: 80
fatslob-:O said:
ICStats said:
Last year nVidia chips used too much power. The 7790, which I think is the closest to what's in the consoles, was at least 1.5X more power efficient than equivalent nVidia GPUs, until this year.

I was pretty sure that Kepler was more efficient perf/watt-wise in terms of gaming performance ...

No, not last year when the hardware for XB1/PS4 would have been pretty fixed.

Radeon HD 7790
 - Released March 2013, MSRP $130, die size 160 sqmm, TDP 85W, GFLOPS 1792, GFLOPS/W 21.1

GeForce GTX 650 Ti Boost
 - Released March 2013, MSRP $170, die size 221 sqmm, TDP 134W, GFLOPS 1505, GFLOPS/W 11.2

Comparatively, the GTX was ~40% bigger, had a ~60% higher TDP, and cost more.

The GTX was slightly faster in real gaming, but slower in some compute tasks.
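
For anyone who wants to check those perf/watt figures, they are just peak GFLOPS divided by TDP (so theoretical peak throughput, not game performance). A quick sketch in Python using the numbers above:

    # Perf/watt above = peak GFLOPS / TDP (theoretical peak, not game performance).
    cards = {
        "Radeon HD 7790":           (1792, 85),   # (peak GFLOPS, TDP in watts)
        "GeForce GTX 650 Ti Boost": (1505, 134),
    }
    for name, (gflops, tdp) in cards.items():
        print(f"{name}: {gflops / tdp:.1f} GFLOPS/W")
    # -> ~21.1 vs ~11.2, i.e. roughly a 1.9x gap on paper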



My 8th gen collection

ICStats said:
fatslob-:O said:
ICStats said:
Last year nVidia chips used too much power. The 7790, which I think is the closest to what's in the consoles, was at least 1.5X more power efficient than equivalent nVidia GPUs, until this year.

I was pretty sure that Kepler was more efficient perf/watt-wise in terms of gaming performance ...

No, not last year when the hardware for XB1/PS4 would have been pretty fixed.

Radeon HD 7790
 - Released March 2013, MSRP $130, die size 160 sqmm, TDP 85W, GFLOPS 1792, GFLOPS/W 21.1

GeForce GTX 650 Ti Boost
 - Released March 2013, MSRP $170, die size 221 sqmm, TDP 134W, GFLOPS 1505, GFLOPS/W 11.2

Comparatively, the GTX was ~40% bigger, had a ~60% higher TDP, and cost more.

The GTX was slightly faster in real gaming, but slower in some compute tasks.

I was comparing the HD 7970 to the GTX 680. Performance is more than GFLOPS! I'm talking about GAME performance. Despite the fact that the 680 and 7970 are roughly neck and neck, the 680 still takes less power.
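
To put rough numbers on that: the TDPs below are the public board figures from memory (not from this thread), and game performance is simply normalized to 1.0 for both cards per the "neck and neck" point, so treat this as a sketch rather than a benchmark.

    # Rough perf-per-watt sketch for the GTX 680 vs HD 7970 comparison.
    # Board TDPs are from memory, not from this thread; game perf normalized to 1.0 for both.
    tdp_watts = {"GTX 680": 195, "HD 7970": 250}
    rel_perf = 1.0
    for name, tdp in tdp_watts.items():
        print(f"{name}: {rel_perf / tdp * 1000:.2f} relative perf per kW")
    # 250 / 195 ~= 1.28, so at equal game performance the 680 comes out ~28% ahead per watt.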



fatslob-:O said:
ICStats said:
fatslob-:O said:
ICStats said:
Last year nVidia chips used too much power. The 7790, which I think is the closest to what's in the consoles, was at least 1.5X more power efficient than equivalent nVidia GPUs, until this year.

I was pretty sure that Kepler was more efficient perf/watt-wise in terms of gaming performance ...

No, not last year when the hardware for XB1/PS4 would have been pretty fixed.

Radeon HD 7790
 - Released March 2013, MSRP $130, die size 160 sqmm, TDP 85W, GFLOPS 1792, GFLOPS/W 21.1

GeForce GTX 650 Ti Boost
 - Released March 2013, MSRP $170, die size 221 sqmm, TDP 134W, GFLOPS 1505, GFLOPS/W 11.2

Comparatively, the GTX was ~40% bigger, had a ~60% higher TDP, and cost more.

The GTX was slightly faster in real gaming, but slower in some compute tasks.

I was comparing the HD 7970 to the GTX 680. Performance is more than GFLOPS! I'm talking about GAME performance. Despite the fact that the 680 and 7970 are roughly neck and neck, the 680 still takes less power.

"Performance is more than GFLOPS! I'm talking about GAME performance."

I did explicitly say the GTX was slightly faster in real gaming...

"680 and 7970 are roughly neck and neck the 680 still takes less power."

The HD 7970 is older tech from 2012: Southern Islands architecture, and it was a huge chip that drew a lot of power.

The HD 7790 is Sea Islands architecture (closer to the PS4), and had better performance per watt than equivalent Kepler GPUs from what I could see.

You can easily see that every chip in the HD 7000 and GTX 600 range has a different GFLOPS/W, depending on clock settings, type and amount of RAM, number of cores, etc. That's why I'm comparing the GPUs whose specs are closest to the PS4/XB1.
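
If it helps, the peak GFLOPS figures themselves just fall out of shader count x clock x 2 (one FMA counts as 2 FLOPs per shader per clock). The shader counts and clocks below are from memory rather than from this thread, so treat them as approximate:

    # Peak GFLOPS ~= shaders * clock_GHz * 2 (one FMA = 2 FLOPs per shader per clock).
    # Shader counts and clocks are quoted from memory and may be slightly off.
    def peak_gflops(shaders, clock_ghz):
        return shaders * clock_ghz * 2.0

    print(peak_gflops(896, 1.000))  # HD 7790 (Bonaire)            -> ~1792 GFLOPS
    print(peak_gflops(768, 0.980))  # GTX 650 Ti Boost, base clock -> ~1505 GFLOPS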

FWIW I'm not like an AMD fanboy... I have a GTX 670 in one PC and HD 7850 in another.  They are not so comparable in performance though.



My 8th gen collection

Eddie_Raja said:
Pemalite said:
GProgrammer said:
With my Graphics Programmer cap on

Tessellation is way down the list of useful GFX tech


I disagree.
Geometry is going to play a massive role this coming generation.
It's much more scalable.


Disagree all you want. Tessellation is way overdone in Nvidia games to make AMD cards look bad. Play BF4: some of the highest polygon counts you will see, and guess what? AMD cards perform way better...

 

AMD cards are just fine at high polygon counts as long as you aren't deliberately trying to sabotage the competition. (It's not like Nvidia is known to do that /s)


Battlefield 4 isn't exactly "pushing" tessellation all that hard, and neither did Battlefield 3 really, not in the direction many people expect anyhow.
Besides, AMD's drivers by default profile tessellation factors per game and apply a setting that trades some quality for speed, so it's not always transparent.

Unigine Heaven is the kind of geometric detail many are probably hoping for this generation; it can look pretty fricken awesome even on modest hardware that's several years old.


Certainly more impressive than the bump-mapped/flat ground we are all used to.
It's simply going to take time for game engines and programmers to catch up; they have been stuck in last-generation land for too long.

lt_dan_27 said:


I think I'll agree with the graphics programmer on this one. 

Good thing I have done "Graphics programming" before too then.
I wrote shaders for Oblivion and Fallout 3 (in order to make them run on Xbox 1-class graphics hardware), and made a 2D sprite-based game with some hardware-accelerated framebuffer effects (still not finished, one day!).
And if I go back to when I was only a child, I wrote my first game in Beginner's All-purpose Symbolic Instruction Code (BASIC) on the Commodore 64: an ASCII-based game where the goal was to fly a plane and not crash it.

fatslob-:O said:

Why would anyone want to store the generated vertices? The point of tessellation is to create more detail in a procedural way. The Endless City demo wouldn't even be possible in the first place because of the large memory overhead of storing over a billion triangles! Your reasons for tessellation being superior on the Xbox One are wrong for the most part, seeing as the performance scales with CLOCKS, which is quite pathetic on AMD's part. How in the hell does a 7870 kick a 7950's ass in TessMark? This also means that even with the higher clocks the Xbox One would only perform about 5% better than the PS4 in TessMark, and if we go to tess factors under 16 the PS4 will most likely take it, since somehow their tessellators do actually respond to those factors.

What AMD seriously needs to do is come up with a truly parallel solution, otherwise it's going to be another embarrassing slaughter on the tessellation front.

You should ask AMD that very same question. :P

The problem is, the geometry engines need large caches to store data to keep everything fed. When AMD moved from a single geometry engine to two of them, it didn't increase the caches, as it was transistor-constrained and thus die-size constrained at the 40nm fabrication process (i.e. the Radeon 69xx series).
So the only option was to spill that data to system RAM or the GPU's RAM, and in practice it was the GPU's RAM, hence why in some games AMD's GPUs may use more memory than nVidia's to run the same game on those particular cards.

But I reiterate: I haven't actually looked into AMD's tessellation improvements in the GCN hardware to any great degree, so I'm not sure if the above still applies.



--::{PC Gaming Master Race}::--

Pemalite said:
fatslob-:O said:

Why would anyone want to store the generated vertices? The point of tessellation is to create more detail in a procedural way. The Endless City demo wouldn't even be possible in the first place because of the large memory overhead of storing over a billion triangles! Your reasons for tessellation being superior on the Xbox One are wrong for the most part, seeing as the performance scales with CLOCKS, which is quite pathetic on AMD's part. How in the hell does a 7870 kick a 7950's ass in TessMark? This also means that even with the higher clocks the Xbox One would only perform about 5% better than the PS4 in TessMark, and if we go to tess factors under 16 the PS4 will most likely take it, since somehow their tessellators do actually respond to those factors.

What AMD seriously needs to do is come up with a truly parallel solution, otherwise it's going to be another embarrassing slaughter on the tessellation front.

You should ask AMD that very same question. :P

The problem is, the geometry engines need large caches to store data to keep everything fed. When AMD moved from a single geometry engine to two of them, it didn't increase the caches, as it was transistor-constrained and thus die-size constrained at the 40nm fabrication process (i.e. the Radeon 69xx series).
So the only option was to spill that data to system RAM or the GPU's RAM, and in practice it was the GPU's RAM, hence why in some games AMD's GPUs may use more memory than nVidia's to run the same game on those particular cards.

But I reiterate: I haven't actually looked into AMD's tessellation improvements in the GCN hardware to any great degree, so I'm not sure if the above still applies.

So I did a bit of snooping in the GCN whitepapers, and the only thing you would need to store is the patch data, which according to Microsoft is relatively small. Another thing the GCN architecture does to alleviate this storage bottleneck is let the patch data spill to the L2 cache, so I'm willing to bet that storing patch data isn't as much of a problem as AMD's own implementation of tessellation. What AMD DOESN'T currently have is the hardware to keep up with the exponential increase in triangles. Instead of just having 2 tessellators that sit outside the compute units or streaming multiprocessors, maybe it would be a better idea to have tinier tessellation engines inside those units. Even if each tessellator doesn't do 1 prim/clk and only does 0.25 prim/clk, it would be a lot easier to handle the explosion of triangles on 8 or so smaller tessellators rather than just 2.

AMD seriously needs to figure out a truly parallel solution. Their pipeline is serialized from the looks of it, and that's probably what's causing the bottleneck.
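
A rough way to see the "few big vs. many small tessellators" point; the numbers here are purely illustrative, not real hardware specs:

    # Peak tessellation throughput ~= tessellator_count * prims_per_clock * clock.
    # Numbers are illustrative only; the argument is about load balancing, not peak rate.
    def peak_prims_per_sec(tessellators, prims_per_clk, clock_mhz):
        return tessellators * prims_per_clk * clock_mhz * 1e6

    few_big    = peak_prims_per_sec(2, 1.00, 800)  # 2 tessellators at 1 prim/clk
    many_small = peak_prims_per_sec(8, 0.25, 800)  # 8 tessellators at 0.25 prim/clk
    print(few_big, many_small)  # identical peak rate...
    # ...but 8 smaller units spread an uneven burst of patches across more queues,
    # so one expensive patch is less likely to stall the whole front end.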

/Edit

Off-topic: Now I know why we don't use tile-based renderers ... The system scales like a bitch once you have more than 512 primitives per 16 x 32 tile. Eventually the benefits of z-buffering, such as being able to do stencil shadowing, start to outweigh the drawbacks of its steep cost. Tile-based rendering wouldn't be feasible anymore for the current-gen games that are coming out, and it would likely have an even more difficult relationship with tessellation than AMD's hardware does now, which is already bad.
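
For a sense of scale on that 512-primitives-per-tile figure, here is a back-of-envelope sketch. It assumes 16x32-pixel tiles and a 1920x1080 render target, both of which are my assumptions rather than anything stated above:

    # Back-of-envelope on the tile-binning limit: assumes 16x32-pixel tiles and a
    # 1920x1080 render target (both assumptions, not stated in the post above).
    import math

    width, height = 1920, 1080
    tile_w, tile_h = 16, 32
    tiles = math.ceil(width / tile_w) * math.ceil(height / tile_h)
    print(tiles)          # 120 * 34 = 4080 tiles
    print(tiles * 512)    # ~2.1M binned primitives if every tile hits the 512 cap
    # Triangles that straddle tile borders get binned into several tiles, so a heavily
    # tessellated frame overflows these bins well before 2.1M unique triangles.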




Honestly I have never had a problem with AMD or their drivers. And for the price of AMD cards you can't really go wrong.



dudeitsminion said:
Honestly I have never had a problem with AMD or their drivers. And for the price of AMD cards you can't really go wrong.

I agree, which is why I'm using an AMD card, but I want them to improve on something.



Pemalite said:
GProgrammer said:
With my Graphics Programmer cap on

Tessellation is way down the list of useful GFX tech


I disagree.
Geometry is going to play a massive role this coming generation.

No it won't.

In all my years as a professional game developer, I've never seen an application that was vertex-setup bound.

Yes, in the 20th century this used to happen, but not for the last 15 years.

 

Tessellation only makes sense if you couple it with a displacement map.
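
A toy, non-shader illustration of that point: tessellating a flat edge only adds vertices sitting on the same flat line until a displacement (height) sample actually moves them. The heightmap values here are made up purely for the example:

    # Toy 1-D example: tessellation alone adds vertices but no shape; displacement
    # from a (made-up) heightmap is what adds the visible detail.
    heightmap = [0.0, 0.2, 0.5, 0.3, 0.0]  # hypothetical displacement samples

    def tessellate_edge(p0, p1, factor):
        """Evenly subdivide the 2-D segment p0->p1 into `factor` pieces."""
        return [(p0[0] + (p1[0] - p0[0]) * i / factor,
                 p0[1] + (p1[1] - p0[1]) * i / factor) for i in range(factor + 1)]

    def displace(verts, heights):
        """Push each vertex along +Y by the matching heightmap sample."""
        return [(x, y + heights[i % len(heights)]) for i, (x, y) in enumerate(verts)]

    flat  = tessellate_edge((0.0, 0.0), (4.0, 0.0), 4)  # more vertices, still a flat line
    bumpy = displace(flat, heightmap)                   # same vertices, now with real shape
    print(flat)
    print(bumpy)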



GProgrammer said:
Pemalite said:
GProgrammer said:
With my Graphics Programmer cap on

Tessellation is way down the list of useful GFX tech


I disagree.
Geometry is going to play a massive role this coming generation.

No it won't.

In all my years as a professional game developer, I've never seen an application that was vertex-setup bound.

Yes, in the 20th century this used to happen, but not for the last 15 years.

 

Tessellation only makes sense if you couple it with a displacement map.

You can't say that tessellation is NOT important. It could actually be the key to making games ultra-detailed, like the precursor to the Geoverse demos.



There's a lot of talk about the performance difference between AMD and Nvidia, but I think the reality is that Nvidia is a very poor company to work with. Microsoft had big issues with Nvidia demanding high royalties even when the original Xbox was near the end of its life. Microsoft had to abandon the Xbox early, and that probably contributed to the huge RROD disaster, as the 360 was released too soon. Since the original Xbox we've now had two generations of Xbox consoles using ATI/AMD graphics: originally paired with a PowerPC CPU, but now with AMD's own CPU.

Sony also had issues with Nvidia and has likewise moved away from them.

Nintendo has again gone the AMD route as well.

The fact is, AMD is a far better company to work with.

Ouya is about the only console manufacturer willing to work with Nvidia nowadays...