
My gripe with all console manufacturers using AMD powered graphics ...

 

What do you think?

Agree: 18 (22.50%)
Disagree: 31 (38.75%)
WTF: 31 (38.75%)

Total: 80
Pemalite said:
GProgrammer said:
With my Graphics Programmer cap on

Tessellation is way down the list of useful GFX tech


I disagree.
Geometry is going to play a massive role this coming generation.
nVidia's PolyMorph is much more scalable.


Disagree all you want. Tessellation is way overdone in Nvidia games to make AMD cards look bad. Play BF4. The highest polygon counts you will see, and guess what? AMD cards perform way better...

 

AMD cards are just fine at high polygon counts as long as you aren't deliberately trying to sabotage the competition. (It's not like Nvidia is known to do that /s)



Prediction for console Lifetime sales:

Wii: 100-120 million, PS3: 80-110 million, 360: 70-100 million

[Prediction Made 11/5/2009]

3DS: 65m, PSV: 22m, Wii U: 18-22m, PS4: 80-120m, X1: 35-55m

I guarantee the PS5 comes out only 5-6 years after the launch of the PS4.

[Prediction Made 6/18/2014]


It really doesn't. My 7950 handles it beautifully, probably because it's not running things at 64x tessellation (which is useless). At real-world practical settings, they're basically the same. It makes no sense to use Nvidia graphics for a console at this point; AMD's APUs are perfect for consoles. I don't want to pay more money so some curves in video games can be unnoticeably more round.
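(For context, here's a minimal sketch of the kind of distance-adaptive factor selection a renderer might use; the function name and thresholds are illustrative, not taken from any shipped engine. The point is that practical factors top out well below the 64x hardware maximum.)

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical distance-based tessellation LOD: nearer patches get more
// subdivision, clamped to a practical ceiling well below the 64x hardware max.
float TessFactorForPatch(float distanceToCamera)
{
    const float kNearDistance = 5.0f;   // full detail at or inside this range (assumed)
    const float kFarDistance  = 100.0f; // flat patches beyond this range (assumed)
    const float kMaxFactor    = 16.0f;  // practical cap; 64x buys little visually

    // Normalize distance into [0, 1] and fade the factor from max down to 1.
    float t = (distanceToCamera - kNearDistance) / (kFarDistance - kNearDistance);
    t = std::clamp(t, 0.0f, 1.0f);
    return std::max(1.0f, kMaxFactor * (1.0f - t));
}
```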



Pemalite said:
GProgrammer said:
With my Graphics Programmer cap on

Tessellation is way down the list of useful GFX tech


I disagree.
Geometry is going to play a massive role this coming generation.

nVidia stated that from the NV30 (GeForce FX 5800, aka GeForce 5) to the GT200 (GeForce GTX 280, aka GeForce 10), the geometry performance of their hardware increased only roughly 3x, in stark contrast to the shader performance of their cards, which increased by over 150x. (In other words, the shader-to-geometry throughput ratio widened by roughly 50x, which is why dedicated geometry hardware matters this generation.)


I think I'll agree with the graphics programmer on this one. 



Pemalite said:
fatslob-:O said:

Another conclusion that can be made is that even if the X1 had only 1 tessellator compared to the PS4's possible 2 tessellators, their performance at tessellation factors of 16 and higher would likely be identical for the most part. A better way of getting more detail on AMD GPUs, rather than using tessellation, is to pass a more detailed polygon mesh up front; that would have less impact on performance even accounting for the extra calculations for the animations. As for the Wii U, it's probably using those awful Evergreen tessellation engines. As you can see from the charts above, whose cards are the 5770, 5870, and 6850, all of them have MUCH higher clocks than Latte, which further diminishes its already-bad tessellation performance.

AMD's design is easily less elegant than nVidia's. I'm not sure if AMD still uses GDDR5 memory to store the tessellation data with Graphics Core Next like they did with VLIW4, but that would bring its own performance penalty (higher latency, lower bandwidth).
If anything, the Xbox One could have the advantage over the PlayStation 4 in that regard thanks to the eSRAM, but we will only know for sure in time; it certainly won't make up for the shader, render output, and texture mapping unit deficiencies, though.

Essentially, AMD needs to do a lot of work to "load balance" the geometry, which brings with it a bunch of bottlenecks; nVidia's PolyMorph is much more scalable.

Why would anyone want to store the generated vertices? The point of tessellation is to create more detail procedurally. The Endless City demo wouldn't even be possible in the first place because of the huge memory overhead of storing over a billion triangles! Your reasons for tessellation being superior on the Xbox One are wrong for the most part, seeing as the performance scales with CLOCKS, which is quite pathetic on AMD's part. How in the hell does a 7870 kick a 7950's ass in TessMark? This still means that even with its higher clocks the Xbox One would only perform about 5% better than the PS4 in TessMark, and at tessellation factors under 16 the PS4 will most likely take it, since that is where its extra tessellator would actually respond to those factors.
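(To put a number on that memory overhead, here's a back-of-the-envelope calculation; the 32-byte vertex and unindexed layout are illustrative assumptions, not figures from the Endless City demo.)

```cpp
#include <cstdio>

// Rough storage cost of keeping a billion pre-tessellated triangles resident,
// versus generating them procedurally. Vertex size and layout are assumptions.
int main()
{
    const double triangles      = 1.0e9;
    const double vertsPerTri    = 3.0;   // unindexed worst case
    const double bytesPerVertex = 32.0;  // e.g., position + normal + UV

    const double totalBytes = triangles * vertsPerTri * bytesPerVertex;
    std::printf("~%.0f GB just for geometry\n",
                totalBytes / (1024.0 * 1024.0 * 1024.0));
    // ~89 GB -- far beyond any console's memory, which is why the detail
    // has to be generated on the fly by the tessellator instead of stored.
    return 0;
}
```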

What AMD seriously needs to do is ship a truly parallel solution; otherwise it's going to be another embarrassing slaughter on the tessellation front.



Eddie_Raja said:
This is one of the biggest fail threads I have ever seen. In most games you can turn tessellation down (or off) and not tell the difference without looking really hard. AMD cards are classically better at large textures and AA, which is far more noticeable.

On top of that, AMD was the ONLY option in terms of pricing:
1) Nvidia's CPUs are awful.
2) Intel's graphics are awful.
3) AMD has good CPUs and great graphics.

P.S. The 280X loses to the 270X in your "benchmark." That makes absolutely no sense, so good job finding the most nonsensical benchmark I have ever seen.

 

Eddie_Raja said:
Pemalite said:
GProgrammer said:
With my Graphics Programmer cap on

Tessellation is way down the list of useful GFX tech


I disagree.
Geometry is going to play a massive role this coming generation.
nVidia's PolyMorph is much more scalable.


Disagree all you want. Tessellation is way overdone in Nvidia games to make AMD cards look bad. Play BF4. The highest polygon counts you will see, and guess what? AMD cards perform way better...

 

AMD cards are just fine at high polygon counts as long as you aren't deliberately trying to sabotage the competition. (It's not like Nvidia is known to do that /s)

Why are you damage controlling for AMD so much on the tessellation front? I already knew that AMD was the ONLY option for console manufacturers, but I was expressing one of my concerns. As for why the 280X gets beaten by the 270X: AMD's tessellation engine scales with clock speed rather than shader count, which is quite sad ...

Tessellation is overdone? Wow, you're on a roll, pointing out the culprit so quickly when it's clearly AMD's fault for not producing a truly parallel solution. If anything, AMD is framing itself by putting that AMD-optimized tessellation setting in the driver ...
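(A quick sanity check on that clock-scaling claim, assuming reference boost clocks and assuming tessellation throughput tracks the front-end clock alone; the pure-clock model is a simplification, not a measurement.)

```cpp
#include <cstdio>

// If tessellation throughput tracks front-end clock rather than shader count,
// clock ratios alone predict the odd benchmark orderings. Clocks below are
// reference boost figures; the pure-clock-scaling model is an assumption.
int main()
{
    const double r9_270x = 1050.0; // MHz
    const double r9_280x = 1000.0; // MHz
    const double xboxOne = 853.0;  // MHz
    const double ps4     = 800.0;  // MHz

    std::printf("270X vs 280X: %+.1f%%\n", (r9_270x / r9_280x - 1.0) * 100.0); // +5.0%
    std::printf("X1 vs PS4:    %+.1f%%\n", (xboxOne / ps4 - 1.0) * 100.0);     // +6.6%
    return 0;
}
```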



fatslob-:O said:
Eddie_Raja said:
This is one of the biggest fail threads I have ever seen. In most games you can turn tessellation down (or off) and not tell the difference without looking really hard. AMD cards are classically better at large textures and AA, which is far more noticeable.

On top of that, AMD was the ONLY option in terms of pricing:
1) Nvidia's CPUs are awful.
2) Intel's graphics are awful.
3) AMD has good CPUs and great graphics.

P.S. The 280X loses to the 270X in your "benchmark." That makes absolutely no sense, so good job finding the most nonsensical benchmark I have ever seen.

 

Eddie_Raja said:
Pemalite said:
GProgrammer said:
With my Graphics Programmer cap on

Tessellation is way down the list of useful GFX tech


I disagree.
Geometry is going to play a massive role this coming generation.
nVidia's PolyMorph is much more scalable.


Disagree all you want. Tessellation is way overdone in Nvidia games to make AMD cards look bad. Play BF4. The highest polygon counts you will see, and guess what? AMD cards perform way better...

 

AMD cards are just fine at high polygon counts as long as you aren't deliberately trying to sabotage the competition. (It's not like Nvidia is known to do that /s)

Why are you damage controlling for AMD so much on the tessellation front? I already knew that AMD was the ONLY option for console manufacturers, but I was expressing one of my concerns. As for why the 280X gets beaten by the 270X: AMD's tessellation engine scales with clock speed rather than shader count, which is quite sad ...

Tessellation is overdone? Wow, you're on a roll, pointing out the culprit so quickly when it's clearly AMD's fault for not producing a truly parallel solution. If anything, AMD is framing itself by putting that AMD-optimized tessellation setting in the driver ...


That's just my experience. In Metro: Last Light I turn off tessellation for a massive framerate boost on both my AMD and Nvidia builds. Does it affect AMD more? Maybe, but either way it's better left off in that game.

In Max Payne, tessellation is done correctly and it makes a VERY noticeable difference. The punch line: it makes no performance difference on either of my builds.



Prediction for console Lifetime sales:

Wii: 100-120 million, PS3: 80-110 million, 360: 70-100 million

[Prediction Made 11/5/2009]

3DS: 65m, PSV: 22m, Wii U: 18-22m, PS4: 80-120m, X1: 35-55m

I guarantee the PS5 comes out only 5-6 years after the launch of the PS4.

[Prediction Made 6/18/2014]

Eddie_Raja said:

That's just my experience. In Metro: Last Light I turn off tessellation for a massive framerate boost on both my AMD and Nvidia builds. Does it affect AMD more? Maybe, but either way it's better left off in that game.

In Max Payne, tessellation is done correctly and it makes a VERY noticeable difference. The punch line: it makes no performance difference on either of my builds.

That's probably because Metro: Last Light uses a low tessellation factor ... Until we see games go above a factor of 32, you really can't say that AMD is good enough at tessellation anymore.
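(For reference, here is roughly where such factors get set on the application side in OpenGL 4.x when no tessellation control shader is bound; the constants and the helper name are illustrative.)

```cpp
#include <GL/glew.h> // or any OpenGL 4.x loader

// Pin the tessellation level for patches drawn without a control shader.
// Factors of 8-16 are a typical "visible but cheap" range; TessMark-style
// stress tests push these to 32 or 64, which is where AMD's fixed-function
// front end falls behind.
void SetFixedTessellationLevel(float level)
{
    const GLfloat outer[4] = { level, level, level, level };
    const GLfloat inner[2] = { level, level };
    glPatchParameteri(GL_PATCH_VERTICES, 3); // triangle patches
    glPatchParameterfv(GL_PATCH_DEFAULT_OUTER_LEVEL, outer);
    glPatchParameterfv(GL_PATCH_DEFAULT_INNER_LEVEL, inner);
}
```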



fatslob-:O said:
Jizz_Beard_thePirate said:
I do agree, but wasn't the reason for choosing AMD that Nvidia wanted a lot more money? At least that's what I heard...

That's one reason, but another could probably be attributed to the fact that AMD delivers a nice price-to-performance ratio, which Microsoft and Sony are not blind to.


Another thing is that AMD is the only one who can deliver CPU and GPU IP on a single SoC.

Well, I guess Intel can, but Intel lags in the GPU area, plus is unfriendly to work with. Intel doesn't need consoles; AMD does.

To use an Nvidia GPU, they would have had to use ARM CPUs. Surprisingly, they did consider that pretty strongly, but ARM wasn't quite powerful enough yet.

 



fallen said:
fatslob-:O said:
Jizz_Beard_thePirate said:
I do agree, but wasn't the reason for choosing AMD that Nvidia wanted a lot more money? At least that's what I heard...

That's one reason, but another could probably be attributed to the fact that AMD delivers a nice price-to-performance ratio, which Microsoft and Sony are not blind to.


Another thing is that AMD is the only one who can deliver CPU and GPU IP on a single SoC.

Well, I guess Intel can, but Intel lags in the GPU area, plus is unfriendly to work with. Intel doesn't need consoles; AMD does.

To use an Nvidia GPU, they would have had to use ARM CPUs. Surprisingly, they did consider that pretty strongly, but ARM wasn't quite powerful enough yet.

I wouldn't want Intel GPUs either, but if there's one thing I can give them credit for, it's PixelSync, which allows programmable blending. I would be more interested if they offered a Larrabee-esque solution for the console space.
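(To illustrate what "programmable blending" means here: a minimal sketch using the GL_INTEL_fragment_shader_ordering extension, which exposes PixelSync in OpenGL. The GLSL is embedded as a C++ string, and the blend formula is just an example, not Intel's recommended usage.)

```cpp
// Fragment shader source for custom (programmable) blending: the extension
// serializes overlapping fragments per pixel, so we can safely read-modify-
// write the framebuffer image with any blend math we like.
const char* kProgrammableBlendFrag = R"(
#version 430
#extension GL_INTEL_fragment_shader_ordering : require
layout(binding = 0, rgba8) coherent uniform image2D framebufferImage;
in vec4 srcColor;
void main() {
    ivec2 p = ivec2(gl_FragCoord.xy);
    // Fragments at this pixel execute past here in primitive order.
    beginFragmentShaderOrderingINTEL();
    vec4 dst = imageLoad(framebufferImage, p);
    // Example blend op -- could be anything, not just the fixed-function set.
    vec3 rgb = srcColor.rgb * srcColor.a + dst.rgb * (1.0 - srcColor.a);
    imageStore(framebufferImage, p, vec4(rgb, 1.0));
}
)";
```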



Pibituh said:
Price-to-quality ratio. I think that's the reason.
We all know how expensive Nvidia can be ~

Yes. Also, on the CPU front, I read that Intel doesn't allow custom designs, ARM would not have given enough power, compatibility with PC was nice to have, and building their own CPU would have been far too expensive... so apart from AMD there were not many possibilities. And once you start with an AMD CPU in your console, an AMD APU adds a lot of appeal: fewer chips, better optimization, one provider.