fatslob-:O said:

HDMI is a hardware feature built into the GPU's display engine, so it can't be backported unless a vendor overengineered their initial implementation, but that's generally a bad idea since specs can change and it can lead to interoperability issues as well ... 

Okay. I am just going to quote both of these, as they're a bit of a contradiction, and say nothing else, as nothing else is required. (Graphics Core Next is modular.)

fatslob-:O said:

The reason why some of AMD's mobile chips have better video engines than their desktop counterparts is down to the fact that they had an extra ~6 months to bake in more features ... 

***************************************

fatslob-:O said:
Pemalite said:

Can't think of any HDMI 2.1 devices that can do 4K 120Hz correctly. There are TVs and monitors that can do 4K 120Hz by using Y′CbCr with 4:2:2 or 4:2:0 subsampling... So I don't see the importance personally.

But no reason why they can't backport it to an older Graphics Core Next design anyway... Wouldn't be the first time AMD has done that. (Hence why some of their mobile chips had better video engines than their desktop GPUs at some points!)

The Y′CbCr colour space isn't optimal for interactive content such as games. You want full RGB, and that especially applies in the case of HDR content as well ... 

HDMI is a hardware feature built into the GPU's display engine, so it can't be backported unless a vendor overengineered their initial implementation, but that's generally a bad idea since specs can change and it can lead to interoperability issues as well ... 

The reason why some of AMD's mobile chips have better video engines than their desktop counterparts is down to the fact that they had an extra ~6 months to bake in more features ... 

Doesn't need to beat Intel? In an x86 market, it is a "winner-takes-all" competition. If Intel has even just a 5% performance advantage then they win by default! I don't think you realize the reality of just how cutthroat competition can get ... 

If anything, today's reviews of Navi made a strong case for why AMD should wait for their Zen 2 APUs to integrate Navi, since the RX 5700 XT performs like-for-like with the Radeon VII in 1080p gaming. This sort of efficiency is very valuable in the portable space, where lower resolutions are common, which makes Navi significantly more efficient even in comparison to 7nm Vega. Integrating previous graphics architectures isn't ideal from a scalability standpoint because there's still an architectural sore spot regarding lower-resolution graphics performance. Using less power and die area to achieve comparable performance at lower resolutions is optimal for the portable space ...

As for Intel's nodes being "on track", it definitely doesn't look like it from an outside perspective, and we also have no idea if Intel are ever going to use smaller nodes for desktop CPUs ... 

AMD hadn't definitively beaten Intel in the CPU space in 15 years until just now. They survived.
Stars did well until AMD stopped keeping up the cadence as they shifted their focus towards Bulldozer... Which was actually a performance-per-clock regression in many scenarios over Stars... Especially against Thuban with its NB clock pushed to 3GHz. - Their market share tanked as a result, deservedly so.

In fact, for the majority of AMD's history they were behind Intel; even during the K7 years AMD was often behind. - It wasn't until K8 that AMD started to take a lead, however short-lived that was.

As for Navi and the Radeon RX 5700 XT... Anyone who buys those GPUs for 1080P gaming is either a moron or chasing greater than 60fps; 1080P isn't the resolution that showcases the Radeon VII's insane memory bandwidth either... In saying that, Navi does blow out transistor counts somewhat.

Of course Intel are going to use smaller nodes for desktop CPUs. Wow. 10nm isn't where the buck stops.
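
Back on the 4K 120Hz subsampling point quoted above: a quick back-of-the-envelope calculation shows why those TVs fall back to Y′CbCr. This is only a sketch in Python; the 4400x2250 total timing is the common CTA-861 4K figure, and the payload rates are derived from the nominal 18 Gbit/s (HDMI 2.0 TMDS, 8b/10b) and 48 Gbit/s (HDMI 2.1 FRL, 16b/18b) link rates, simplified assumptions rather than exact numbers for any particular display.

```python
# Rough HDMI bandwidth math for 3840x2160 @ 120 Hz.
# Timing assumes the common CTA-861 4K total of 4400x2250 (with blanking).
def data_rate_gbps(h_total, v_total, refresh_hz, bits_per_pixel):
    """Uncompressed video data rate in Gbit/s, before link encoding."""
    return h_total * v_total * refresh_hz * bits_per_pixel / 1e9

PIXEL_FORMATS = {            # bits per pixel at 8 bits per component
    "RGB 4:4:4":    24,      # full chroma resolution
    "Y'CbCr 4:2:2": 16,      # chroma halved horizontally
    "Y'CbCr 4:2:0": 12,      # chroma halved in both directions
}

HDMI_20_PAYLOAD = 14.4       # Gbit/s (18 Gbit/s TMDS minus 8b/10b overhead)
HDMI_21_PAYLOAD = 42.7       # Gbit/s (48 Gbit/s FRL minus 16b/18b overhead)

for name, bpp in PIXEL_FORMATS.items():
    rate = data_rate_gbps(4400, 2250, 120, bpp)
    v20 = "fits" if rate <= HDMI_20_PAYLOAD else "too big"
    v21 = "fits" if rate <= HDMI_21_PAYLOAD else "too big"
    print(f"{name}: {rate:5.1f} Gbit/s  (HDMI 2.0: {v20}, HDMI 2.1: {v21})")
```

4:2:0 at ~14.3 Gbit/s just squeaks under HDMI 2.0's payload, which is why those sets subsample; and since 4:2:0 halves chroma resolution in both directions, it smears coloured text and UI edges, hence the point above about it being a poor fit for games.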

JEMC said:
haxxiy said:

Nah, power efficiency rules the day.

I'm sure that within the next decade, at most, computer parts will become a strong target of power consumption regulations, especially GPUs and servers. So better to start appreciating it right now!

Don't get me wrong, the 3700X is great and it's amazing how it manages to do what it does at just 65W. But, in some of the reviews I've read, these chips seem to be a bit limited, especially when it comes to overclocking, because of their low power limit.

That's why I think that the 3800X, with a higher power limit, could be the best choice, especially for those who want to get as much out of those chips as they can.

I am actually impressed with AMD's chips... Keep in mind the relative costs as well; the AMD chips tend to come in a bracket cheaper than the Intel competition and offer more threads.

I would personally settle for nothing less than the full 16-core chip, but that's just me... Even then I am not sure if dual-channel DDR4 could keep it fed, but I won't know until I see benchies. (Rough bandwidth math below.)
But the 8-core 3700X is nothing to sneeze at either... Especially for $329 USD.
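
For the dual-channel worry, this is the rough math I'm going by. A sketch only: it assumes DDR4-3200 on standard 64-bit channels, and peak theoretical bandwidth rather than what you'd actually sustain.

```python
# Back-of-the-envelope: peak DRAM bandwidth per core on dual-channel DDR4.
channels = 2
transfers_per_sec = 3200e6   # DDR4-3200: 3200 MT/s
bytes_per_transfer = 8       # 64-bit channel width

peak_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"Peak bandwidth: {peak_gb_s:.1f} GB/s")      # ~51.2 GB/s

for cores in (8, 16):
    print(f"{cores} cores -> {peak_gb_s / cores:.1f} GB/s per core")
```

That's roughly 3.2 GB/s per core on a 16-core versus 6.4 GB/s on the 3700X; whether that's enough depends entirely on the workload, hence waiting for the benchies.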



--::{PC Gaming Master Race}::--