
Forums - PC Discussion - AMD FX 8350 VS Intel Ivy Bridge K series for gaming - interesting perspective

This one review site just tested AMD and Intel CPUs with a GTX670 at 1920x1080 with 4AA in 12 modern games:

http://pctuning.tyden.cz/hardware/graficke-karty/25994-vliv-procesoru-na-vykon-ve-hrach-od-phenomu-po-core-i7?start=16

As you can see, the GTX670 is the main limiting component. There are obviously some games like Starcraft 2, WOW and some online multi-player titles that are poorly threaded. In those cases the Intel CPUs will have a very significant advantage. For most people gaming with less than a GTX670/HD7970 (and no plans on going SLI/CF later), the difference between an FX8320 and i5 is not really going to be significant.

i5/i7 @ 4.8GHz vs. FX8320-8350 @ 4.8GHz in terms of power consumption and performance with 2-3 high-end GPUs: well, in those cases the Intel is a no-brainer imo.




Great analysis.

I've never understood the constant bashing AMD gets for their CPUs. Sure, they are behind Intel, but that doesn't make them worthless as some seem to think. Their APUs are fantastic for the average PC and for casual gaming, and the FX CPUs are great too. (I'd prefer an APU over an Intel CPU with Intel HD graphics.)

If only they could get their R&D together and stay on track with Steamroller and Excavator.

I plan on building my next PC next year. I'm still undecided on which CPU vendor I'm going with, although I'm leaning toward an APU or a Steamroller FX.



e=mc^2

Gaming on: PS4 Pro, Switch, SNES Mini, Wii U, PC (i5-7400, GTX 1060)

BlueFalcon said:

I can't see games suddenly using 8 threads in the next 2 years. I see no reason at all for IVB-E for gaming, even if Intel makes it an 8-core offering. But if you use your CPU for other things while gaming, or for more than games, more cores sound nice :)


I agree. For gaming over the next few years, 4 cores is probably plenty; heck, for most people a Phenom II X4 is probably enough for another few years. :P

But I do more than just gaming, and I can use every core I can get, which is why I went with this set-up in the first place. :)
Even while gaming I'm running other CPU-demanding tasks in the background, but I usually shove them onto 2 cores and 6 hyper-threaded "cores" and use the 4 real cores for the game I'm running. When I'm not gaming, 100% of my CPU time across every thread is in use pretty much 24/7, and then I dedicate all 3 of my GPUs to Folding@Home. (No wonder my power bill is high.)
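That kind of core-pinning can be scripted instead of done by hand in Task Manager. A minimal Linux-only sketch (the pin_to_cpus helper is my own illustrative name, not a standard API; on Windows the equivalent is psutil's Process.cpu_affinity()):

```python
import os

# Pin the current process to a chosen set of logical CPUs (Linux-only calls).
def pin_to_cpus(cpus):
    os.sched_setaffinity(0, cpus)    # 0 = the calling process
    return os.sched_getaffinity(0)   # what the kernel actually accepted

available = os.sched_getaffinity(0)       # all CPUs this process may run on
game_cores = set(sorted(available)[:4])   # e.g. reserve the first 4 for a game
print(pin_to_cpus(game_cores))
```

A background task launched with the complementary set would then stay off the game's cores.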

Soleron said:
@Pemalite

ARM chips will not work out for them, because they aren't customising the core, so they'll be selling the same as, or worse than, what everyone else with more experience is selling.

I dismiss GPU compute out of hand. I believe it will never be relevant no matter its theoretical strengths. HSA is the latest in a series of failed AMD initiatives along these lines: Close To Metal, Torrenza, the R6xx tessellator.

ARM chips can work out. SeaMicro has excellent relations with a lot of players in the server industry, so AMD already has a foot in the door.
If you are going with a very slow but energy-efficient architecture like ARM, then it doesn't matter how each individual core performs; what matters is how dense you can make the server. If you want single-threaded performance, go Opteron or Xeon. ARM, after all, is in the same league as the Intel Atom/AMD Brazos, so you can't be too worried about performance in the first place.

As for the R600: it wasn't AMD's first chip to have tessellation. Look back at the Radeon 8000 and 9000 series with their TruForm; you could even set up The Elder Scrolls III: Morrowind (a DirectX 7/8 game) to take advantage of it.
Besides, it was a first step, and it's probably in part thanks to AMD collaborating with nVidia and Microsoft that tessellation became a feature in DirectX 11.
Now we need those consoles to catch up so console ports will feature tessellation more heavily and not look so crap.

disolitude said:


Those Trinity game benchmarks would have been even better had AnandTech OC'd the RAM to 1866 or 2133. Many sites have pointed out that since the APU uses system RAM as GPU memory, overclocking it to 1866 gives the GPU up to 10% better performance compared to 1600.

http://www.legitreviews.com/article/2106/6/


That is true. However, DDR3-1600 memory is the sweet spot in terms of price/performance; it doesn't make sense for someone to spend up big on the fastest DDR3 RAM in a cheap Trinity notebook.

It actually gets me excited about the move to DDR4 soon and what kind of tangible increases it may bring on the IGP front. :)

BlueFalcon said:

Recording videos of their kids at birthday parties and then encoding them, etc.; opening Word documents for basic things; browsing the web with multiple tabs open; viewing high-resolution images from their digital camera/DSLR. In these cases a CPU with more cores and a snappy SSD would provide superior performance compared to an i3/i5 with a mechanical drive.

With Intel's QuickSync (provided it's not a socket 2011 processor) and AMD/nVidia GPUs, encoding can be moved off the CPU.
Then you have browsers, the Windows GUI, video, games, Adobe Photoshop, heck, even WinZip, all offloading processing to the GPU. People think GPU compute won't take off, but it's already been happening for years. :P

Take an old netbook, for instance: the GPU does basically nothing, and even things like 720p YouTube struggle, but grab one with an nVidia Ion or a Broadcom Crystal HD chip and the machine is far more usable as an "Internet" device and for the applications listed above.



--::{PC Gaming Master Race}::--

Pemalite said:

 

disolitude said:


Those Trinity game benchmarks would have been even better had AnandTech OC'd the RAM to 1866 or 2133. Many sites have pointed out that since the APU uses system RAM as GPU memory, overclocking it to 1866 gives the GPU up to 10% better performance compared to 1600.

http://www.legitreviews.com/article/2106/6/


That is true. However, DDR3-1600 memory is the sweet spot in terms of price/performance; it doesn't make sense for someone to spend up big on the fastest DDR3 RAM in a cheap Trinity notebook.

It actually gets me excited about the move to DDR4 soon and what kind of tangible increases it may bring on the IGP front. :)

I've looked around and it seems DDR3-1866 and 1600 hover around the same price; 2133, however, is much more expensive. For a Trinity laptop I wouldn't bother, but for a gaming-oriented Trinity desktop I'd surely get the 1866 memory and mess with timings and voltage until I can get it as high as possible. Timings really don't matter as much as RAM speed for gaming.
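For what it's worth, the 1600-vs-1866 trade-off can be sanity-checked with simple spec arithmetic (theoretical peak only; real games see less of the gain, hence the roughly 10% figure being quoted):

```python
# Back-of-envelope peak bandwidth for a dual-channel DDR3 setup:
# data rate (MT/s) x 8 bytes per 64-bit channel x number of channels.
def peak_gbps(mt_s, channels=2):
    return mt_s * 8 * channels / 1000  # GB/s, decimal units

ddr3_1600 = peak_gbps(1600)   # 25.6 GB/s
ddr3_1866 = peak_gbps(1866)   # ~29.9 GB/s
print(ddr3_1866 / ddr3_1600)  # ~1.17, i.e. ~17% more raw bandwidth
```

So 1866 offers about 17% more theoretical bandwidth, of which an IGP-limited game recovers most, but not all.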

I'm really hoping that AMD is able to offer Richland APUs with CrossFire capability, much like Trinity and Llano. Hopefully, though, they can CrossFire them with GPUs newer than the 6670.

The way I see it...

Richland APU (20-30% better game performance than Trinity) + something like an 8670 = AMD being able to offer a platform for a sub-$500 mainstream gaming PC. And that would be pretty awesome.



nvm



Pemalite said:

Even while gaming I'm running other CPU-demanding tasks in the background, but I usually shove them onto 2 cores and 6 hyper-threaded "cores" and use the 4 real cores for the game I'm running. When I'm not gaming, 100% of my CPU time across every thread is in use pretty much 24/7, and then I dedicate all 3 of my GPUs to Folding@Home. (No wonder my power bill is high.)

MilkyWay@Home, Collatz Conjecture, etc. Those projects are going to net you a ton of BOINC points on AMD cards too.

BTW, if you are already willing to run your GPUs at full load, have you considered Bitcoin mining? I've been mining with my 7970s, and by the time the 8970s launch I should have enough $ for a 'free' upgrade to those. But I suppose it sounds like you genuinely want to help a noble cause :)

Pemalite said:

That is true. However, DDR3-1600 memory is the sweet spot in terms of price/performance; it doesn't make sense for someone to spend up big on the fastest DDR3 RAM in a cheap Trinity notebook.

Sammy 30nm Green DDR3-1600 at 1.35V for $40-50 is a good budget solution. They overclock like mad ;)

http://www.techpowerup.com/reviews/Samsung/MV-3V4G3/6.html

Pemalite said:

With Intel's QuickSync (provided it's not a socket 2011 processor) and AMD/nVidia GPUs, encoding can be moved off the CPU.

See, I think QuickSync sounds great on paper, especially if you just need to encode video for your smartphone or a 1024x768 tablet, but for high-resolution feeds and larger monitors, the image quality of QuickSync, and especially of NV's/AMD's solutions, is pants!

"For the time being, the best option for quick, high-quality video transcoding is unfortunately to buckle down, get yourself a fast CPU, and run the best software encoder you can find (which may be Handbrake)."

http://techreport.com/review/23324/a-look-at-hardware-video-transcoding-on-the-pc

 

There are also good deals on FX8000 series CPUs from time to time. 

Right now Newegg has the FX8150 with a solid watercooling kit for $180. 

http://www.newegg.com/Product/Product.aspx?Item=N82E16819106011&cm_sp=DailyDeal-_-19-106-011-_-Homepage

^^^ That's a ton of value right there if you do Folding@Home, or run things outside of games but can't spend $325 on an i7 3770K and $80 on a Corsair H80i. 

I think AMD's FX series are seriously getting a bad rep. I mean they aren't as good as Intel parts but look at how much cheaper they are! Obviously for someone like yourself with an i7 3930K and 3x 7970s, you aren't the target market for $180-200 CPUs.



Pemalite said:
BlueFalcon said:

...


...

Soleron said:
@Pemalite

ARM chips will not work out for them, because they aren't customising the core, so they'll be selling the same as, or worse than, what everyone else with more experience is selling.

I dismiss GPU compute out of hand. I believe it will never be relevant no matter its theoretical strengths. HSA is the latest in a series of failed AMD initiatives along these lines: Close To Metal, Torrenza, the R6xx tessellator.

ARM chips can work out. SeaMicro has excellent relations with a lot of players in the server industry, so AMD already has a foot in the door.
If you are going with a very slow but energy-efficient architecture like ARM, then it doesn't matter how each individual core performs; what matters is how dense you can make the server. If you want single-threaded performance, go Opteron or Xeon. ARM, after all, is in the same league as the Intel Atom/AMD Brazos, so you can't be too worried about performance in the first place.

As for the R600: it wasn't AMD's first chip to have tessellation. Look back at the Radeon 8000 and 9000 series with their TruForm; you could even set up The Elder Scrolls III: Morrowind (a DirectX 7/8 game) to take advantage of it.
Besides, it was a first step, and it's probably in part thanks to AMD collaborating with nVidia and Microsoft that tessellation became a feature in DirectX 11.
Now we need those consoles to catch up so console ports will feature tessellation more heavily and not look so crap.

disolitude said:


...


That is true. However, DDR3-1600 memory is the sweet spot in terms of price/performance; it doesn't make sense for someone to spend up big on the fastest DDR3 RAM in a cheap Trinity notebook.

It actually gets me excited about the move to DDR4 soon and what kind of tangible increases it may bring on the IGP front. :)

BlueFalcon said:

...

With Intel's QuickSync (provided it's not a socket 2011 processor) and AMD/nVidia GPUs, encoding can be moved off the CPU.

- SeaMicro's tech has nothing to do with CPU silicon. It doesn't work for high-performance servers. Calxeda's fabric does, and they have shipping ARM server products; AMD does not.

- Woah. You just argued that AMD was fine in servers because ARM, but then you say ARM isn't suited for single-thread performance. So you must conclude that Intel cleans up on the low thread count servers, which is in fact most of the market (look at 2-socket vs 4-socket stats).

- One game AMD paid to have it included in is exactly my point. For a tech to have succeeded I want to see multiple vendors use it with no payment by the maker. Paying people to make stuff is the entire history of GPGPU up to now, see Nvidia and everything.

And btw no, that's not why it was a feature in DX11. It was in DX11 because IT WAS IN DX10, but Nvidia couldn't do it so MS pushed it out for them screwing over AMD.

- You're saying an extra $20 on RAM 1600 -> 1866 isn't worth a 10% graphics improvement? Because I'd sure pay for that.

- It's not DDR4 that will improve on-die graphics (look at the slides, DDR4 starts at the same performance and worse power consumption than DDR3). It's on-package memory that Intel will introduce in 2013 (Crystalwell) and that AMD isn't even close to releasing.

- QuickSync is awesome,  but it is anti-GPU compute. GPU compute says, why not use a big hot expensive vector processor to encode your videos? And QuickSync says, but if we add a tiny extra piece of dedicated silicon to the CPU it can beat the GPU's performance, do it in 1/10 the power, and have higher video quality. So isn't the implied future small dedicated SoC logic rather than GPU compute?

Programming for QS: "Encode my video"
Programming for GPUs: worse than the Cell



BlueFalcon said:

MilkyWay@Home, Collatz Conjecture, etc. Those projects are going to net you a ton of BOINC points on AMD cards too.

BTW, if you are already willing to run your GPUs at full load, have you considered Bitcoin mining? I've been mining with my 7970s, and by the time the 8970s launch I should have enough $ for a 'free' upgrade to those. But I suppose it sounds like you genuinely want to help a noble cause :)


If I had the clock cycles to spare I probably would do some extra compute. :)
Hence why I want hardware several multiples faster than what I have! haha

Soleron said:

- SeaMicro's tech has nothing to do with CPU silicon. It doesn't work for high-performance servers. Calxeda's fabric does, and they have shipping ARM server products; AMD does not.

- Woah. You just argued that AMD was fine in servers because ARM, but then you say ARM isn't suited for single-thread performance. So you must conclude that Intel cleans up on the low thread count servers, which is in fact most of the market (look at 2-socket vs 4-socket stats).

- One game AMD paid to have it included in is exactly my point. For a tech to have succeeded I want to see multiple vendors use it with no payment by the maker. Paying people to make stuff is the entire history of GPGPU up to now, see Nvidia and everything.

And btw no, that's not why it was a feature in DX11. It was in DX11 because IT WAS IN DX10, but Nvidia couldn't do it so MS pushed it out for them screwing over AMD.

- You're saying an extra $20 on RAM 1600 -> 1866 isn't worth a 10% graphics improvement? Because I'd sure pay for that.

- It's not DDR4 that will improve on-die graphics (look at the slides, DDR4 starts at the same performance and worse power consumption than DDR3). It's on-package memory that Intel will introduce in 2013 (Crystalwell) and that AMD isn't even close to releasing.

- QuickSync is awesome,  but it is anti-GPU compute. GPU compute says, why not use a big hot expensive vector processor to encode your videos? And QuickSync says, but if we add a tiny extra piece of dedicated silicon to the CPU it can beat the GPU's performance, do it in 1/10 the power, and have higher video quality. So isn't the implied future small dedicated SoC logic rather than GPU compute?

Programming for QS: "Encode my video"
Programming for GPUs: worse than the Cell


In regards to SeaMicro and ARM, you are now putting words in my mouth and completely twisting what I said, so I'll leave the debate there; it's pointless to carry on. I suggest you go back and re-read what I said in context.

About RAM: laptops already come with RAM, and SO-DIMM DDR3-1866 is essentially twice as expensive as SO-DIMM DDR3-1600, yet you don't get twice the performance. Hence why DDR3-1600 is the best price/performance; argue all you want, it's what the numbers and common sense say.

DDR4 is going to be a big improvement; take a look at the JEDEC specs:
2133-4266 MT/s, compared to DDR3's 800-2133 MT/s, with DDR4 voltages sitting around 1.05-1.2 V.
So the cheapest DDR4 sticks will be on par with essentially the fastest and most expensive DDR3 sticks.
One of the best RAM kits you can buy is Samsung's Green memory, and that will at most top out at around 2600-2800 MT/s overclocked; that still doesn't beat DDR4, especially once DDR4 matures and can clock higher than 4266 MT/s.
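Those MT/s figures translate into raw per-module bandwidth with simple spec arithmetic (these are spec-sheet numbers, not benchmarks):

```python
# Peak transfer per 64-bit DIMM: data rate (MT/s) x 8 bytes per transfer.
def dimm_gbps(mt_s):
    return mt_s * 8 / 1000  # GB/s per module, decimal units

for rate in (1600, 2133, 2800, 4266):  # DDR3 sweet spot, DDR3 spec ceiling,
    print(rate, dimm_gbps(rate))       # OC'd DDR3, DDR4 spec ceiling
```

So a top-spec DDR4 DIMM at 4266 MT/s moves roughly 34 GB/s, about double the ~17 GB/s of DDR3-2133.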

As for dedicated memory for IGPs: AMD went down that route with the 790 and 890 chipsets, where they packaged dedicated DDR3 RAM for the IGPs. It was an adequate increase, but it was hampered by the BOM increase, so manufacturers stuck with cheap and slow memory.
It's another good increase for IGPs, but if they ever package the memory on die, it's wasted transistors that could otherwise be dedicated to a beefier IGP or a better CPU.
DDR4 will put off that need for a while.

As for QuickSync, it's part of the GPU; it's just a dedicated pipeline in the video engine, hence why Sandy Bridge-E and Ivy Bridge-E don't get QuickSync: they don't have a GPU.
nVidia and AMD could do the same, but GPUs are increasing in speed at a relatively fast pace and each new generation of cards automatically gets a speed-up, so it's probably best to spend those transistors on more shaders.
Then again, if AMD and nVidia saw a market, they would probably include similar functionality.



--::{PC Gaming Master Race}::--

@Pemalite

I just noticed your PC specs in the sig. That's some impressive stuff... You have to be pushing close to your PSU's power limit with the 3 7970s and overclocks on the CPU (and GPU, I presume). I remember my old rig, which ran dual watercooled, OC'd GTX 580s and an i7 950 OC'd to 4.0GHz, was pulling 900 watts.

This monstrosity... http://gamrconnect.vgchartz.com/thread.php?id=130806

Last time I do GPU watercooling that's for sure. :)



disolitude said:

@Pemalite

I just noticed your PC specs in the sig. That's some impressive stuff... You have to be pushing close to your PSU's power limit with the 3 7970s and overclocks on the CPU (and GPU, I presume).

I actually just realized that after you posted. My 7970s @ 1100-1150MHz are pulling ~480W from the wall at 99% load together; that's about 240W each. Three of those and an uber-overclocked 6-core i7? Hmm... maybe he undervolted them and keeps them at 925MHz at 0.98V or something.
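A rough tally of the numbers being thrown around in this thread (every figure is an estimate from the posts above, not a measurement):

```python
# Back-of-envelope power budget for a triple-7970 rig.
gpu_w  = 240   # per overclocked HD 7970 (~480W observed for a pair)
cpu_w  = 250   # heavily overclocked 6-core i7 under load (my guess)
rest_w = 100   # board, RAM, drives, fans, pump (my guess)

total_w = 3 * gpu_w + cpu_w + rest_w
print(total_w)  # 1070 -> even a 1200W PSU leaves little headroom
```

Undervolting the cards, as speculated above, is about the only way such a rig stays comfortably inside a typical PSU's limit.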