Pemalite said:
fordy said:

You do realise that the first Intel Core was built entirely around a mobile concept, right? Desktop performance considerations weren't added until Conroe. The brand "Core" (actually codenamed Yonah) had nothing to do with Intel's architectural Tick-Tock "Core" phase (which is where the Core 2 started).

http://en.wikipedia.org/wiki/Conroe_(microprocessor)


The first "Core" as in Conroe was heavily based upon Yonah, which in turn was based upon Dothan and Banias (the Pentium M), which in turn was based upon a heavily modified P6 architecture (one that dates back to the Pentium Pro, nearly two decades earlier), with some then-new technology found in the NetBurst architecture (i.e. Willamette/Northwood/Prescott etc.), such as the buses and branch predictors.

If you were around during the gigahertz race, you'll know AMD was always competitive with Intel; starting with the very first Athlon, AMD had the definitive edge in floating point.
It wasn't until Intel moved the L2 cache on-die midway through the Pentium 3's life that Intel actually had a performance edge, and even that lasted only until AMD followed suit and then beat Intel to the 1GHz mark.
Then it was the battle of the Pentium 3 Tualatin and Pentium 4 Willamette against AMD's Thunderbird, which again had the edge. (That said, even the Tualatin was faster than the Willamette.)
The Thunderbird wasn't without its faults, though: it didn't scale well with clockspeed and it ran stupidly hot, but the chips were generally cheaper and could be pencil-modded.
It wasn't until around the Barton-based Athlon that Intel's Pentium 4 started to have an edge over AMD, and that lasted only until the Athlon 64 burst onto the scene and completely wiped the floor with Intel's Pentium 4. AMD followed that up with the Athlon 64 X2, which easily beat Intel's Pentium D and Extreme Edition processors.

Once the Core 2 arrived, AMD began to falter: the original Phenom was plagued with problems like the TLB bug and poor clockspeed scaling, and it was expensive.
AMD did fix the Phenom's issues and brought us the Phenom 2, but in terms of IPC that was essentially only competitive with Intel's prior-generation Core 2 processors; again, though, they were cheap.

So, through pretty much AMD's entire history starting with the original Athlon vs the Pentium 3, and despite far smaller R&D budgets and fabrication deficits, they remained competitive with Intel right up until the Core 2/Phenom era.

Prior to the Pentium 3 and Athlon it was the Pentium 2/Celeron against AMD's K6, which, whilst competitive in some tasks, easily ceded the performance crown to Intel; in terms of price, though, AMD was the better option. (Unless you count Cyrix in too.)
Granted, in the K6 days PCs were expensive; "cheap" and "PC" didn't really go together. AMD, Cyrix and even IBM helped in that regard, as Intel was forced to make a cheap CPU. Enter: the Celeron.

The FX is probably AMD's largest blunder to date. I've seen the FX-9590 selling for over $1,100 AUD here, almost twice as expensive as the Core i7 3930K, which it still doesn't out-perform. However, regular gamers really aren't the target of that chip; it's aimed at overclocking enthusiasts and at companies like Dell, who can advertise "5GHz" to attract buyers. It's actually very similar to Intel's strategy during the Pentium 4 days:
advertise silly clockspeeds to make the average person think the chip is faster. It worked for Intel, so why not AMD?

Actually, AMD's catch-up with Intel on the FPU front came more with the 3DNow! extensions introduced in the K6-2, which boosted performance by introducing floating-point SIMD. The problem is, the Athlon never had the advantage you mentioned. In fact, 19 of the new instructions AMD added to the 3DNow! set for the Athlon were mimics of Intel's SSE instruction set, putting them (according to AnandTech) "on par" with SSE.
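For anyone unfamiliar with what "FP SIMD" buys you: 3DNow! packed two single-precision floats into one 64-bit MMX register and operated on both with a single instruction (e.g. PFADD). This is a toy Python model of that packed-vs-scalar idea, not actual intrinsics:

```python
def pfadd(a, b):
    """Toy model of a packed add like 3DNow!'s PFADD: one 'instruction'
    (here, one call) adds a whole pair of floats at once."""
    return (a[0] + b[0], a[1] + b[1])

xs = [1.0, 2.0, 3.0, 4.0]
ys = [10.0, 20.0, 30.0, 40.0]

# Scalar x87-style code: one add operation per element.
scalar = [x + y for x, y in zip(xs, ys)]

# Packed version: two elements per operation, halving the add count.
packed = []
for i in range(0, len(xs), 2):
    lo, hi = pfadd((xs[i], xs[i + 1]), (ys[i], ys[i + 1]))
    packed.extend([lo, hi])

assert packed == scalar  # same results, half as many add operations
```

Same answer either way; the win is purely in how many instructions the FPU has to issue, which is why SIMD mattered so much for 3D geometry workloads of that era.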

AMD's performance gains over Intel were mainly due to Intel's oversights. For instance, both the P6 and K7 architectures could execute three instructions simultaneously. However, the P6 did so at the level of its RISC-like micro-ops (that micro-op translation being essential in moving x86 toward a more RISC-style core at the time), whereas AMD's handling of the CISC instructions let the K7 sustain that rate on the complex instructions themselves. Intel also took several hits: their Pentium III line proved incredibly difficult to scale to higher clockspeeds (hence the move to NetBurst), and they bet on the expensive RAMBUS memory over DDR SDRAM, with RAMBUS being mandatory for NetBurst in the early stages of its life.

Intel's stupid pursuit of ever-higher clockspeeds with little actual performance gain is what caused their later Pentium 4s to run well over 100W TDP (for the record, recent Haswell desktop chips sit around 65-84W TDP), as well as requiring far more elaborate cooling. This forced Intel to (rightfully) abandon NetBurst and work towards a "performance per watt" platform, which they pursued with Yonah and the first Core (in fact, it gave impressive performance results for a mobile processor, which is why Intel decided to take it further as their next desktop architecture).

AMD can do what they like with advertising clock speeds (in fact, the controversial PR rating they shared with Cyrix was nothing but a marketing stunt, since it never resolved the fact that the number made no sense: in some areas the Athlon excelled, while in others the Intel part performed way better than AMD's theoretical PR number would suggest). But as I said, if NetBurst has taught us anything, it's that clock speeds mean bugger all in the scheme of things. Your claim that it worked for Intel I'd happily debate, considering that Intel had major contracts with HP and Dell at the time, which drove their higher marketshare. On the enthusiast side, gaming PCs back then were mostly Athlons.
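For a concrete sense of how arbitrary the rating was: the formula widely reported by contemporary reviews for Athlon XP model numbers was model = MHz × 1.5 − 500. That's an approximation from period coverage, not an official AMD specification, but it reproduces the shipped model numbers:

```python
def athlon_xp_model(mhz):
    """Commonly cited Athlon XP PR formula (model = MHz * 1.5 - 500).
    Reported by contemporary reviews; treat as an approximation,
    not an official AMD spec."""
    return round(mhz * 1.5 - 500)

# The ~1533 MHz part shipped as the Athlon XP 1800+,
# the ~1667 MHz part as the 2000+.
assert athlon_xp_model(1533.33) == 1800
assert athlon_xp_model(1666.67) == 2000
```

A fixed linear map like this can't capture that relative performance varied wildly by workload, which is exactly the complaint above.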