Pemalite said:
Arkaign said:
With an equal GPU generation, DDR3 is always slower than GDDR5, by a fairly huge margin. Look up the benchmarks of the 7750 DDR3 vs the 7750 GDDR5. The performance difference is around 50%.
|
Generation has absolutely nothing to do with it; you fell for the marketing, hook, line, and sinker.
DDR3 at 1333MHz on a 512-bit bus is as fast as GDDR5 at 1333MHz on a 256-bit bus. DDR3 at 1600MHz on a 512-bit bus would thus, by extension, be faster than GDDR5 at 1333MHz on a 256-bit bus.
Obviously you double the clock for DDR3 and quadruple the clock for GDDR5, but DDR3 can be both cheaper and faster than GDDR5 under the right circumstances.
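The arithmetic behind that claim can be sketched like this: peak bandwidth is the effective transfer rate times the bus width in bytes, where DDR3 transfers twice per base clock and GDDR5 four times. A minimal sketch (the function name is mine, and these are peak theoretical figures, not real-world throughput):

```python
def peak_bandwidth_gbs(base_clock_mhz, transfers_per_clock, bus_width_bits):
    """Peak theoretical bandwidth in GB/s: effective MT/s x bytes per transfer."""
    effective_mt_s = base_clock_mhz * transfers_per_clock
    return effective_mt_s * (bus_width_bits // 8) / 1000

# DDR3 at a 1333MHz base clock is double data rate (2666 MT/s effective);
# GDDR5 at the same base clock is quad-pumped (5332 MT/s effective).
ddr3_512bit  = peak_bandwidth_gbs(1333, 2, 512)   # ~170.6 GB/s
gddr5_256bit = peak_bandwidth_gbs(1333, 4, 256)   # ~170.6 GB/s
print(ddr3_512bit, gddr5_256bit)  # identical peak bandwidth
```

Double the bus width and halve the transfers per clock, and the product is unchanged, which is exactly the trade-off being described.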
S.T.A.G.E. said:
MS thought their hardware was up to snuff until February of last year, when Sony revealed the PS4 specs. That's when they ran for the hills in a panic, because everything Sony does scares them for some reason. They jumped on the clock speed and everything else as quickly as they could. It was all over the gaming news. Sony not only backed them into a corner technologically, it was almost like a real-life checkmate. It's definitely a hardware issue, but not as much of one as they are making it out to be. I am sure they are mad at AMD for giving Sony a superior graphics card. There were leaks that MS wasn't happy with AMD when Sony's specs were revealed. They expected AMD to give them the jump on Sony's plans, but AMD kept quiet. MS will need this cloud to work. If it doesn't, Sony proved it was all hardware.
|
Nah. AMD would have offered both companies their entire IP of graphics and central processing units for use in the consoles; it was up to Microsoft and Sony to decide what they wanted.
Essentially, though, because Microsoft wanted 8GB of RAM from the outset, they settled on DDR3 as it's cheap and plentiful, but to make up for the performance deficit of that route they had to add eSRAM. The eSRAM takes up a ton of transistors on the die, which drives up chip complexity and cost, so something had to give, and that happened to be the GPU. Microsoft could have gone with DDR3 on a 512-bit bus, which would have provided enough bandwidth, but that would have driven up PCB complexity and forced the use of a more transistor-hungry memory controller.
Sony, however, were originally going to go with 4GB of RAM because of cost. Before the console's launch, though, higher-density memory became available, which made the upgrade from 4GB to 8GB feasible. It was a gamble that paid off, and Sony should be commended for it: they didn't have to throw a beefier memory controller at the console, they didn't need more memory modules on the motherboard, and it didn't drive up PCB complexity.
Over time, the Xbox's DDR3 is going to get more and more expensive as the PC market shifts over to DDR4 and its derivatives. The PlayStation 4's memory, however, should get cheaper due to economies of scale: with a fast-selling console and low-end PC cards starting to use GDDR5 more and more, production will be scaled up, so all those devices will benefit.
On the flip side, Microsoft can take advantage of newer fabrication processes to cost-reduce its machine.
Essentially, there was no collusion, and no company annoyed at another; one company simply put the puzzle together better than the other this time, which is great for the consumer. Competition is a wonderful thing.
|
No, you're making assumptions by adding possible differences that COULD be there, but AREN'T there, to suit your argument.
All other things being equal (bus width, generation), GDDR5 is decisively faster than DDR3. Saying "oh, DDR3 can be faster than GDDR5" is pointless, because hell, DDR1 at a hypothetical 20GHz on a 1024-bit bus would blow everyone's brains out.
Hypotheticals that are so far from reality are basically worthless.
Fact: the GDDR5 in the PS4 has way more bandwidth for graphics throughput than the DDR3 setup in the XB1. That's just a fact.
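Plugging the commonly cited console figures into the same peak-bandwidth arithmetic (effective transfer rate times bus width in bytes) illustrates the gap. A sketch, assuming the widely reported PS4 GDDR5 rate of 5500 MT/s and XB1 DDR3-2133, both on 256-bit buses, and ignoring the XB1's eSRAM:

```python
def peak_bandwidth_gbs(effective_mt_s, bus_width_bits):
    """Peak theoretical bandwidth in GB/s from effective transfer rate."""
    return effective_mt_s * (bus_width_bits // 8) / 1000

ps4_gddr5 = peak_bandwidth_gbs(5500, 256)  # ~176 GB/s
xb1_ddr3  = peak_bandwidth_gbs(2133, 256)  # ~68.3 GB/s
print(ps4_gddr5, xb1_ddr3, round(ps4_gddr5 / xb1_ddr3, 2))
```

On those assumptions the PS4's main memory has roughly 2.6x the peak bandwidth of the XB1's DDR3 pool.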
Even with the myriad of PC configurations, it's incredibly rare to see a same-gen card with DDR3 even come close to a same-gen card with GDDR5. Why? Simple: GDDR5 costs more, so they don't pair it with gimpy GPUs.
In the SPECIFIC case of the XB1 vs. the PS4, you're dealing with the exact same generation of APU and the same GCN architecture, yet one has a larger GPU and dramatically faster memory for video processing. End of story. No hypothetical will ever change that.
That's not marketing, that's just the chips, and how they fell, along with common sense.
The 7750 DDR3 vs. 7750 GDDR5 variants are the perfect case because you can compare an otherwise identical GPU core with both common types of memory. Apples to apples, which is EXACTLY the point, because guess what: the XB1 and PS4 BOTH HAVE a 256-BIT MEMORY BUS! The 7750 DDR3 variant is also about the fastest DDR3 GPU of all time.
And nobody sane would ever put DDR3 on a 512-bit bus for a GPU in a modern setup: it's extremely expensive to make PCBs that support a bus that wide, and such a product would still be too slow to justify the expense. Hence, only GDDR5 is used for very wide buses at the top tier of GPUs now. DDR2 and DDR3 cards most commonly top out at 128-bit for PC GPUs, with many as low as 64-bit. The XB1 itself is a bit of an outlier with its 256-bit interface, which isn't really a coincidence when you consider that a standard dual-channel setup in a desktop APU (Llano, Richland, etc.) is 128-bit DDR3 (two 64-bit channels); the XB1 effectively doubles that to 256-bit.
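To see how bus width alone scales a DDR3 setup, here's a quick sketch at a DDR3-2133 effective rate across the widths discussed above; again, these are peak theoretical figures, not measured throughput:

```python
def peak_bandwidth_gbs(effective_mt_s, bus_width_bits):
    """Peak theoretical bandwidth in GB/s."""
    return effective_mt_s * (bus_width_bits // 8) / 1000

# DDR3-2133 at the bus widths mentioned above:
for bits in (64, 128, 256):
    print(bits, peak_bandwidth_gbs(2133, bits))
# 64-bit  -> ~17.1 GB/s (typical low-end card)
# 128-bit -> ~34.1 GB/s (common DDR3 card / dual-channel desktop)
# 256-bit -> ~68.3 GB/s (the XB1's doubled-up interface)
```

Each doubling of the bus doubles peak bandwidth, which is why the 256-bit interface is what keeps the XB1's DDR3 viable at all.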