
Understanding why Xbox One doesn't need GDDR5 RAM

HoloDust said:
disolitude said:
Guys guys...GDDR5, the Cell...no need to continue arguing about that.

My point here is this...

The Xbox One has an AMD GPU with 768 cores, which puts it between the HD 7770 and HD 7790 in terms of power.
Those cards come with GDDR5 RAM, and the Xbox One has higher memory bandwidth than both. It simply doesn't need GDDR5 to make the most of the tech it has under the hood.


Actually, it's most likely based on Bonaire (the 7790, which is more similar to Pitcairn and Tahiti than to Cape Verde), with 2 fewer CUs. Before launch, the 7790 was supposed to come with a 256-bit bus, but production cards ended up with a 128-bit bus and lower bandwidth (92GB/s). Though on paper it looks like a 7770 equivalent, the XOne's better memory subsystem should, as you already noticed, most likely put it between the 7770 and the 7790.

Still:

"With graphics, the first bottleneck you’re likely to run into is memory bandwidth. Given that 10 or more textures per object will be standard in this generation, it’s very easy to run into that bottleneck," Cerny said. "Quite a few phases of rendering become memory bound, and beyond shifting to lower bit-per-texel textures, there’s not a whole lot you can do. Our strategy has been simply to make sure that we were using GDDR5 for the system memory and therefore have a lot of bandwidth."

Another advantage of GDDR5 (at that bandwidth) seems to come from a different perspective:

"One thing we could have done is drop it down to 128-bit bus, which would drop the bandwidth to 88 gigabytes per second, and then have eDRAM on chip to bring the performance back up again," said Cerny. While that solution initially looked appealing to the team due to its ease of manufacturability, it was abandoned thanks to the complexity it would add for developers. "We did not want to create some kind of puzzle that the development community would have to solve in order to create their games. And so we stayed true to the philosophy of unified memory."

Not sure how much this will influence 3rd party devs, but it seems that PS4's approach is simpler to use from the start - this will probably not affect bigger devs, but it can help smaller and indie devs making console games for the first time.

Someone, somewhere (I think it was actually here on VGChartz) made a good observation about all this - DDR4 was late and GDDR5 was too expensive, so MS decided to go with a DDR3 + enhancements approach. Sony made a gamble - and it paid off.

IMO, the most interesting thing in the PS4's architecture is not the GDDR5 but the enhancements built into the GPU - a direct bus from GPU to CPU that bypasses the caches, reducing synchronization issues between the two, and significantly reduced overhead for running compute and graphics tasks at the same time, thanks to two additional GPU enhancements.

"There are many, many ways to control how the resources within the GPU are allocated between graphics and compute. Of course, what you can do, and what most launch titles will do, is allocate all of the resources to graphics. And that’s perfectly fine, that's great. It's just that the vision is that by the middle of the console lifecycle, that there's a bit more going on with compute."

Full article @ http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php

Pretty interesting stuff...

Microsoft will probably provide an API/SDK which automatically allocates resources to the eSRAM as needed, to remove the pain of devs doing it manually - kind of like what Grand Central Dispatch did for Macs and multicore processing allocation...

I don't want people to think that I am knocking the PS4 here. I'm sure it's a well-designed, balanced console as well.




What I read:

X1 doesn't need better memory because it's not powerful enough to utilize said memory.




Ok.
As the rock-stupid lowlife idiot that I am, can anyone please explain to me: will a top-notch game on the XO look noticeably better than a visual masterpiece on the 360, and if so, how much better?
I have seen some screenshots and footage so far, but I have mixed feelings about next-gen visuals. Don't get me wrong, but neither Killzone: Shadow... nor Titanfall really knocked my socks off. Maybe I shouldn't expect superior visuals; maybe I should expect better AI, little or no loading time, and more moving, reacting, destructible, physically correct objects?
Please, anyone? Anyone? Bueller?



Two flaws:
1. The eSRAM is huge and actually makes the APU quite a bit larger than the PS4's, even though the PS4 has a better GPU. The One's APU will always be larger and more expensive because of the eSRAM.
2. Cramming 5 GB of data - let's say 2 GB is being drawn at any given time - through 32 MB of RAM is a headache-inducing problem. This will cause a lot of issues and will actually lower performance, since the slower DDR3 will be used more often than not.

HoloDust pretty much covered everything I typed, but there isn't some magic API that Microsoft can create for memory allocation. The best they've managed to come up with is a stack (data is actually allocated via a memory stack by the OS); there's very little you can automate when deciding what should or shouldn't go in the eSRAM. You can document tips to get the most out of it, but there is no easy way around it. Most devs probably already know what to do, since the PS2 only had eDRAM for a framebuffer, and the GameCube, Wii and 360 used embedded memory too. The eSRAM is quite tiny compared to the system RAM, though, so it's actually even worse (32 MB of main RAM for 4 MB of eDRAM on PS2, 40 MB for 3 MB of embedded memory on GC, 512 MB for 10 MB of eDRAM on 360, 5 GB for 32 MB of eSRAM on Xbox One).
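
For illustration only - a toy sketch, not anything from Microsoft's actual SDK - of why this stays a manual judgment call: even a dead-simple greedy packing of render targets into the 32 MB budget depends on the developer ranking each buffer by how much traffic it sees per frame, and only the developer knows that.

```python
# Toy sketch (hypothetical, not Microsoft's SDK): greedily pack the most
# bandwidth-hungry render targets into a 32 MB fast-memory budget and spill
# the rest to main DDR3. Real engines also juggle aliasing and lifetimes,
# which is why an OS can't fully automate the decision.
ESRAM_BUDGET = 32 * 1024 * 1024  # bytes

def bytes_for(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel

# (name, size in bytes, rough accesses-per-frame weight chosen by the developer)
render_targets = [
    ("color_1080p_rgba8", bytes_for(1920, 1080, 4), 10),
    ("depth_1080p_d32",   bytes_for(1920, 1080, 4),  8),
    ("gbuffer_normals",   bytes_for(1920, 1080, 8),  6),
    ("shadow_map_2048",   bytes_for(2048, 2048, 4),  3),
]

def place(targets, budget):
    in_fast, in_ddr3, used = [], [], 0
    # Highest traffic-per-byte first: a developer heuristic, not a rule.
    for name, size, weight in sorted(targets, key=lambda t: t[2] / t[1], reverse=True):
        if used + size <= budget:
            in_fast.append(name)
            used += size
        else:
            in_ddr3.append(name)
    return in_fast, in_ddr3, used

esram, ddr3, used = place(render_targets, ESRAM_BUDGET)
print("eSRAM:", esram, f"({used / 2**20:.1f} MB used)")
print("DDR3 :", ddr3)
```

Swap in different weights or a different resolution and the placement changes, which is exactly the kind of per-game knowledge an OS-level allocator doesn't have.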



I made an account just to show people some benchmarks of how much faster RAM affects the A8/A10 APUs out right now. If the APUs in the next-generation systems are anything like the A10, faster RAM will be a HUGE improvement over tighter-timed yet horridly slow DDR3 RAM.

Nothing is out yet, so at the end of the day it is still wait and see. That said, if this does translate to consoles, we will probably (almost certainly, IMO) see a difference in texture load times, draw distance, texture quality at a distance, AA and AF, among other RAM-intensive settings, at the very least from first-party devs.

I would even put money on the PS4 being the lead platform for the next generation.

http://us.hardware.info/reviews/4372/mini-review-the-effect-of-memory-speed-on-amd-trinity-apus

http://www.youtube.com/watch?v=z0Z7H1PYUkk




First off, the AMD HD 7770 is a low-end card and the HD 7790 is an overclocked version of the HD 7770; however, both come with GDDR5 RAM as standard.

The Xbone has the equivalent of an HD 7770 - already a comparatively slow GPU even with GDDR5 memory - but Microsoft decided to make a slow GPU even slower with DDR3.

DDR3 is old and slow by today's GPU standards; it's not even close to the speed of GDDR3, either.

DDR3 is great and fast for system memory - the operating system, multitasking and whatnot - but it is not up to standard for GPU-intensive graphics at higher resolutions and higher settings. It would be much harder for DDR3 memory to render things quickly while keeping the frame rate above 30 at 1080p without pop-in, and it is less likely to reach 60 without cutting corners (lower settings, sub-720p resolution) to keep the game looking good and playable.

It doesn't matter how people try to spin it: clock for clock and bandwidth for bandwidth, DDR3 is years behind GDDR5's design architecture - as far back as the GPUs that shipped with DDR3 in 2006-2008.

For example: 2 GB of GDDR5 clocked at 800 MHz vs. 2 GB of DDR3 clocked at 1200 MHz - guess which is faster? Clock speed alone doesn't decide it; the architectures of these memory types are like night and day.
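
To put rough numbers on that (a back-of-the-envelope sketch: the formula is the standard peak-bandwidth calculation, while the 800 MHz / 1200 MHz configurations are just the illustrative figures from the example above, not any particular product):

```python
# Peak theoretical bandwidth = effective data rate (MT/s) x bus width / 8.
def peak_gb_s(mt_per_s, bus_bits):
    return mt_per_s * 1e6 * bus_bits / 8 / 1e9

# GDDR5 moves 4 data transfers per command clock, DDR3 moves 2 per bus clock,
# so a "slower-clocked" GDDR5 part still posts a far higher data rate.
print(peak_gb_s(800 * 4, 256))   # 800 MHz GDDR5 on a 256-bit bus -> ~102 GB/s
print(peak_gb_s(1200 * 2, 128))  # 1200 MHz DDR3 on a 128-bit bus -> ~38 GB/s

# The consoles' announced figures follow the same formula:
print(peak_gb_s(5500, 256))      # PS4: 5500 MT/s GDDR5, 256-bit  -> 176 GB/s
print(peak_gb_s(2133, 256))      # XB1: 2133 MT/s DDR3, 256-bit   -> ~68 GB/s (plus the 32 MB eSRAM)
```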

The PS4 has the equivalent of an HD 7870 with GDDR5 memory; it's a mid-range card and a great performer. I wouldn't compare the two, as that wouldn't be fair.

Even card against card, leaving the memory aside, it wouldn't be fair to put the 7770 up against the 7870. It would be a joke to compare these two GPUs as they stand now: HD 7770 with DDR3 vs. HD 7870 with GDDR5.

No matter how Microsoft or anyone else tries to spin it, game developers and programmers know that faster is better. I guess Microsoft never even considered including at least GDDR3 for its GPU; it's still slower than GDDR5, but I assure you it can run circles around DDR3 memory.

And to think Microsoft will carry the XBone for the next 7 to 10 years.




Cleary397 said:
disolitude said:

cheaper manufacturing cost.

 

Is that why the launch price is higher?


Just a joke, but disolitude probably should have pointed out that DDR3 has been out for years, and as its market share is expected to drop next year and the year after, its production price should increase.

Meanwhile, GDDR5 is a newer standard and should gain market share; as it does, expect its cost to decrease.



petalpusher said:
There's no latency advantage with DDR3, it's a myth. In fact, GDDR5 has better latency.

This is of course false. "Latency" is a buzzword many use without knowing what it really means or where it comes from. It has even been abused by some here to explain why the Xbox One can switch from watching TV to gaming to Skyping to you-name-it in an instant. That has absolutely nothing to do with memory latency whatsoever (in essence, having everything on one HDMI in/out channel removes the copy-protection handshaking required when switching between different HDMI inputs); it is simply a task-switching feature of the OS (obviously MS programmers knew what they needed to do).
The memory latency difference between DDR3 and GDDR5 comes from two contributions:

1. Latency from telling the memory what to do

2. Latency from actually doing what you want (basically read or write a buffer line)

Point 2 is easily explained: there is no difference between DDR3 and GDDR5 memory, because both access the same DRAM cell construction, so this latency is identical.

Point 1 is more favourable to DDR3 memory. Telling a GDDR5 chip what to do is a considerably more complicated affair that takes more clock cycles than commanding a "simple" DDR3 chip, so the latency from point 1 is always higher with GDDR5 than with DDR3 (but nowhere near the 8-10 times purported in some articles). How much more time is needed depends on how the memory controller works and whether various power-saving features are enabled in the GDDR5 chips. GPUs in PCs always use memory-controller settings that go for maximum bandwidth at the cost of worse latency (because latency in a pure GPU system is almost irrelevant).
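
For a sense of scale (the cycle counts below are illustrative guesses, not the consoles' actual timings): absolute command latency is cycles divided by the command clock, so needing more cycles on a faster clock does not balloon into anything like an 8-10x gap in wall-clock time.

```python
# Convert a command latency from clock cycles to nanoseconds.
# Illustrative numbers only - not PS4 or Xbox One timings.
def latency_ns(cycles, clock_mhz):
    return cycles / (clock_mhz * 1e6) * 1e9

print(latency_ns(11, 800))   # DDR3-1600 CL11-style timing: ~13.8 ns
print(latency_ns(24, 1375))  # hypothetical GDDR5 timing with more cycles: ~17.5 ns
```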

At this point, we have no idea how the memory controllers are designed/set up in the PS4, or whether power-saving features are enabled in the GDDR5 chips. What we know from M. Cerny is that the major latency obstacle in PCs (getting something from CPU/GPU to GPU/CPU over the PCIe bus) has been completely eliminated in the PS4 design with a direct command bus. We also know there are cache controllers in the dual Jaguar setup that can handle the GDDR5 interface, so we can assume the engineers who designed the controllers knew what they had to do. I'm pretty sure the engineers around M. Cerny had a satisfactory answer to most of the problems.

This leaves us with the question: "Is the higher GDDR5 latency a problem at all?" It can be answered only by the people who write the software. Take an extremely lousy coder and you will choke the bandwidth with stalls, taking you straight to latency hell. Take a good programmer who knows how to organize data and code, and stalls with noticeable latency will be almost nonexistent in code fetches and negligible in data fetches.
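
As a toy illustration of that last point (the absolute numbers are dominated by interpreter overhead, so treat it as a sketch of the idea rather than a hardware benchmark): walking the same data in order is far friendlier to caches and prefetchers than hopping around it at random.

```python
# Sketch: sequential vs. random traversal of one contiguous buffer.
import array
import random
import time

N = 5_000_000
data = array.array('q', range(N))   # ~40 MB of 64-bit ints, bigger than typical caches
ordered = list(range(N))            # visit indices in order
scattered = ordered[:]              # same indices, shuffled visit order
random.shuffle(scattered)

def walk(indices):
    start = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return time.perf_counter() - start

print(f"sequential: {walk(ordered):.2f}s   random: {walk(scattered):.2f}s")
```

On a typical machine the ordered walk finishes noticeably faster; that locality discipline is the same thing that keeps memory stalls out of well-organized game code.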



AMD on GDDR5:


A weak GPU core becomes fast thanks to GDDR5 over DDR3... FACT.

http://www.amd.com/us/products/technologies/gddr5/Pages/gddr5.aspx



drkohler said:
petalpusher said:
There's no latency advantage with DDR3, it's a myth. In fact, GDDR5 has better latency.

[drkohler's full reply is quoted above]

I agree.

DDR3 has better latency.

But for modern CPUs, I think the difference between DDR3 and GDDR5 won't be that huge... that latency advantage won't exist for a game console OS.