
Anybody think 8 GB of GDDR5 is a mistake for PS4?

Kynes said:

PS4 has an OS. It uses a driver and a graphics API. You don't program to the metal. The biggest problem with DirectX is the draw-call overhead, but there are ways to hide that overhead. Those efficiency numbers are due to the nature of the graphics chips ATI had when they developed the X360, with their 5-way VLIW instructions; since GCN, AMD doesn't use VLIW architectures, and nVidia dropped them several years ago. You are mixing things up, ethomaz.

PS: Do you know what a routing nightmare 32 chips are? You want your motherboard to be simple, with as few layers as possible.

Do you even know what you are talking about? VLIW matters only for GPGPU purposes, not graphics... the change was made so the compute power would match the monstrous graphics power... SIMD over VLIW means more efficiency for compute processing (GPGPU).

The PS4 API is close to the metal... all the developers say that... libgcm, used on the PS3 (and the new version for the PS4), is what developers call "coding to the metal"... even DirectX on the 360 was customized to work closer to the metal than the Windows version... on consoles you have direct access to the hardware without abstraction layers.

Yeah... keep calling me a bullshitter... I don't care... I was only sharing what the developers shared with us.

About the 32 chips, I know... PCB, I/O interface, etc... the cost of every component needed to put 32 chips in the machine goes up (8GB is 64Gb, so with today's 2Gb GDDR5 chips you need 32 of them; 4Gb chips would halve that), but until the 4Gb GDDR5 modules are out I can't estimate the extra cost.
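To make Kynes' aside about draw-call overhead concrete, here is a minimal C++ toy model. The per-call and per-triangle costs are made-up numbers standing in for real driver and GPU time, so treat it as an illustration of why batching hides API overhead, not a measurement of any real API:

```cpp
#include <cstdio>

// Toy model: every API draw call pays a fixed driver/validation cost,
// no matter how little it draws. Both constants are hypothetical.
constexpr double kPerCallOverheadUs = 25.0;  // assumed driver cost per call
constexpr double kPerTriangleUs     = 0.001; // assumed GPU cost per triangle

double naive_submit(int objects, int trisPerObject) {
    // One draw call per object: the overhead scales with object count.
    return objects * (kPerCallOverheadUs + trisPerObject * kPerTriangleUs);
}

double batched_submit(int objects, int trisPerObject) {
    // One batched/instanced call (what real APIs enable with e.g.
    // glDrawArraysInstanced): the overhead is paid exactly once.
    return kPerCallOverheadUs + objects * trisPerObject * kPerTriangleUs;
}

int main() {
    const int objects = 10000, tris = 500;
    std::printf("naive:   %.0f us\n", naive_submit(objects, tris));
    std::printf("batched: %.0f us\n", batched_submit(objects, tris));
}
```

With 10,000 objects the naive path pays the per-call cost 10,000 times while the batched path pays it once, which is why engines batch and instance aggressively on PC.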



Kynes said:

PS: Do you know what a routing nightmare 32 chips are? You want your motherboard to be simple, with as few layers as possible.

Obviously that's the key problem. Look at how it is done on the latest nVidia Titan card (24 chips only). Obviously the PCB cost will increase compared to the PS3 (probably from $8 to $20... so no big deal in the end).



drkohler said:
fordy said:


That's highly debatable... GDDR5 is faster, but it also has much higher latency than DDR3. It's prefixed with "G" for a reason: the high latency doesn't affect a highly parallel GPU as much, due to its ability to keep processing vastly more work while it waits for a memory fetch.

I wish people would stop chanting the "latency mantra". IT IS FALSE.

We are living in the days of high-performance, cache-driven memory access. Long gone are the days when CPUs grabbed data from main memory one variable (one line of code) at a time, and long gone are the days when programmers coded their source with pointer arithmetic and local variables. Nowadays, "ugly" programming is what is called for (large fixed-size arrays for variables, global variables, and routines grouped according to access probabilities, etc.). What this means is that in the optimum case programmers no longer have the CPU in mind; they "program for the memory controller".

Why? Good memory controllers use the longest burst modes they can - this means that your CPU/GPU has close to 100% cache hits for code and very high hit rates in the data cache. Hence the latency of main memory is almost irrelevant (provided you program "ugly"), but memory throughput becomes decisive. And the GDDR5 in the PS4 wins hands down against any other solution.


So why not have the best of both worlds and remove the latency overhead when moving data into the cache?

There's a reason WHY we aren't using GDDR5 as CPU memory in PC architecture. Use your head.
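A rough single-file C++ sketch of both points in the exchange above (sizes and seed are arbitrary; an illustration, not a rigorous benchmark): a sequential scan is throughput-bound and barely notices latency, a dependent pointer chase pays the full main-memory latency on every load, and several independent chases running at once overlap their misses, which is how a highly parallel GPU shrugs off GDDR5's higher latency.

```cpp
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

int main() {
    const size_t N = 1 << 24;        // ~16M entries, far bigger than any cache
    std::vector<size_t> next(N);

    // Sattolo's algorithm: turn 0..N-1 into one big random cycle, so a
    // pointer chase makes N dependent, cache-hostile loads.
    std::iota(next.begin(), next.end(), 0);
    std::mt19937_64 rng{42};
    for (size_t i = N - 1; i > 0; --i) {
        std::uniform_int_distribution<size_t> pick(0, i - 1);
        std::swap(next[i], next[pick(rng)]);
    }

    using clk = std::chrono::steady_clock;
    auto ms = [](clk::duration d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };

    // 1) Sequential scan: consecutive addresses, long bursts, the prefetcher
    //    hides latency almost completely. Throughput-bound.
    auto t0 = clk::now();
    size_t sum = 0;
    for (size_t i = 0; i < N; ++i) sum += next[i];
    auto t1 = clk::now();

    // 2) Single pointer chase: every load waits for the previous one, so the
    //    full memory latency is paid N times. Latency-bound.
    size_t a = 0;
    for (size_t i = 0; i < N; ++i) a = next[a];
    auto t2 = clk::now();

    // 3) Four independent chases: the memory system keeps four misses in
    //    flight at once - the same trick a GPU plays with thousands of
    //    threads. 4x the loads of (2), far less than 4x the time.
    size_t c0 = 0, c1 = N / 4, c2 = N / 2, c3 = 3 * N / 4;
    for (size_t i = 0; i < N; ++i) {
        c0 = next[c0]; c1 = next[c1]; c2 = next[c2]; c3 = next[c3];
    }
    auto t3 = clk::now();

    std::printf("sequential scan: %lld ms (%zu)\n", (long long)ms(t1 - t0), sum);
    std::printf("1-chain chase:   %lld ms (%zu)\n", (long long)ms(t2 - t1), a);
    std::printf("4-chain chase:   %lld ms (%zu)\n", (long long)ms(t3 - t2),
                c0 ^ c1 ^ c2 ^ c3);
}
```

So, toward fordy's question: the latency doesn't need to be "removed" as long as there is enough independent work in flight to cover it; for a GPU fed by a decent memory controller, raw throughput is the scarcer resource.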



Devs love it, forum goers say it doesn't make sense.
Lemme think...



ethomaz said:
Kynes said:

PS4 has an OS. It uses a driver and a graphics API. You don't program to the metal. The biggest problem with DirectX is the draw-call overhead, but there are ways to hide that overhead. Those efficiency numbers are due to the nature of the graphics chips ATI had when they developed the X360, with their 5-way VLIW instructions; since GCN, AMD doesn't use VLIW architectures, and nVidia dropped them several years ago. You are mixing things up, ethomaz.

PS: Do you know what a routing nightmare 32 chips are? You want your motherboard to be simple, with as few layers as possible.

Do you even know what you are talking about? VLIW matters only for GPGPU purposes, not graphics... the change was made so the compute power would match the monstrous graphics power... SIMD over VLIW means more efficiency for compute processing (GPGPU).

The PS4 API is close to the metal... all the developers say that... libgcm, used on the PS3 (and the new version for the PS4), is what developers call "coding to the metal"... even DirectX on the 360 was customized to work closer to the metal than the Windows version... on consoles you have direct access to the hardware without abstraction layers.

Yeah... keep calling me a bullshitter... I don't care... I was only sharing what the developers shared with us.


AMD left the VLIW architecture mostly because of the problems they had optimizing the real-time compiler needed to feed the 5-way units. It also helped on the GPGPU front, but ask any bitcoin miner whether the 5870 graphics chips were good for mining. The PS3 API is mostly OpenGL; you don't use assembler to develop for it. Even CUDA is a high-level language, and it's the closest to the metal you get on nVidia cards.

Of course I will keep calling you a bullshitter, because that's what you are doing right now.



fordy said:


So why not have the best of both worlds and remove the latency overhead when moving data into the cache?

There's a reason WHY we aren't using GDDR5 as CPU memory in PC architecture. Use your head.

Let me guess: you did not understand what I wrote, and you don't understand why there is memory latency anytime, anywhere. The three key reasons why we never had GDDR5 as main memory in the PC world are (in order of importance): price, price and price.



drkohler said:
fordy said:


That's highly debatable... GDDR5 is faster, but it also has much higher latency than DDR3. It's prefixed with "G" for a reason: the high latency doesn't affect a highly parallel GPU as much, due to its ability to keep processing vastly more work while it waits for a memory fetch.

Why? Good memory controllers use the longest burst modes they can - this means that your CPU/GPU has close to 100% cache hits for code and very high hit rates in the data cache. Hence the latency of main memory is almost irrelevant (provided you program "ugly"), but memory throughput becomes decisive. And the GDDR5 in the PS4 wins hands down against any other solution.


Well, according to http://fgiesen.wordpress.com/2013/03/05/mopping-up/ it matters quite a lot how you write your code if you want it to run *efficiently*, i.e. use the caches the right way.
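In the same spirit as that post, a small illustrative C++ sketch (matrix size chosen arbitrarily): the exact same summation done in two traversal orders, where only the access pattern, and therefore the cache and burst behavior, differs.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t R = 4096, C = 4096;          // 16M floats, ~64 MB
    std::vector<float> m(R * C, 1.0f);

    using clk = std::chrono::steady_clock;
    auto ms = [](clk::duration d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };

    // Row-major traversal: consecutive addresses, every cache line fully
    // used, burst- and prefetcher-friendly.
    auto t0 = clk::now();
    double rowSum = 0;
    for (size_t r = 0; r < R; ++r)
        for (size_t c = 0; c < C; ++c)
            rowSum += m[r * C + c];
    auto t1 = clk::now();

    // Column-major traversal: a stride of 4096 floats per step, roughly one
    // cache miss per element once the matrix no longer fits in cache.
    double colSum = 0;
    for (size_t c = 0; c < C; ++c)
        for (size_t r = 0; r < R; ++r)
            colSum += m[r * C + c];
    auto t2 = clk::now();

    std::printf("row-major:    %lld ms (%.0f)\n", (long long)ms(t1 - t0), rowSum);
    std::printf("column-major: %lld ms (%.0f)\n", (long long)ms(t2 - t1), colSum);
}
```

Same data, same arithmetic, and typically a severalfold time difference: the "write your code the right way" point from the linked post in its simplest form.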



fordy said:

There's a reason WHY we aren't using GDDR5 as CPU memory in PC architecture. Use your head.

That will change... AMD will use GDDR5 for the new APUs too.



ethomaz said:

fordy said:

There's a reason WHY we aren't using GDDR5 as CPU memory in PC architecture. Use your head.

That will change... AMD will use GDDR5 for the new APUs too.


So you say they won't use DDR3 or DDR4 on their new APUs? Do you want to make a bet?



8GB of GDDR5 RAM is an amazing move. Most games today don't use that much RAM because they are still mostly tied to 7th-generation development; despite some PC games being more advanced, the new engines and new graphical effects are hardly in use yet.

The high overhead in a typical PC also affects the graphical capabilities of the machine, and most PCs can hardly squeeze out the full power of their setup compared to a console with low overhead and a dedicated, customized GPU/CPU; not to mention PC developers have a spec target to reach, making a lot of PC games not as cutting-edge as some might think.

You also need to take into account that PC games, despite the ability to customize graphics quality, are built around a target amount of RAM so that they don't isolate themselves to a small install base. The 8GB is going to be a big help in preventing the GPU, which the PS4 relies on heavily for its graphics, from being bottlenecked down the road. It isn't strictly necessary to have this much RAM, but with less it would strain developers, increasing costs and reducing the graphical capabilities, which would waste the PS4's setup.

The added cost is supposedly around $50, which is small for the return they get from it, especially in the later, more successful years of the system. In total the PS4 might cost $500 tops to build, package, and deliver to retailers. They will probably sell it for $400 and lose $100 on each unit. Compared to the $240+ they lost on each PS3, that is nothing, especially when you take into account that they didn't spend a crazy amount on R&D developing technologies for the device. The hidden cost of the PS3 goes far beyond the original $240+ loss per system. That isn't the case for the PS4. They could probably sell the thing for $200 and still lose less money than they did on the PS3. That would be crazy for Sony right now and in general; I'm just trying to give some perspective.



Before the PS3 everyone was nice to me :(