ethomaz said:
Yeap... and adding support for a higher-priced memory means what? |
They want better graphics performance? They will use both memory types at the same time.
ethomaz said:
I'm just arguing because the guys here are saying I'm wrong, but they are wrong... I just wrote the name of the API wrong because I didn't remember it. They are mixing up LibGCM with PSGL. |
Well, then let them mix it up. If they were devs they would know, but since none of us are graphics programmers it is pointless. Sony provides low-level APIs just like Nintendo and MS, and that's it.

| fordy said: Wow.. a "s/he said..s/he said" thread... grrreat. |
Look, it is apparent you don't know how memory access works. Yes, gddr5 is there to handle large chunks of data. Guess what, ddr3 memory is also there to handle large chunks of data! My first post tried to explain in simple terms why the gddr5 latency problem has become a myth (assuming what I called "ugly" programming). Technically, there are a lot of what-ifs, and things get rather complicated rather fast even though we are "only" dealing with a "get some memory" problem.
What you apparently don't know is the simple fact that every memory controller in every gpu/cpu in the world has a limited burst length. Whether it sits in a gpu or in a cpu, the controller HAS to use multiple burst sequences to do whatever it is supposed to do. If you want to learn what this means on the transistor level, you'd have to find controller manuals and figure out the timing diagrams (did I mention that I DESIGNED memory cards and memory controllers decades ago?).
In the case of a gpu, many, many, many bursts may target consecutive memory addresses. In the case of a cpu, many, many, many bursts may target consecutive memory addresses ("ugly" programming), or small bursts target non-aligned addresses ("clean, old-style" programming). The net result is that bandwidth wins over latency, since today's caches are so big that many, many, many bursts happen more often than single bursts.
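To make that concrete, here is a minimal C sketch (my own illustration, nothing from any console SDK): it reads the same large buffer once sequentially and once with a large stride. On typical hardware the sequential pass is far faster, because consecutive addresses let the controller issue long back-to-back bursts and the prefetcher hides the latency, while the scattered pass pays the latency on almost every access.

```c
/* Sketch only: sequential vs. strided reads over the same buffer.
 * Sequential access -> long bursts, bandwidth-bound.
 * Strided access    -> short scattered bursts, latency-bound. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)64 * 1024 * 1024)   /* 64M ints = 256 MB */
#define STRIDE 4099                    /* prime stride to defeat prefetching */

static double elapsed(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void) {
    int *buf = malloc(N * sizeof *buf);
    if (!buf) return 1;
    for (size_t i = 0; i < N; i++) buf[i] = (int)i;

    struct timespec t0, t1;
    volatile long long sum = 0;   /* volatile so the loops aren't optimized away */

    /* Sequential pass: consecutive addresses. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++) sum += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("sequential: %.3f s\n", elapsed(t0, t1));

    /* Strided pass: same total work, scattered addresses. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < STRIDE; s++)
        for (size_t i = s; i < N; i += STRIDE) sum += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("strided:    %.3f s\n", elapsed(t0, t1));

    printf("checksum: %lld\n", (long long)sum);
    free(buf);
    return 0;
}
```

The point is only the shape of the result, not the exact numbers: whenever most of your traffic looks like the first loop, raw bandwidth matters far more than per-access latency.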
This is getting waaaay too technical. So here is the ultimate result: 8 GB of gddr5 in the PS4 wins hands down against any other pc-like setup. (It will be interesting to study the MS solution in the NextBox in detail, should that ever be revealed, but it is already known they use dedicated hardware "move" engines to connect ddr3 to edram to gpu to remedy the bandwidth problem.)
....and this ends my contribution to the thread.
The PS4 is a social machine and also runs a basic video rendering program. It needs all the RAM it can get. A lot of the features need to run while the game is active; that's where the RAM is needed. The games need it too. But people want all their programs running at the same time, and running smoothly, especially when recording their playthroughs. Rendering video files needs a huge amount of RAM.
When I video edit, for example, I use around 5GB. When I render it out, that jumps to 16GB. This is why I have 24GB.
| Kynes said: Ethomaz, LibGCM is a bypass for some things, for others libGCM just relies on OpenGL. There is even a wrapper that executes OpenGL native code, and 99% of the developers use that wrapper. I can't believe you have ever developed anything. |
Well, it appears that he doesn't know that OpenGL ES is a low-level API, so...
Was going to reply, but really I don't know why I bother,
A) because well... what always happens.
B) You've got a clear handle on things.

| Kynes said: Ethomaz, LibGCM is a bypass for some things, for others libGCM just relies on OpenGL. There is even a wrapper that executes OpenGL native code, and 99% of the developers use that wrapper. I can't believe you have ever developed anything. |
Here's a good article about the PS3 DevKit.
http://sandstormgames.ca/blog/tag/libgcm/
It talks about PSGL (the API based on OpenGL and nVidia Cg) and libGCM... even showing that you can only access all the hardware features by using libGCM.
Just focus for one minute on the PS3 DevKit... not PC development... did you have access to the PS3 DevKit? If not, ask someone who has... I'm not disagreeing with you to troll or to make myself look better than you... I'm only showing how PS3 games are made using LibGCM instead of the OpenGL-based API.
LibGCM doesn't even have features present in OpenGL 1.0... LibGCM is a low-level API, and that makes all the difference when you work with fixed hardware...
BTW, I agree with you that multiplatform developers use an OpenGL wrapper, but the exclusive games for PS3 are developed directly with LibGCM... I think that's the confusion you are making here.
Edit - I didn't even get into the fact that PSGL (the OpenGL on PS3) is emulated on top of libGCM... that means OpenGL calls on PS3 end up calling functions in libGCM.
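For anyone who hasn't worked with layered graphics APIs, here's a tiny hypothetical C sketch of that layering (the names, types and opcodes are all invented for illustration, NOT the real PSGL or libGCM interfaces): a high-level "GL-style" call does nothing but translate into commands for a lower-level command-buffer library, which is why the wrapper can never expose more than the layer underneath it.

```c
/* Hypothetical sketch of a high-level wrapper built on a low-level
 * command-buffer library. Everything here is made up for illustration. */
#include <stdint.h>
#include <stdio.h>

/* Low-level layer (libGCM-like role): just pushes raw command words. */
typedef struct { uint32_t words[256]; int len; } CmdBuffer;

static void lowlevel_emit(CmdBuffer *cb, uint32_t opcode, uint32_t arg) {
    cb->words[cb->len++] = opcode;
    cb->words[cb->len++] = arg;
}

/* High-level layer (PSGL-like role): a GL-style call that is merely
 * translated into low-level commands underneath. */
static void highlevel_clear(CmdBuffer *cb, uint32_t rgba) {
    lowlevel_emit(cb, 0x100u /* invented CLEAR opcode */, rgba);
}

int main(void) {
    CmdBuffer cb = {0};
    highlevel_clear(&cb, 0xFF000000u);  /* going through the wrapper */
    lowlevel_emit(&cb, 0x200u, 42u);    /* hitting the low level directly */
    printf("%d command words queued\n", cb.len);
    return 0;
}
```

Exclusives that talk to the lower layer directly skip the translation step and can use whatever the hardware exposes, which is the point being made about LibGCM versus the OpenGL-style wrapper.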
There are really only two types of people who can answer that question: console game developers and SONY software engineers.
4 ≈ One
| theprof00 said: Devs love it, forum goers say it doesn't make sense. Lemme think... |
Forum goers are always right - experts are always wrong.
It's the same with football (soccer) forums. There are several experts, coaches and highly skilled players who could build a good team to play successful football, but the forum troll and self-proclaimed best coach in the world, who can't even play football or is at best a third-class player (being a pro, he would not troll around in forums), always knows better, is 100% right and keeps complaining (mostly by simply repeating stuff he has read in the newspapers), without ever realising that managing a football team is more than just posting around in forums and kicking a ball twice a year. As the developers are already working with that next-gen stuff and will do so for the next 6-10 years, they should know what they are talking about.
I'm far from being an expert, but I think the low RAM use and graphics standard of current high-end PC games are a result of multiplatform, console-compatible games, and they are far from optimal (because of low sales, no one really cares to optimize PC games), as they are slowed down and handicapped by console versions and (downward) compatibility with all the different PC configurations.
Nope - you have to consider that a better GPU could cause heat and power draw issues. At least doubling the RAM allows for more creativity than a very minor improvement in graphics would.