
PS4 Blowout: GPGPU, DirectX 11; Sony London to “Set the Bar for the Industry”, Global Illumination, Instant Radiosity, More

I don't think you'll ever see a PS3-to-Wii gap again, not even close. The N64-to-PS1 gap was nowhere near that. Both the PS3 and 360 were tremendously more powerful than the Wii, but it was a unique situation. Both the 360 and PS3 were over-engineered (in my opinion) in order to make that dramatic step into HD (and to push Blu-ray). They were so expensive to make and sell that they were money losers for a very long time. That's why we saw them wait 7 or 8 years for a successor. Just bizarre. And equally bizarre was Nintendo launching a system that was barely beyond the GameCube in processing power.

 

We have reached a different place in games history, though. In the past, developers had great game ideas but were constrained by the power of the computers they were working with. That's not really the case any more. Even the Xbox 360 and PS3 can produce amazing games and visuals. The constraints are more about the time and effort it takes to make these games. You can have a wonderful graphics card, but you will never see its power unless you can pay programmers to craft more and more beautiful details using the tech. There is a limit to how much you can do that and still make a profit, and real questions as to whether it makes a game better or is worth your time at all.

 

The Wii U is a significant improvement over the PS3/360. 4x the RAM is a nice jump, and its graphics chip is more advanced. Expecting PS4 vs. Wii U to look like PS3 vs. Wii is not reasonable at all. The PS3 was upwards of 8 or 10 times more powerful than the Wii; the PS4 would need 20 GB of RAM and a graphics chip that hasn't even been dreamed of yet to repeat that. The PS4 will be more powerful than the Wii U, there's little question. But it will be somewhat (2x? 4x?) more powerful, not dramatically more powerful. We have entered the HD age, so the leaps will be less impressive.



Soundwave said:

I posted this even before ... PlayStation 4 predictions:

- A10 main APU (roughly 200 GFLOPs) + 28nm customized 7870 GPGPU (approx. 1.8 TFLOP)
- 4GB main RAM (DDR4)
- New Dual Shock Move controller (transforming controller, turns into a dual wielded Move)
- Ships with 720p (HD) PSEye Camera to go with the new controller

- Can wirelessly stream a video signal to tablets or smartphones or the Vita. 

- 4x USB 3.0 ports, Blu-ray drive
- Proprietary Sony-brand HDDs only (people will complain about this).
- $399.99 Basic (32GB flash), $499.99 Deluxe (game + HDD included)
- November 2013 launch w/ Uncharted Trilogy (1080p), Battlefield 4, Modern Warfare 4, Madden NFL, GTAV. Metal Gear Solid: Ground Zeroes as well in the launch window.
- Same size as the PS3 Slim (first rev), 100 watt power consumption, low power mode.

It will destroy the Wii U in raw horsepower.

100W idling or 100W max? o_O Actually, even 100W at idle is too low to be possible if that's the spec you're hoping for.



Soleron said:
Scoobes said:
...

I'm pretty sure Matlab has a toolbox that utilises the GPU, as does other scientific software, especially for molecular modelling and bioinformatics. I also remember reading that some anti-virus software had started making use of the GPU to speed things up.

All I'm seeing about the antivirus is claims about speedup. Do you have a download link or purchase link?

Re: Matlab, what they have is the same as the AMD driver: tools that let you speed up code that YOU write. You have to do the coding yourself, and that's the hard part that I'm saying won't happen. Does any commercial software use that Matlab toolkit?

I know some physics PhDs in the lab where I attend lectures use GPGPU, but they use it on one-off simulations that only run on their lab computer, and they have to be PhDs to even design and operate the code (and it takes the length of a PhD to do so). That's not practical.

I think it was Kaspersky, although after a bit more reading it turns out they used CUDA, so I'm not sure if it's implemented in their products.

I mentioned Matlab because doesn't that technically count as commercial software? Coding in Matlab isn't really coding in the traditional sense.

Anyway, I googled "GPU acceleration software" and found:

http://musemage.com/

http://www.adobe.com/products/speedgrade/features.html

http://www.sonycreativesoftware.com/vegaspro/gpuacceleration

http://www.cyberlink.com/products/mediaespresso/gpu-optimization_en_GB.html?&r=1

There are also the password-cracking programs that Zarx mentioned earlier. So for a growing number of applications, GPUs are being utilised.



BlueFalcon said:
Soleron said:
1) Commercial software products. As in, something that takes advantage of that GPGPU functionality. Hardware support means nothing without actual uses. For hardware, yes, every PC graphics card since 2007 and all three next-gen consoles support it. However, I expect exactly zero games will use it.

1) Modern uses already exist in games

2) You are relegating GPGPU to accelerating CPU-related functions only, but it's flexible enough to be used for graphics acceleration that's faster than traditional means. That brings me to point #3 and the rest of my post:

3) DirectCompute = GPGPU for graphics

For example, to perform tessellation, we need geometry units/shaders dedicated to that task. To shade textures, we have Texture Mapping Units (TMUs). Now imagine creating a GPU that is built from the ground up to perform a wide variety of tasks; in other words, it's flexible enough to have some kind of unit that can perform general-purpose computing for graphical processing. For simplicity, it can just be called a Compute Unit. It's a general-purpose unit that can perform some CPU-related functions, but because it's so advanced, it can also perform certain graphical functions faster, since it can schedule the workload more dynamically with a dynamic compute scheduler.

GPGPU isn't necessarily using a GPU for physics only (like Nvidia's PhysX). It just means taking a GPU and running general-purpose code on it, which can include the graphics effects we already have, either taken to a level that would be too slow on non-compute units or with the performance hit of these advanced features minimized compared to older, less GPGPU-focused architectures. What you aren't realizing is that AMD's modern architecture (Graphics Core Next, or GCN) has stream processors (that's your 2048 in the 7970), texture units (that's your 128), ROPs (that's your 32), geometry engines, and Compute Units (32 of those) with a dedicated dynamic scheduler. The first four are not really different from other architectures, but the Compute Units plus the dynamic scheduler are what's driving GPGPU to the next level here. The dynamic scheduler in a GPGPU-focused architecture can use the compute units to perform all kinds of graphical calculations far faster than traditional architectures. While the RV7xx in the Wii U can do GPGPU, it's too primitive, and most importantly the compute part of the HD 7000 series is completely new and wasn't in the HD 4000-6000 series. If GCN finds its way into the PS4/Xbox 720, it could easily be used to accelerate graphics.
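To make "running general-purpose code on a GPU" concrete, here is a minimal sketch in CUDA (a DirectCompute or OpenCL version looks almost identical). This is just the textbook SAXPY example, not code from any game engine: thousands of GPU threads each process one array element.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One GPU thread per array element: y[i] = a * x[i] + y[i]
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]); // expect 4.0

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```

The exact same pattern, pointed at lighting or shadow data instead of a plain array, is what the "compute for graphics" examples below boil down to.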

You say PC graphics cards since 2007 were ready for GPGPU, but this misses the point that none of them can use DirectCompute features well, because they lack a proper dedicated dynamic scheduler (the part that issues work to the compute units performing general-purpose computing via DirectCompute) and dedicated compute units that execute those functions efficiently. Since they lack proper compute units, they can't use DirectCompute effectively and are thus much less efficient in games.

You say you expect zero games to use it, but that is already untrue: some modern games are already using GPGPU via compute units/shaders, like Dirt Showdown, Sleeping Dogs, Hitman Absolution and Sniper Elite V2. Instead of using traditional graphics architecture units, modern game engines can accelerate graphics using the "GPGPU" functionality of compute units. Right now these are effectively used for global illumination, dynamic lighting, contact-hardening soft shadows, accelerating ambient occlusion, post-processing, depth of field and anti-aliasing!! Wow, all of these should sound familiar to any gamer :)

"In short this allows all lights in the scene to be truly dynamic lights, rather than just the age old hack of rendering 2D glows. This is achieved by building global lists of all lights in the scene, and then using DirectCompute to produce a culled light list for tiled regions of the screen. During the actual Pixel Shader lighting phase, only the culled light list for a given pixel is processed. This makes it possible to have thousands of dynamic lights in a scene and still achieve playable frame rates."

 

Source: http://blogs.amd.com/play/2012/07/03/dirt-showdown-amd-benchmark-guide/
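The actual Dirt Showdown shaders aren't public, and the real thing is written in HLSL for DirectCompute, but the tiled light culling the quote describes is simple enough to sketch. Below is a hypothetical CUDA version under simplified assumptions: one thread block per 16x16 screen tile, lights treated as circles in screen space, and no per-tile depth bounds test. All struct and parameter names are invented for illustration.

```cuda
#define TILE 16                 // launch with dim3 block(TILE, TILE)
#define MAX_LIGHTS_PER_TILE 256 // and dim3 grid(tilesX, tilesY)

struct Light { float2 pos; float radius; }; // screen-space circle (simplified)

__global__ void cullLights(const Light *lights, int numLights,
                           int *tileLightCount, int *tileLightList, int tilesX)
{
    __shared__ int count; // lights found for this tile so far
    int tileIdx = blockIdx.y * tilesX + blockIdx.x;
    int tid = threadIdx.y * TILE + threadIdx.x;

    if (tid == 0) count = 0;
    __syncthreads();

    // Pixel-space bounds of this tile (a real engine builds a view-space
    // frustum from the projection matrix plus per-tile min/max depth).
    float2 tileMin = make_float2(blockIdx.x * TILE, blockIdx.y * TILE);
    float2 tileMax = make_float2(tileMin.x + TILE, tileMin.y + TILE);

    // The 256 threads of the block cooperatively test the global light list.
    for (int i = tid; i < numLights; i += TILE * TILE) {
        Light L = lights[i];
        // circle vs. rectangle overlap test
        float cx = fmaxf(tileMin.x, fminf(L.pos.x, tileMax.x));
        float cy = fmaxf(tileMin.y, fminf(L.pos.y, tileMax.y));
        float dx = L.pos.x - cx, dy = L.pos.y - cy;
        if (dx * dx + dy * dy <= L.radius * L.radius) {
            int slot = atomicAdd(&count, 1); // append to this tile's list
            if (slot < MAX_LIGHTS_PER_TILE)
                tileLightList[tileIdx * MAX_LIGHTS_PER_TILE + slot] = i;
        }
    }
    __syncthreads();

    if (tid == 0)
        tileLightCount[tileIdx] = min(count, MAX_LIGHTS_PER_TILE);
}
```

The pixel shader then only loops over `tileLightList` for its own tile, which is how a scene can hold thousands of dynamic lights at playable frame rates.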

[Screenshots: Global Illumination Off vs. Global Illumination On]

[Screenshots: no soft shadows vs. with soft shadows]

You can also use the compute function of GPGPU to accelerate High Definition Ambient Occlusion (HDAO):

[Screenshots: HDAO Low Quality vs. HDAO High Quality]

Now watch what happens when you combine all these graphics effects at once (global illumination + HDAO + soft shadows, all accelerated using the GPGPU function of the new compute units of modern graphics cards). Take an architecture with poor GPGPU functionality, like the GTX 680 or HD 6970 (VLIW4), and compare it against a more modern architecture like Graphics Core Next, which was made from the ground up for GPGPU and has 28-32 dedicated compute units:

[Benchmark chart: Dirt Showdown with global illumination, HDAO and soft shadows]

Notice that only the HD 7000 series can achieve > 30 fps with advanced lighting, while older GPU architectures cannot. While you are right that GPGPU cannot solve everything and cannot substitute for some complex calculations where you still need a powerful CPU (AI of NPCs/crowds, real-time physics, etc.), you missed an entire aspect of how GPGPU can be used in games: developers can now use compute shaders (or compute units, if you will) via DirectCompute to accelerate graphics effects like lighting and shadows, and thus make games more realistic without turning them into a slideshow.

You can use these general purpose compute shaders to accelerate all kinds of graphics effects in games. 

Depth of Field in Sniper Elite V2

 

"In photography or cinematography the camera can only focus on one point in distance. All objects in the scene will appear to be more or less blurry depending on their distance to this focus point and the physical characteristics of the lens setup. Since optics (e.g. using a sniper scope or binoculars) is a major game play element in Sniper Elite V2 this effect is simulated by applying a DirectCompute accelerated post process to the image." 

 

Source: http://blogs.amd.com/play/2012/06/25/sniper-elite-v2-amd-benchmark-guide/
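Rebellion's actual filter isn't public, but the circle-of-confusion idea in that quote is easy to sketch. Here is a hypothetical, deliberately naive CUDA post-process: each pixel's blur radius grows with its distance from the focal plane. The parameter names (focusDist, blurScale) are invented, and a real implementation would use a separable, weighted filter rather than this box gather.

```cuda
// Naive depth-of-field gather: blur radius = distance from the focal plane.
__global__ void depthOfField(const float4 *color, const float *depth,
                             float4 *out, int width, int height,
                             float focusDist, float blurScale)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    // Circle of confusion: zero at the focal plane, capped at 8 pixels.
    float coc = fminf(fabsf(depth[idx] - focusDist) * blurScale, 8.0f);
    int r = (int)coc;

    float4 sum = make_float4(0.0f, 0.0f, 0.0f, 0.0f);
    int samples = 0;
    for (int dy = -r; dy <= r; ++dy)
        for (int dx = -r; dx <= r; ++dx) {
            int sx = min(max(x + dx, 0), width - 1);  // clamp to the
            int sy = min(max(y + dy, 0), height - 1); // screen edges
            float4 c = color[sy * width + sx];
            sum.x += c.x; sum.y += c.y; sum.z += c.z; sum.w += c.w;
            ++samples;
        }
    out[idx] = make_float4(sum.x / samples, sum.y / samples,
                           sum.z / samples, sum.w / samples);
}
```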

 

You again end up with much faster performance on a GPGPU-focused architecture that can use DirectCompute well, like the HD 7000 series, compared to less advanced GPGPU architectures like the GTX 580/680 and HD 6970:

[Benchmark chart: Sniper Elite V2]

There is yet another game that already uses GPGPU via DirectCompute -- Sleeping Dogs.

 

"Sleeping Dogs uses the HDAO method, which is renowned for the quality and accuracy of its lighting simulation. And in practice, HDAO enables gamers to see shadows in cracks, corners and crevices where the shadows you’d expect in real life would not otherwise appear in a game. However, this renown comes at a price: performance. Thankfully, we were able to mitigate the performance penalty by migrating the lighting computation to DirectCompute, giving the breathtaking compute capabilities of the GCN Architecture a runway to strut its stuff." 

 

Source: http://reader.mreotech.com/sleeping-dogs-gaming-evolved-and-you-2/
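HDAO itself is AMD's proprietary technique, so the sketch below is only the generic screen-space ambient occlusion idea the quote describes, written as a hypothetical CUDA kernel: darken pixels whose neighbourhood is dominated by closer geometry (cracks, corners, crevices). The radius and falloff constants are arbitrary tuning values, not anything from Sleeping Dogs.

```cuda
// Generic screen-space ambient occlusion from a depth buffer.
__global__ void ssao(const float *depth, float *ao,
                     int width, int height, float strength)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float center = depth[y * width + x];
    float occlusion = 0.0f;
    const int R = 4; // sampling radius in pixels (tuning parameter)

    for (int dy = -R; dy <= R; ++dy)
        for (int dx = -R; dx <= R; ++dx) {
            int sx = min(max(x + dx, 0), width - 1);
            int sy = min(max(y + dy, 0), height - 1);
            float d = center - depth[sy * width + sx];
            // Neighbours in front of this pixel occlude it; the
            // contribution saturates as the depth gap grows.
            if (d > 0.001f)
                occlusion += d / (d + 0.1f);
        }

    int samples = (2 * R + 1) * (2 * R + 1);
    ao[y * width + x] = fmaxf(1.0f - strength * (occlusion / samples), 0.0f);
}
```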

Once again, the performance gap between the GPGPU-focused HD 7000 series and outdated GPGPU architectures is night and day:

[Benchmark chart: Sleeping Dogs]

 

There is another game that uses GPU compute for graphics -- Hitman Absolution.

Real-Time Global Illumination

 

 "Global illumination, or GI, is a relative newcomer in the world of DirectX® 11 graphics. GI is designed to simulate the way rays of light reflect, not just off of the first object they strike, but each successive object struck by that reflected ray. To achieve this effect, the engine renders a Reflective Shadow Map (RSM) of the scene, taken from the point of view of a light source. Using GPU compute (DirectCompute language), it populates the RSM with a list of angles from which that light can reflect off an object, and then uses the RSM to compute several bounces of that lighting across the objects in a scene. The result is a stunning improvement in the realism of the game’s lighting, which we think contributes quite a lot to a game’s atmosphere.

 

 

[Screenshots: Global Illumination Disabled vs. Enabled (accelerated via GPU compute)]

Source: http://blogs.amd.com/play/2012/11/20/hitman-absolution-in-depth/
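Here is roughly what that RSM gather looks like as a hypothetical CUDA kernel. The quote describes the real pipeline (render a Reflective Shadow Map from the light's point of view, then compute bounces from it); this sketch does a brute-force one-bounce gather that treats every RSM texel as a small virtual point light, whereas a real implementation importance-samples the RSM and handles several bounces. The struct layout and buffer names are invented.

```cuda
// One-bounce indirect lighting gathered from a Reflective Shadow Map.
struct RSMTexel { float3 pos; float3 normal; float3 flux; };

__device__ float3 normalize3(float3 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z) + 1e-6f;
    return make_float3(v.x / len, v.y / len, v.z / len);
}

__global__ void gatherIndirect(const RSMTexel *rsm, int rsmTexels,
                               const float3 *gPos, const float3 *gNormal,
                               float3 *indirect, int numPixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;

    float3 p = gPos[i];    // world-space position from the G-buffer
    float3 n = gNormal[i]; // world-space normal from the G-buffer
    float3 sum = make_float3(0.0f, 0.0f, 0.0f);

    for (int s = 0; s < rsmTexels; ++s) { // every RSM texel = one VPL
        float3 d = make_float3(rsm[s].pos.x - p.x,
                               rsm[s].pos.y - p.y,
                               rsm[s].pos.z - p.z);
        float dist2 = d.x * d.x + d.y * d.y + d.z * d.z + 1e-4f;
        float3 dir = normalize3(d);
        // Receiver and virtual light must face each other.
        float cosR = fmaxf(n.x * dir.x + n.y * dir.y + n.z * dir.z, 0.0f);
        float cosV = fmaxf(-(rsm[s].normal.x * dir.x +
                             rsm[s].normal.y * dir.y +
                             rsm[s].normal.z * dir.z), 0.0f);
        float w = cosR * cosV / dist2; // inverse-square falloff
        sum.x += rsm[s].flux.x * w;
        sum.y += rsm[s].flux.y * w;
        sum.z += rsm[s].flux.z * w;
    }
    indirect[i] = sum;
}
```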

Hitman Absolution uses compute units to accelerate global illumination, depth of field and ambient occlusion. And once again, the performance hit is mitigated on a more modern GPGPU architecture compared to outdated ones: you can see how a $280 GPU with modern GPGPU functions (HD 7950) outperforms a $500 GPU without them (GTX 680):

[Benchmark chart: Hitman Absolution]

 

I think now you see why you CAN use GPGPU for games to make them better looking:

1) It accelerates next-generation graphical effects while mitigating the performance hit compared to less advanced GPGPU architectures;

2) GPGPU is not necessarily limited to physics or CPU-related general-purpose code; it can be used for graphical effects specifically (hence the name "general purpose"). While the CPU is best for physics and AI, compute shaders on a GPGPU-focused architecture are WAY faster for accelerating lighting, shadows and ambient occlusion.

3) GCN (aka HD 7000) is far more suitable for next-generation consoles than any other 2012 GPU. Unfortunately, the Wii U uses outdated GPGPU, since RV7xx is still VLIW-5, not GCN, which means that of the next-gen consoles it is the most disadvantaged for GPGPU graphics acceleration.

Of course, complex physics effects and AI are still best performed on the CPU, but in theory you could write general-purpose code specifically for the compute units if you had 100 million consoles and the development costs were actually worth it.


I don't think he's really talking about the graphics department, though. GPGPU use for graphics acceleration is not exactly new, the 7xxx series clearly dominates in that department, and the advancements will come. He's probably talking about general-purpose computing that's not specialized and would be pointless to invest in if the CPU is already powerful enough to handle the tasks at hand.

One thing that pisses me off is that I don't know what the Wii U GPU really is. I don't really care about console power, but I really want to know what's in that thing. That's my question of the year.