
Forums - Gaming Discussion - What do you think will happen if Sony embraces the GPGPU architecture like Nintendo for next gen?

GPGPU is not really used for games... it can only be used for physics and some post-processing filters... so move on.



sunK1D said:
VGKing said:
Rumors say that Microsoft is using an APU for the next Xbox. Isn't that basically the same as GPGPU, where the embedded GPU works to take a load off the CPU?

 

The only way GPGPU can be relevant when dealing with games (graphics-intensive apps) is if you have a separate GPU working alongside your main GPU. This is why APUs are so appealing. They offer a low-cost companion GPU capable of handling CPU tasks which would otherwise decrease throughput on the main GPU if it had to handle the GPGPU work itself.

The Wii U offers a traditional CPU + GPU architecture. Although the GPU is capable of compute work, it would take a massive hit in raw rendering power if it had to handle CPU tasks as well as render graphics at the same time. A better approach is to have a faster CPU and GPU, which is how the Xbox 360 does it.

A more "modern" approach is to have a multi-core fast clocked APU and a main GPU. The CPUs in the APU will handle general tasks whereas the GPU in the APU can handle direct compute tasks and the main GPU is free to render all that next gen eye-candy. This is most likely the X720 and PS4 architecture.

Will next gen games be portable to the Wii U? Definitely, but the end result will be less than desirable.
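To make that split concrete, here's a rough sketch of how software could pick which GPU gets the compute work. It's written against the CUDA runtime purely for illustration (a console would use its own SDK, and the "integrated for compute, discrete for rendering" split is just an assumption for the example):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, d);
        // 'integrated' is set for GPUs that share physical memory with the
        // CPU (APU-style); a game could park its compute work there and keep
        // the discrete card purely for rendering.
        printf("device %d: %s (%s)\n", d, prop.name,
               prop.integrated ? "integrated - use for compute"
                               : "discrete - use for rendering");
    }
    return 0;
}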

 

OK, thank you, you seem to explain it the best :D



fillet said:
Roma said:
fillet said:
Roma said:
GPGPUs are the future of GPUs so all next gen consoles will use it or at least I think it is the future


You mean you're not sure what you're talking about?

Nobody else in this thread apart from Soleron does.

Everyone posting here is saying it will be good and not one of you even knows why.

The reason you don't know why is because it won't be used; if it were going to be, you'd have heard HOW it will be used by now.

Smoke, mirrors, BULLSHIT! = GPGPU (in the context of video games)

that's why the "I think" part of the text is there


:)

Sorry mate, it wasn't really a dig at you. It's more about the misunderstanding and just how widespread it is. Most people genuinely have been led to believe that "GPGPU" is some kind of secret weapon for offloading CPU-intensive work; it really isn't... for gaming, at least. As Soleron has pointed out in a few posts and in a few different threads, it's for very specific kinds of CPU-intensive tasks that can be run in parallel.

GPGPU is great for running simulations; hell, it's used right now in the most powerful supercomputer in the world, built from thousands of Nvidia Tesla GPUs (the same architecture as the GTX 680), for things like weather simulation. But that's where it ends. It has a very specific use - parallel computing - and it's hard to program for and isn't suited to mainstream exploitation.

The only real use at the moment, as far as gaming is concerned, is physics calculations, i.e. eye candy or icing on the cake. It certainly won't open up doors to new types of games or new types of effects; it's basically limited to fancy destruction of buildings and that kind of thing. It's been used for years in PC gaming and all that's come of it is physics, nothing more, nothing less. You could counter that the reason for this is that developers need a target that all users have, so making a game that integrally uses GPGPU functions would mean the game wouldn't even work on non-GPGPU graphics cards. This isn't true though; we're at a point where the system requirements to play the top games are high enough that the low-to-mid PC has been excluded for years already. GPGPU functions aren't a big part of DirectX - which says it all really.
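As a rough illustration of why that kind of destruction physics fits GPGPU so well: every debris fragment can be updated on its own, with no fragment waiting on any other. A toy CUDA sketch (made-up numbers, not from any real engine):

#include <cstdio>
#include <cuda_runtime.h>

// Each debris fragment is updated independently of every other fragment,
// which is exactly the kind of work hundreds of stream processors eat up.
__global__ void stepDebris(float3* pos, float3* vel, float dt, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vel[i].y -= 9.81f * dt;              // gravity
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
    if (pos[i].y < 0.0f) {               // crude bounce off the ground
        pos[i].y = 0.0f;
        vel[i].y *= -0.5f;
    }
}

int main() {
    const int n = 100000;                // 100k fragments of a wall
    float3 *pos, *vel;
    cudaMalloc(&pos, n * sizeof(float3));
    cudaMalloc(&vel, n * sizeof(float3));
    cudaMemset(pos, 0, n * sizeof(float3));
    cudaMemset(vel, 0, n * sizeof(float3));

    // One 60 Hz physics step; no fragment ever waits on another fragment.
    stepDebris<<<(n + 255) / 256, 256>>>(pos, vel, 1.0f / 60.0f, n);
    cudaDeviceSynchronize();

    cudaFree(pos);
    cudaFree(vel);
    printf("stepped %d fragments in one kernel launch\n", n);
    return 0;
}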

If GPGPU is the wonder boy of modern gaming, and it has been available for five years or so now on the PC, why haven't we seen a revolution in that area of gaming? Why are we still paying X hundred pounds/dollars for graphics cards when a crappy old low-end card could supposedly do the same thing with GPGPU? The answer is: it can't. GPGPU is not the shining star people make it out to be.

It won't be able to augment the poor GPU in the Wii-U, or the slow CPU in it either.

Think of the Wii-U as a console that makes a cake that looks fantastic on top, with lots of nice icing, but the actual content of the cake is basically fluff. I mean no insult by this, it's just technically how it is - a very poor analogy, I'll admit :p

The Wii-U is in for one hell of a rocky ride once the next gen consoles come out: the ports will be dreadful at first, then they will simply stop coming altogether or be actually different versions of the games that the next gen consoles get. In exactly the same vein as the versions of games the PSP got vs. the Xbox 360/PS3. Of course, this goes with the assumption that the next gen consoles will be a lot more powerful than the Wii-U. But considering the Wii-U is arguably not even as powerful as the Xbox 360/PS3, that's a pretty darn fair assumption to make.

Things don't look good when you actually look at this from a technical perspective and take hope and benefit of the doubt out of the equation; what we know looks bad, plain and simple.

GPGPU = nothing to see here.

Wii-U = going to get some top-notch quality Nintendo 1st party titles and poor ports of next gen console games for a year or so, followed by bad watered-down versions that aren't ports, followed by 3rd parties dropping support.

That's the most likely outlook at the moment based on the information we currently have to hand. I know moderators frown upon slagging off a console with no merit, but I believe I've explained myself here well enough to say what I'm saying with reasoning.

This sounds like an "opinion"; without meaning to sound up my own rear end, it isn't. It's just how it is, more or less. I'm not that technically adept in these areas, but I know a fair bit, and taking this as just one person's opinion will lead you to disappointment.

Ask yourself this: how many times have gamers given a technology the benefit of the doubt, only to be shown that their doubt was well founded? Time and time again people build up this "hope" for new technology that doesn't justify itself and relies on "belief". Technology isn't about belief and hope, it's about numbers that can be counted.

This is one of those technologies. Let it go, move on and forget about GPGPU.

(Except for getting excited about buildings crumbling down into millions of little pieces - if you like that sort of thing, then you're in for a treat. I expect it could be used by Nintendo for some quirky 1st party games, but it will only be gimmicky stuff, not integral gameplay stuff. It won't make stuff look great; it's about geometry and movement/physics, not how a game plays, massive worlds and so forth.)

GPGPU is only a tool for developers to use if they need additional computational power. Cell is based on similar concepts, and it has been shown that developers can use this tool to reach a higher level of in-game graphics experience.

In the gaming realm, computational power is needed when it comes to graphics and physics. For the longest time, people got around computational limitations by using graphics tricks and illusions to give a false sense of rich space. Now, with more powerful GPUs and GPGPU, computational power has caught up to the point that developers can start creating truly rich spaces where we get real geometry collisions and real-time rendered backgrounds and environments.

You're right that, for the most part, GPGPU won't change the gaming scene, as we gamers don't appreciate these physics/graphics improvements enough to make us buy one console/game over another, but that is not a reason to write off this tool. Maybe developers in the future can make creative use of this tool for something beyond just graphics.



Wlakiz said:

GPGPU is only a tool for developers to use if they need additional computational power. Cell is based on similar concepts, and it has been shown that developers can use this tool to reach a higher level of in-game graphics experience.

In the gaming realm, computational power is needed when it comes to graphics and physics. For the longest time, people got around computational limitations by using graphics tricks and illusions to give a false sense of rich space. Now, with more powerful GPUs and GPGPU, computational power has caught up to the point that developers can start creating truly rich spaces where we get real geometry collisions and real-time rendered backgrounds and environments.

You're right that, for the most part, GPGPU won't change the gaming scene, as we gamers don't appreciate these physics/graphics improvements enough to make us buy one console/game over another, but that is not a reason to write off this tool. Maybe developers in the future can make creative use of this tool for something beyond just graphics.

My post was designed to answer posts like yours, and then you go and quote me? ;)

What gives? Your last paragraph is correct and is exactly what I said already as a best-case scenario.

Comparing GPGPU to the Cell is ridiculous; they have nothing in common apart from being hard to program for. On a practical level, admittedly, the modular style of their processing could be called a similarity, but that's just a layman's view. You could say the sun and the moon are the same because they're both ROUND.

 

Obviously, they aren't.



fillet said:

My post was designed to answer posts like yours, and then you go and quote me? ;)

What gives? Your last paragraph is correct and is exactly what I said already as a best-case scenario.

Comparing GPGPU to the Cell is ridiculous; they have nothing in common apart from being hard to program for. On a practical level, admittedly, the modular style of their processing could be called a similarity, but that's just a layman's view. You could say the sun and the moon are the same because they're both ROUND.

 

Obviously, they aren't.

^_-... Do you know why the Cell and GPGPU are difficult to program? It's because they face the same parallel programming challenges. They are architecturally designed in a similar way, in terms of memory streaming and instruction execution. The proper analogy between those two is a planet and a moon: both are bodies of mass that revolve around a greater mass, which is not a ridiculous comparison.



Wlakiz said:

^_-... Do you know why the Cell and GPGPU are difficult to program? It's because they face the same parallel programming challenges. They are architecturally designed in a similar way, in terms of memory streaming and instruction execution. The proper analogy between those two is a planet and a moon: both are bodies of mass that revolve around a greater mass, which is not a ridiculous comparison.

 

http://en.wikipedia.org/wiki/GPGPU#Applications

 

/Thread.

 

That's got nothing to do with the Cell; the Cell is just a normal CPU with an annoying number of threads and some esoteric ways of exploiting its potential processing power. You can't seriously compare 7 threads to 100s of stream processors and say they are both "parallel processing".

There's parallel processing... then there's "massively" parallel processing.

Totally different things; in fact, the only real similarity is that they both are "processing".

For this thread, they have no similarity whatsoever.



fillet said:

http://en.wikipedia.org/wiki/GPGPU#Applications

 

/Thread.

 

That's got nothing to do with the Cell; the Cell is just a normal CPU with an annoying number of threads and some esoteric ways of exploiting its potential processing power. You can't seriously compare 7 threads to 100s of stream processors and say they are both "parallel processing".

There's parallel processing... then there's "massively" parallel processing.

Totally different things; in fact, the only real similarity is that they both are "processing".

For this thread, they have no similarity whatsoever.

Are you serious? How does giving me a link to what GPGPU computation has been exploited for even remotely relate to the Cell/GPGPU comparison?

Are you trying to imply that the CELL can't do FFT/cryptography/video processing? The fact that you said "compare 7 threads to 100s of stream processors" just tells me that you have no idea what you're talking about.

Here's how you use the CELL:

Copy data to SPE local store -> execute threads (FIFO) -> copy data back to main memory

Here's how you use GPGPU:

Copy data to local/texture/shared memory -> execute warps -> copy data from local/texture/shared memory back to main memory

See the similarities? Obviously I left out more complex techniques like interrupts and pipelining on the SPE, but the concept of using the CELL's SPEs and GPGPU to offload computation-intensive operations is essentially the same.
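In CUDA terms, the GPGPU side of that flow looks roughly like this (a toy sketch; the kernel and data are made up, and the comments just map each step back to the Cell's DMA-into-local-store / run-SPU-program / DMA-back pattern described above):

#include <cstdio>
#include <cuda_runtime.h>

// "Execute warps": each thread squares one element. Purely illustrative.
__global__ void square(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = float(i);

    // "Copy data to device memory" (Cell analogue: DMA into an SPE's local store).
    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // "Execute warps" (Cell analogue: run an SPU program over the data).
    square<<<(n + 255) / 256, 256>>>(dev, n);

    // "Copy data back to main memory" (Cell analogue: DMA back out).
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[3] = %f\n", host[3]);  // 9.0
    return 0;
}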



Wlakiz said:

Are you serious? How does giving me a link to what GPGPU computation has been exploited for even remotely relate to the Cell/GPGPU comparison?

Are you trying to imply that the CELL can't do FFT/cryptography/video processing? The fact that you said "compare 7 threads to 100s of stream processors" just tells me that you have no idea what you're talking about.

Here's how you use the CELL:

Copy data to SPE local store -> execute threads (FIFO) -> copy data back to main memory

Here's how you use GPGPU:

Copy data to local/texture/shared memory -> execute warps -> copy data from local/texture/shared memory back to main memory

See the similarities? Obviously I left out more complex techniques like interrupts and pipelining on the SPE, but the concept of using the CELL's SPEs and GPGPU to offload computation-intensive operations is essentially the same.

The point is that each stream processor in a GPGPU is much less powerful, and when you have code that depends on other code having already been processed, it limits the use of the stream processors. I'm not saying the two are different in terms of functionality; I'm saying they're different in terms of their useful application to processing in video games for anything other than physics.

You can't just split the jobs up.

My understanding is clearly not at your level; I don't know the technical terms, but I do know about the technical limitations of stream processors on GPGPUs.

Your explanation works when it's just a few threads where code can be hand-optimized, but what about when you have 100s of stream processors that you have to keep running in parallel to make viable use of their processing power? That can't realistically be hand-optimized given the amount of time it would take, and even if it could be, there are still physical problems in that some calculations need to have been worked out before their results can be used for further processing... I thought that was the whole reason that parallel computing is limited to the extent that it is?

Or am I completely wrong in how I'm looking at it?
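For what it's worth, the dependency problem being described looks roughly like this in code (a toy CUDA sketch with made-up numbers): per-element work with no dependencies spreads across stream processors trivially, while a running total where each step needs the previous result does not.

#include <cstdio>
#include <cuda_runtime.h>

// Independent work: element i never needs element i-1, so hundreds of
// stream processors can chew on it at once.
__global__ void damage(float* hp, const float* hit, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) hp[i] -= hit[i];
}

int main() {
    const int n = 1 << 10;
    float hp[n], hit[n], total[n];
    for (int i = 0; i < n; ++i) { hp[i] = 100.0f; hit[i] = 1.0f; }

    float *dHp, *dHit;
    cudaMalloc(&dHp, n * sizeof(float));
    cudaMalloc(&dHit, n * sizeof(float));
    cudaMemcpy(dHp, hp, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dHit, hit, n * sizeof(float), cudaMemcpyHostToDevice);
    damage<<<(n + 255) / 256, 256>>>(dHp, dHit, n);
    cudaMemcpy(hp, dHp, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dHp); cudaFree(dHit);

    // Dependent work: each step needs the previous result, so a naive port
    // to the GPU gains nothing; you need a proper parallel-scan algorithm
    // (or you just leave it on the CPU).
    total[0] = hit[0];
    for (int i = 1; i < n; ++i) total[i] = total[i - 1] + hit[i];

    printf("hp[0] = %.1f, running total = %.1f\n", hp[0], total[n - 1]);
    return 0;
}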



Ah, the return of the mighty "GeePeeGeePee U" in threads, I see. ;)

On a serious note, I think any potential paradigm shift (to a degree) will happen only if the NextBox and/or PS4 have a fully HSA-capable APU/GPU. Well, maybe GPU compute context switching is not a must (I think that's scheduled for 2014), but at least a unified CPU/GPU memory address space (which is scheduled for 2013 anyway). I suspect MS might have done what they did with Xenos and pushed AMD to implement full HSA features in their next console before they are released as desktop parts, but that's just my guesstimate.
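To give a feel for what a unified CPU/GPU address space buys you, here's a toy sketch using CUDA's managed memory as a stand-in for the HSA idea (purely illustrative; not how a console SDK would necessarily expose it):

#include <cstdio>
#include <cuda_runtime.h>

__global__ void addOne(int* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 256;
    int* data = nullptr;

    // One allocation visible to both CPU and GPU: no explicit
    // host-to-device / device-to-host copies in the application code.
    cudaMallocManaged(&data, n * sizeof(int));
    for (int i = 0; i < n; ++i) data[i] = i;   // CPU writes...

    addOne<<<(n + 255) / 256, 256>>>(data, n); // ...GPU updates in place...
    cudaDeviceSynchronize();

    printf("data[10] = %d\n", data[10]);       // ...CPU reads the result (11)
    cudaFree(data);
    return 0;
}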



spurgeonryan said:

Will Microsoft still have the best 3rd party versions of games, knowing that Sony and Nintendo together move more consoles globally than Microsoft? Do you think Microsoft will be able to sustain that kind of damage, knowing that they have just a few 1st party titles? Microsoft has put itself in that spot; it was just about time Nintendo and Sony realized that. My theory says they are working together to see what happens with Microsoft. And remember, Bill Gates was just happy that he could have a share of the industry, not because they wanted to deliver a new console experience, which is why they don't have many exclusives.

 

Source: cbarroso09


Embracing GPGPU tech doesn't matter at all. What matters is how much literal power they are going to dedicate to the machine. If they go down a green route, console gaming is going to die due to a complete lack of evolution. If they throw 200W+ at it, then we will finally see some improvements in gaming and games worthy of being labelled as "next-gen".