
What do you think will happen if Sony embraces a GPGPU architecture like Nintendo for next gen?

If it works and seems popular with gamers, then both Microsoft and Sony will copy the idea... after all, look what happened with the Wii. It's really kind of sad, but that's how it will be done...



Nintendo Wii by generations...

1. Wii

2. Wii U

3. Wii O U

Predictions made by gamers concerning the current Nintendo lineup of games:

Pikmin 3 = little bump to nothing. (Got a little bump.)

Wind Waker HD = won't sell anything. (The explosion happened here, and at one time 4 Wii U games were in the Amazon top 100.)

Super Mario 3D World = won't help at all, looks cheap. (Currently the most sought-after Wii U game, and it's continuing the Wii U's sales increase.)


GPGPUs are the future of GPUs, so all next-gen consoles will use them... or at least I think that's the future.



    R.I.P Mr Iwata :'(

The world would blow up



Roma said:
GPGPUs are the future of GPUs, so all next-gen consoles will use them... or at least I think that's the future.

nah, not the future, it's the present

MS and Sony simply can't pick up an AMD GPU that doesn't feature GPGPU capabilities at this point



At first I was confused by the thread title and first post, but now... Sony and MS will build powerhouses, with GPUs capable of GPGPU (btw, read this: http://libra.msra.cn/Publication/4737596/linear-genetic-programming-gpgpu-on-microsoft-s-xbox-360), so... what was your point?



spurgeonryan said:
So what is a GPGPU?

The ability to run non-graphics code on a GPU. It's been around for over five years on the desktop, all current graphics cards have it, and it has made no great impact on gaming. It isn't necessarily more power- or cost-efficient, and getting code to run well on a GPU is very, very hard because of low memory bandwidth/sharing and the need for a huge number of threads. It's much worse than adjusting to Cell, and unlike with Cell there's little reason to do so.
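
To give a concrete (if toy) idea of what "non-graphics code on a GPU" means, here is roughly what the standard hello-world of GPU computing looks like in desktop CUDA; nothing console-specific, and the sizes are arbitrary. Each of the ~1 million threads updates one array element, and that thread count is the point: a GPU only pays off when you can keep that many independent threads in flight.

    // Toy sketch, desktop CUDA: the classic SAXPY kernel (y = a*x + y).
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread handles exactly one array element.
    __global__ void saxpy(int n, float a, const float* x, float* y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;                      // ~1 million elements
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));   // memory visible to CPU and GPU
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Launch ~1 million threads in blocks of 256.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);                // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }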

I don't expect it to ever be essential to a next-gen video game on any platform. The most I expect is some visual-effect-only debris physics.



Gamegears said:
If it works and seems popular with gamers, then both Microsoft and Sony will copy the idea... after all, look what happened with the Wii. It's really kind of sad, but that's how it will be done...

I really doubt that Sony and Microsoft scrapped all their plans and millions spent on development to quickly emulate the Wii U--even if it were possible to do that with games already in development.  If they end up going in this direction, which they almost certainly will, then it's simply because the industry as a whole is moving that way.



Hi guys, sorry to be late to the thread, but I was doing some research before trying to weigh in on this topic. OK. First, user "walsufnir" linked this:

Linear genetic programming GPGPU on Microsoft’s Xbox 360

ABSTRACT

"We describe how to harness the graphics processing abilities of a consumer video game console (Xbox 360) for general programming on graphics processing unit (GPGPU) purposes. In particular, we implement a linear GP (LGP) system to solve classification and regression problems. We conduct inter- and intra-platform benchmarking of the Xbox 360 and PC, using GPU and CPU implementations on both architectures. Platform benchmarking confirms highly integrated CPU and GPU programming flexibility of the Xbox 360, having the potential to alleviate typical GPGPU decisions of allocating particular functionalities to CPU or GPU."

What I understand from this (I can't read the entire article because I'd have to pay for it) is that the Xbox 360 has GPGPU capabilities. That is correct: any GPU has the ability to take some processing off the CPU. But that does not mean developers were actually using it. The authors emphasize the system's flexibility for GPGPU, not its actual utilization. Given that the Xbox 360's CPU runs at a high clock frequency, there was no real need for developers to use GPGPU programming; it would have been a waste of effort. Also, this article was released in 2008, and GPGPU programming wasn't fully established until around 2006, roughly the same period as the 360's release, so game developers were not using it at the time, which is why the console got a 3.2 GHz processor in the first place.

Now, the Wii U has a GPGPU-oriented GPU and an out-of-order-execution CPU, which makes the programming model quite different from the Xbox 360's. To back that up, I have copied several important points about GPGPU programming from (Sean Baxter, 2011, http://www.moderngpu.com/intro/intro.html), so you can draw your own conclusions:

 

 

THE PROBLEM OF GPGPUs, OR SHOULD I SAY THE PROBLEM OF DIRECT3D 11 WITH GPGPUs

"In retrospect, D3D 11 was a wretched toolkit for developing kernels. I like the host-side API, and how it fits in with the rasterization functions of the device. I even like the the basic workflow of the thing, using fxc to generate IL from HLSL files, just like how you generate pixel and vertex shaders. Compute shaders have full access to all texture sampler capabilities, something no other API offers.

So what exactly is wrong with D3D for GPU computing? It's the appalling quality of the shader compiler. It crashes, or it takes tens of minutes to return, or it generates bogus code.

Worse still, Microsoft has adopted a "you can't handle the truth" attitude towards developers concerning hardware features. Direct3D does not report warp size to the developer, and HLSL does not support volatile modifiers for groupshared memory. This is not an accidental omission; it is a design choice to insulate programmers from platform differences. While this may make sense for graphics (and I think that in 2011, that position needs revisiting), it is a pointless attitude for GPU computing. This is not an example of failing to coax a tool into doing something it wasn't designed to do: DirectCompute's entire purpose is to enable high-performance GPU computing inside Direct3D, and at that it is a failure. Microsoft has not bothered fixing many disastrous compiler bugs; the most recent DX SDK (June 2010) is more than a year old, and I recall the compiler not even being updated since the release of the preceding February.

 

This summer, Microsoft announced a Visual Studio compiler extension for GPU computing called C++ AMP. This is an embedded domain-specific language. It's basically a clone of CUDA's driver API, except it generates code for a Direct3D back-end rather than PTX. AMP does not enable the developer to do anything he couldn't do before. It will make easy things easier and hard things impossible. Rather than fixing their underlying tech and trying to push GPU computing to places it hasn't been yet, MS is repackaging a failed technology and obscuring already dark waters by introducing yet another API.

I do not expect to see wide adoption of either DirectCompute flavor. Microsoft doesn't contribute to the OpenCL architectural review board, and there is little appetite among programmers to see vendor lock-in for GPU computing when there are already superior cross-platform options." (Sean Baxter, 2011, http://www.moderngpu.com/intro/intro.html)

 "To reduce control hardware to a minimum while hitting near 100% execution efficiency, GPUs employ two design features: SIMD execution and latency hiding. These are a disruptive technology in the history of computing. Textbook algorithms that would compile on any platform from the past three decades will not run, or will run very inefficiently, on GPU hardware. Hardware vendors promote the performance of GPUs, which is in fact astonishing, but soft peddle the architectural differences, so as not to scare off IT managers. The truth is that GPUs are a different beast. You can toss out your algorithms books; they are of no use here. By keeping SIMD execution and latency hiding in mind, however, you will discover GPGPU idioms that allow the development of traditional CS functions (such as sorts, covered here in depth), using very different algorithms and workflow."

The true cost of memory operations is not the number of bytes transferred, but the number of transactions serialized through the memory controllers. 
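
Again a toy CUDA sketch of my own to show what "transactions, not bytes" means in practice: both kernels below move the same number of bytes per thread, but in the first one consecutive threads in a warp touch consecutive addresses, so the warp's loads coalesce into a handful of memory transactions, while in the second one the same warp's loads are scattered by a stride and the memory controller has to serialize many more transactions for the same data:

    // Coalesced copy: thread i reads and writes element i.
    __global__ void copy_coalesced(const float* in, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i];                 // consecutive threads -> consecutive addresses
    }

    // Strided gather: thread i reads an element far away from its neighbours' elements.
    // (stride is assumed small enough that i * stride does not overflow an int.)
    __global__ void gather_strided(const float* in, float* out, int n, int stride)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[(i * stride) % n];  // consecutive threads -> scattered addresses
    }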

"The flagship NVIDIA card, the GTX 580, has 16 SMs and runs with a shader clock speed of 1544 MHz. Each SM has 32 ALUs that can retire a fused multiply-and-add (that's two ops) per cycle. The product of these factors (1,544,000,000 x 16 x 32 x 2) is a staggering 1581.1 GFlops. ATI currently manufactures devices with arrays of VLIW vector processors per core, for chips with an even higher density of ALUs. The flagship ATI GPU, the Radeon 6970, has 24 SMs, each with 16 4-way VLIW4 ALUs. It comes clocked at 880MHz. Executing fused multiply-and-adds (2 ops) gives a staggering theoretical arithmetic throughput of 2703.3 GFlops (880,000,000 x 24 x 16 x 4 x 2)." NVIDIA CARD FREQUENCY DOUBLES THE RADEON CARD, BUT IT IS ACTUALLY THE RADEON CARD THE ONE THAT OUTPERFORM .

 

There you have it, not from me, but from GPGPU programmers. What we understand from the current consoles no longer applies to the next-gen consoles (read the paragraph above regarding GPU clock speeds). This means Nintendo is in the same spot the PS3 was in this generation: no game will look superior on the Wii U yet, simply because the code isn't being written the right way for it.

Now, if Sony adopts GPGPU and Microsoft doesn't, what will happen? Will developers support the Xbox 720 more than the Wii U and PS4, when those two together move more consoles and software than Microsoft alone? Remember that Microsoft relies on third parties; it got into that position because developers were using the most efficient and easiest code to program at the time.

 

 

 



"This means Nintendo is in the same spot as PS3 this generation. No game can look superior on the Wii U beause simply the utilization of the codes are not the correct. "

I'm confused by this statement. "Same spot as the PS3, no game will look superior on the Wii U"? Last I checked, the PS3 has some of the most amazing-looking games this gen.



GPGPU is like getting the performance out of the Cell, but 10x harder. I doubt that developers will really bother with the Wii U.



Tease.