
Wii U's eDRAM stronger than given credit?

tanok said:
jake_the_fake1 said:
fatslob-:O said:
jake_the_fake1 said:

Exactly right.

Now that we are in agreement, and since you haven't rebutted my comments on the WiiU GPU being piss weak, I can assume that we are in agreement there too.

We can finally move forward and look at what would be more reasonable: would it be more reasonable for Nintendo to pair a piss-weak GPU with ultra-high-bandwidth eDRAM (@500GB/s) despite the GPU being incapable of ever using its full potential, or would it be more reasonable for Nintendo to put in eDRAM with moderate bandwidth (@60-80GB/s), knowing that the GPU could use its potential and still have a little headroom?

Keep in mind, the first option is inefficient and costly while the second option is both efficient and cost-effective. Which of the two viable options sounds like what Nintendo has done and would do?

---

In regards to tessellation;

Tessellation requires a capable GPU and not just bandwidth; you know as well as I do that just having bandwidth does nothing, it's the GPU processors that do the work. Since tessellation is GPU processor heavy it really requires a powerful GPU to actually make tessellation both useful and practical for real time rendering.

So let's put this into perspective: the Titan Black only has 336GB/s of raw bandwidth compared to your WiiU eDRAM cache at 500GB/s, but the Titan still obliterates the WiiU GPU in tessellation. Why? Because the Titan Black simply has more shader processors, nearly 3,000, to throw at this resource-heavy task, again showing that processing capability matters more to tessellation than pure bandwidth.
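To put rough numbers on that gap (using the commonly quoted specs; the Wii U figures are only estimates, nothing official):

# back-of-the-envelope FP32 throughput: ALUs x 2 ops per clock (multiply-add) x clock speed
titan_black_flops = 2880 * 2 * 889e6   # ~5.1 TFLOPS at the Titan Black's base clock
wiiu_latte_flops  = 160  * 2 * 550e6   # ~176 GFLOPS, if the common 160-ALU estimate for Latte is right
print(titan_black_flops / wiiu_latte_flops)   # roughly a 29x gap in shader throughput, vs a ~1.5x gap in the claimed bandwidth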

Do you know how tessellation works?

Well enough; don't get me wrong, I'm no expert, but then again I'm not claiming to be. So if you feel I misspoke or have info you feel I should know, then hit me up, cuz I'm all about learning :)

 

sorry dude, but what megafenix says is right, the wii u can use tessellation, the gpu is more than capable of using it, just not at 1080p, that's for sure

here

http://hdwarriors.com/shinen-on-the-practical-use-of-adaptive-tessellation-upcoming-games/

 

Exactly right.

Just to be clear, I did not say that the WiiU couldn't do tessellation, rather that it's limited because of the piss-weak GPU; it's for this reason I said "Since tessellation is GPU processor heavy it really requires a powerful GPU to actually make tessellation both useful and practical for real time rendering." Key words being 'practical' and 'real time'.

The point of tessellation is that you remove the main RAM and bandwidth overhead caused by keeping multiple levels of detail of a mesh in RAM; instead you have one mesh which, through tessellation and displacement mapping, yields the same result if not better. The trade-off is that you free up RAM and bandwidth but now take a hit on GPU resources. More often than not on the WiiU, it would be better to use the limited GPU resources for better frame rates, better anti-aliasing, or more objects on screen, rather than spend that limited resource on tessellation where the end result could be worse. There will of course be times when tessellation is used; to what degree will depend on the game and the developer's goals, but it won't be for the majority of titles, since again it comes down to the practicality of tessellation in real time.
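As a toy illustration of where that trade-off sits (made-up mesh sizes, purely to show the shape of it):

# storing a pre-built LOD chain vs. one coarse mesh plus a displacement map
verts_per_lod  = [100_000, 25_000, 6_000]    # three LOD meshes kept resident in RAM
bytes_per_vert = 32                          # position + normal + UV, packed
lod_chain_bytes   = sum(verts_per_lod) * bytes_per_vert          # ~4 MB resident
tessellated_bytes = 6_000 * bytes_per_vert + 1024 * 1024 * 1     # coarse mesh + 8-bit 1K displacement map, ~1.2 MB
# the saved memory and bandwidth is paid back in GPU time, because the tessellator
# and shaders regenerate the extra detail every frame instead of just fetching it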

My point was to illustrate to megafenix that bandwidth alone cannot do anything to improve graphics (he asserted that tons of bandwidth gives you tessellation); rather, the GPU processors are the ones that do the heavy lifting while bandwidth keeps them fed. He himself acknowledged this: "bandwidth is not going to give you more power..." http://gamrconnect.vgchartz.com/post.php?id=6101413



jake_the_fake1 said:


just to be clear, it's not just shin'en's comments that gpus nowadays have less overhead when doing tessellation than previous gpus; we also have a game that was using tessellation with displacement in real time

 

here

shadow of the eternals on wii u and pc, minute 7:50; listen to what they say about the graphics from there until the end

https://www.youtube.com/watch?v=QlREuZz7MwE

 

check it out

i really don't see a limited gpu if it can produce those graphics with tessellation + displacement, and that's just the beginning, because it was using the old cryengine 3 which hasn't been optimized much for wii u, but over time it will get better

 

the wii u gpu should be like 400 to 500 gigaflops; that doesn't sound like much, but it's not just a matter of power, efficiency counts, and tessellators have been improving since the hd2000 series; hell, even the hd5000 and 6000 gpus have a better tessellator engine than the hd4000 even though the architecture of the simd cores remains almost the same

 

the wii u gpu is capable of using tessellation, and nintendo was interested in this technique since before the launch of the wii; the proof of that is those patents on displacement mapping and tessellation, so surely they wouldn't miss the chance on wii u. sorry, but that's a fact; it may not be as power hungry as xbox one and ps4, but it has enough features to produce good graphics with tessellation + displacement. what i won't argue is whether it's capable of doing it at 1080p, because i doubt it, and even if it could it would be at a low and very unstable framerate, but at 720p and between 30 and 60fps it's perfectly capable

 

how can you say bandwidth does nothing?

so the ps4 would be able to do the same things with bandwidth like the xbox one's 56GB/s, but without having esram?

seriously, you need to read more about gpus

i recommend things like this

http://www.tomshardware.com/reviews/radeon-hd-4850,1957-5.html

and better this

http://www.realworldtech.com/gpu-memory-bandwidth/

"

In some cases, the GPU with the lower GFLOP/s actually delivers the best performance – which is totally counter-intuitive. One pair of points that perfectly illustrates counter-intuitive behavior is the first two AMD GPUs. The shader arrays provide 432 and 422 GFLOP/s respectively, but the first card only scores 2552 on 3DMark, while the latter scores a significantly higher 3463. One card has ~2% less shader compute, but 36% higher performance. This behavior is hardly isolated to AMD cards either. Three Nvidia GPUs have 192 GFLOP/s throughput in their shader arrays. Two of these cards score 3700 and 3374, while the third is a disappointing 2527. Despite having the same theoretical throughput, one of the cards is 46% faster than another.

What could be responsible for these mysterious and seemingly contradictory results? Looking at the basic architecture of a GPU like AMD’s Cayman , the shader array is just one part of the design. Admittedly it is perhaps the most important, but modern GPUs contain a variety of other hardware including fixed functions like the triangle setup engine, texture caches and sampling units, raster output pipelines (ROPs) and the memory controllers, while also relying on the driver software. Of these different areas, the one that is most critical to performance is the memory controllers and physical interfaces to DRAM. 3D graphics is an incredibly bandwidth hungry workload – to the point that high-end GPUs use bandwidth optimized GDDR5 DRAM rather than the less expensive DDR3 used for system memory. Note that in modern GPUs, each memory interface typically has its own ROPs – so to some extent, memory bandwidth will also take into account some fixed functions as well.

So our initial guess is that when two similar GPUs have substantially different performance, the real cause is the memory interfaces and available bandwidth. This seems eminently reasonable, especially since most CPU performance models also recognize the critical importance of memory in determining the behavior of a workload.

"

 

we already know tessellation has a cost; amd already said that a 4870 needs to trade off 30% performance, or 33fps, for 400x more polygons, but since the wii u gpu is custom we don't really know how much the efficiency has been improved. maybe the trade-off is 20% now (there are parts of the gpu that have not been identified, and even those which have been identified leave some doubt, because they may be similar but not exactly the same), who knows, but one thing is for sure: nintendo was interested in this technique long ago and surely they wouldn't put in that much edram bandwidth for nothing. even a 1080p framebuffer only takes up 16MB out of the 32MB (plus the other 3MB), so surely you can also fit textures, vertex texture fetch data, the z-buffer and other stuff, and using 720p only takes up 7MB, which leaves both a lot of edram memory and bandwidth for other stuff and greatly reduces the power required to render
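for what it's worth, those 16MB and 7MB figures do check out if you assume a 32-bit colour buffer plus a 32-bit depth/stencil buffer and no MSAA:

# 4 bytes of colour + 4 bytes of depth/stencil per pixel
def framebuffer_mb(width, height, bytes_per_pixel=8):
    return width * height * bytes_per_pixel / (1024 * 1024)

print(framebuffer_mb(1920, 1080))   # ~15.8 MB -> the "16MB out of 32MB" figure
print(framebuffer_mb(1280, 720))    # ~7.0 MB  -> the "7MB" figure at 720p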

 

 




tanok said:

 

 



To answer your question, OP: the only reason people thought the Wii U eDRAM was weak was because launch ports didn't match 360/PS3 games, so they thought the RAM setup wasn't feeding the GPU properly; at the time many Nintendo fans thought it was around an 800 GFLOPS GPU. It turns out the Wii U GPU has fewer flops than the 360/PS3 but is more efficient, so the RAM setup is actually perfect for the Wii U and is more than enough to feed the GPU.



starworld said:

 

not exactly, the wii u has more flops than the 360 and ps3, about 400 to 500 gigaflops; that's why quick ports work on wii u. and when i say quick i also mean that the wii u's efficiency is not being used, because the ports force the wii u hardware into doing things in a style it wasn't made for and it still works; if it were weaker, those ports wouldn't even work under those circumstances, and they do, so just by logic it's obvious that the wii u packs more power. that has already been said by many, including shin'en multimedia, who have said that the wii u both packs more power than the previous gen and is also more advanced and has more bandwidth

 

the 176 gigaflops figure is just trolling; it's impossible to get those quick ports to even work with that. if it was a ground-up game utilizing the wii u's features, using shader model 4 to 5 instead of directx9 parameters, and also using something like compute shaders to compensate for the cpu being weaker than the 360's in gigaflops, then yeah, probably it could (many developers have already admitted they haven't used compute shaders even though the system is capable of it), but since they haven't and the games still work, then it's obvious the system has to be more powerful

 

just take for example the ps3 and xbox 360

which is more powerful in flops?

ps3

which version of bayonetta is the best?

 

why?

because the ps3 version was a port

but what if, for example, we made a port of the last of us for the 360

and when i say a port i mean a very quick and inefficient port like those on wii u; would it even run on the 360?

the wii u is about 2x more powerful in gigaflops than the xbox 360, which isn't much, but it's enough for what it was intended for
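for reference, here is what "about 2x the 360 in gigaflops" works out to, using the commonly cited 240 GFLOPS figure for the 360's Xenos and the reported 550MHz wii u gpu clock:

xenos_360_gflops = 240                                # commonly cited figure for the 360's GPU
claimed_wiiu     = 2 * xenos_360_gflops               # "about 2x" -> ~480 GFLOPS
alus_needed      = claimed_wiiu * 1e9 / (2 * 550e6)   # ~436 ALUs at 550MHz, 2 ops per ALU per clock
print(claimed_wiiu, alus_needed)                      # far more ALUs than either die-shot estimate (160 or 320)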



tanok said:

what's your source?

here is the direct quote from the NFSWU/Project CARS dev Martin Griffiths:

Not sure where they got that info from but the WiiU GPU has 192 Shader units, not 160. It also has 32MB of EDRAM, (the same amount as Xbox One) so comparing just the number of shader units against a PC card doesn't give a representative performance comparison. On the CPU side, WiiU also supports multi-threaded rendering that scales perfectly with the number of cores you throw at it, unlike PC DX11 deferred contexts which don't scale very well. The current WiiU build runs around 18-25fps with 5AI with all post (FXAA/Motion blur etc) enabled, which is fairly good given only the fairly cursory optimisation pass that it's had.
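Assuming the widely reported 550MHz GPU clock, here is what the 160 vs 192 shader-unit figures would mean in raw throughput (estimates, not official numbers):

for alus in (160, 192):
    gflops = alus * 2 * 550e6 / 1e9               # 2 ops per ALU per clock (multiply-add)
    print(alus, "ALUs ->", gflops, "GFLOPS")      # -> 176.0 and 211.2 GFLOPS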



hated_individual said:
binary solo said:
drkohler said:
Oh look, it's MisterXMedia all over again. Didn't know there was such a moron in the WiiU camp, too..
And yes, 550MHz*1024bit is 560 gigaBITS/s, NOT gigaBYTES/s.... (you forgot to account for the fact that the Wii U eDRAM in the GPU is not a single macro, it is eight macros; your calculation/math is correct for a single macro, but not for all eight of them).

So actually 70.4GB/s then. (per macro)

drkohler's calculation is for a single macro, not all eight macros combined; he forgot to account in his own calculation for the fact that the Wii U has eight macros, not one.

A proper calculation would have been: 550MHz times 1024 bits times 8 macros equals 563.2 GB/s.

Good Lord... I actually spent (wasted..) almost an hour googling around on the internet to see where these numbers come from. I found one troll on a forum who got to an 8192-bit wide bus and this mystical 560GB/s number. Unfortunately this troll was posting on a technology forum, so he was immediately ridiculed into silence.

Apparently "macro" is the new buzzword now that magically creates big numbers. I have no idea what the expression "macro" means here,  so would any of you "macropeople" answer the following questions:

1. Do you know how much die space and how much power the memory controllers would require to drive an 8192-bit bus, assuming 32 (!) proprietary 256-bit controllers or 128 (!!) conventional 64-bit controllers at a 40 (45?)nm process node? If you got the same number as me, didn't it raise a flag in your brain just by looking at how large the entire GPU die actually is?

2. Where on the die shot are all those memory controllers? How many metal layers would you need to connect all of this (hint: the Jaguar CPU only needs 11, and it has a measly 256-bit bus)?

3. Why on earth would Microsoft go with a measly 109GB/s bus in the XBox One apu managed cache, if they could easily achieve twenty times the bandwidth with this magic WiiU technology?

4. Why would Nintendo engineers create a cache system that is way, way, way, way, way, too fast for the cpu/gpu system?

So after we have correctly pondered all those minor (lol) questions, the result can be summarised as follows: 550MHz * 128 bytes per clock ≈ 70.4GB/s is the WiiU eDRAM bandwidth.
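For the record, both numbers in this thread fall out of the same arithmetic; the entire disagreement is whether the 1024-bit figure is the total interface width or the width per macro, which the die shot alone does not settle:

clock_hz = 550e6
bus_bits = 1024
per_1024_bit_port = clock_hz * bus_bits / 8 / 1e9   # 70.4 GB/s if 1024 bits is the whole eDRAM interface
eight_ports       = per_1024_bit_port * 8           # 563.2 GB/s only if each of the 8 macros has its own independent 1024-bit port
print(per_1024_bit_port, eight_ports)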



drkohler said:

Thank you for putting pretty much everything in a single and clear post. 

But do you really expect this irrefutable demonstration of common sense and logic to stop this guy from spinning further, at this point?



starworld said:
tanok said:
starworld said:
tanok said:
jake_the_fake1 said:
tanok said:
jake_the_fake1 said:
fatslob-:O said:
jake_the_fake1 said:

Exactly right.

Now that we are in agreement, and since you haven't rebutted by comments on the WiiU GPU being piss weak, I can assume that we are also in agreement there too.

We can finally move forward and look at what would be more reasonable, would it be more reasonable for Nintendo to pair a piss weak GPU with Ultra high bandwidth EDram (@500GB/s) despite the GPU being incapable of ever using it's full potential, or would it be more reasonable for Nintendo to put in an EDram with moderate bandwidth (@60-80GB/s) knowing that the GPU could use it's potential and still have a little headroom?

Keep in mind, the first option is not efficient and is costly while the second option is both efficient and cost effective. Which of the two viable option sounds like what Nintendo has done and would do?

---

In regards to tessellation;

Tessellation requires a capable GPU and not just bandwidth, you know as well as I do that just having bandwidth does nothing, it's the GPU processors that do the work.  Since tessellation is GPU processor heavy it really requires a powerful GPU to actually make tessellation both useful and practical for real time rendering.

So lets put this into perspective, the Titan Black only has 336GB/s of raw bandwidth compared to your WiiU EDram cache of 500GB/s but the Titan still obliterates the WiiU GPU in tessellation. Why, because the Titan black simply has more power processors, nearing 3000, to do this resource heavy task, again showing that processing capabilities are more important to tessellation than pure bandwidth.

Do you know how tessellation works ? 

Well enough, don't get me wrong I'm no expert but then again i'm not claiming to be. So if you feel I misspoke or have info you feel I should know, then hit me up cuz I'm all about learning :)

 

sorry dude, but what megafenix says its right, wii u can use tessleation, the gpu is more than capable to use it, just not at 1080p thats for sure

here

http://hdwarriors.com/shinen-on-the-practical-use-of-adaptive-tessellation-upcoming-games/

 

Exactly right.

Just to be clear I did not say that the WiiU couldn't do tessellation rather that it's limited because of the piss weak GPU, it's for this reason I said "Since tessellation is GPU processor heavy it really requires a powerful GPU to actually make tessellation both useful and practical for real time rendering." Key words being 'Practical', and 'Real time'.

The point of tessellation is that you remove the main ram and bandwidth overhead, which is caused by having mutiple level of detailed meshes in ram, and instead you have one mesh which through tessellation and map displacement  yield the same result if not better. The trade off is your free ram and bandwidth but now take a hit on GPU resources....Most often then not on the WiiU, it would be better to use the limited GPU resources to have better frame rates, better Anti-aliasing, or more objects on screen, rather than spend that limited resource on tessellation where the end result could be worse . There of course will be times where tessellation would be used, to the degree it's used will depend on the game and the developer goals, but it won't be for the majority of tittles since again it comes down to practically of tessellation in real time.

My point was used to illustrate to megafenix that bandwidth alone can not do anything to improve graphics (he asserted that tons of bandwidth gives you tessellation), rather the GPU processors are the ones that do the heavy lifting while bandwidth keeps them working, he himself acknowledged this "bandwidth is not going to gie you more power..." http://gamrconnect.vgchartz.com/post.php?id=6101413


just to be clear, its not just shinen comments that gpus now days have less overhead on doing tesselation than previous gpus, but also we have a game that was using tesselation with dispalcements in real time

 

here

shadow of the eternals wii u and pc minute 7:50, listen to what they say about the graphics here until the end

https://www.youtube.com/watch?v=QlREuZz7MwE

 

check iy out

i really dont see a limited gpu if it can produce those graphics with tesselation+displacements, and thats just the beginning cause was using the old cryengine 3 which hasnt been optimized to much for wii u, but over the time will get better

 

wii u gpu should be like 400 to 500 gigaflops, that doesnt sound as much but is not just a matter of power, efficiency counts and tesselators have been improving since the hd2000, hell, even the hd5000 and 6000 gpus have a better tessealtor engine than the hd4000 although the architecture of the simd cores remains almost the same

 

wii u gpu is capable of using tesselation, and nintendo was interested in this techniqe since before the launch of wii, and the prove of that are those patents on displacement mapping and tesselation, surely they wouldnt miss the chance on wii u. Sorry but that a fact, may not be as power hungry as xbox one and ps4 but has enough feats in order to produce good graphics with tesselation+displacements. In what i wont argue is if is capable of doing it at 1080p cause i doubt it and even if it could would be at low and very unstable framerate, but at 720p and between 30 to 60fps is perfectly capable

 

how can you say bandwidth does nothing

so the ps4 would be able to do the same things with a bandwidth like the xbox one of 56GB/s but without having esram?

seriosuly you need to read more about gpus

i recommend things like this

http://www.tomshardware.com/reviews/radeon-hd-4850,1957-5.html

and better this

http://www.realworldtech.com/gpu-memory-bandwidth/

"

In some cases, the GPU with the lower GFLOP/s actually delivers the best performance – which is totally counter-intuitive. One pair of points that perfectly illustrates counter-intuitive behavior is the first two AMD GPUs. The shader arrays provide 432 and 422 GFLOP/s respectively, but the first card only scores 2552 on 3DMark, while the latter scores a significantly higher 3463. One card has ~2% less shader compute, but 36% higher performance. This behavior is hardly isolated to AMD cards either. Three Nvidia GPUs have 192 GFLOP/s throughput in their shader arrays. Two of these cards score 3700 and 3374, while the third is a disappointing 2527. Despite having the same theoretical throughput, one of the cards is 46% faster than another.

What could be responsible for these mysterious and seemingly contradictory results? Looking at the basic architecture of a GPU like AMD’s Cayman , the shader array is just one part of the design. Admittedly it is perhaps the most important, but modern GPUs contain a variety of other hardware including fixed functions like the triangle setup engine, texture caches and sampling units, raster output pipelines (ROPs) and the memory controllers, while also relying on the driver software. Of these different areas, the one that is most critical to performance is the memory controllers and physical interfaces to DRAM. 3D graphics is an incredibly bandwidth hungry workload – to the point that high-end GPUs use bandwidth optimized GDDR5 DRAM rather than the less expensive DDR3 used for system memory. Note that in modern GPUs, each memory interface typically has its own ROPs – so to some extent, memory bandwidth will also take into account some fixed functions as well.

So our initial guess is that when two similar GPUs have substantially different performance, the real cause is the memory interfaces and available bandwidth. This seems eminently reasonable, especially since most CPU performance models also recognize the critical important of memory in determining the behavior of a workload.

"

 

we already know tesselation has a cost, already amd said that a 4870 needs to trade off 30% performance or 33fps for 400x more polygons, but since the wii u gpu is custom we dont really know how much the efficiency has been improved, maybe the tade off is 20% now, who knows but one thing is for sure, nintendo was interested in this technique long ago and surely they wouldnt put that much edram bandwidth for nothing, even 1080p framebuffer just takes up 16MB out of the 32MB+the other 3MB, so surely you can fit textures and vertex texture fetch data,z buffer and other stuff, and using 720p only would take up 7MB which leves both a lot edram memory bndwidth for other stuff and aliviated the power rquired to render by a lot

 

 



To answer your question, OP: the only reason people thought the Wii U eDRAM was weak was because launch ports didn't match 360/PS3 games, so they assumed the RAM setup wasn't feeding the GPU properly. At the time many Nintendo fans thought it was around an 800 GFLOPS GPU; it turns out the Wii U GPU has fewer flops than the 360/PS3 but is more efficient, so the RAM setup is actually fine for the Wii U and is more than enough to feed the GPU.

 

Not exactly; the Wii U has more flops than the 360 and PS3, about 400 to 500 gigaflops. That's why quick ports work on the Wii U, and when I say quick I also mean that the Wii U's efficiency is not being used, because the ports force the Wii U hardware into doing things the way it wasn't designed for, and it still works. Ports wouldn't even run under those circumstances on weaker hardware, and they do, so just by logic it's obvious the Wii U packs more power. That has already been said by many, including Shin'en Multimedia, who have said the Wii U both packs more power than the previous generation and is more advanced, with more bandwidth.

 

The 176 GFLOPS figure is just trolling; it would be impossible to get these quick ports to even work with that. If it were a ground-up game using the Wii U's features, using shader model 4 to 5 instead of DirectX 9 parameters, and also using something like compute shaders to compensate for the CPU (which is weaker than the 360's in gigaflops), then yeah, it probably could. Many developers have already admitted they haven't used compute shaders even though the system is capable of it, but since they haven't and the games still work, it's obvious the system has to be more powerful.
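For context, the 176 figure comes from a simple formula: shader units × clock × 2 FLOPs per cycle (one multiply-add per unit per cycle). A quick sketch, assuming the commonly reported 550MHz GPU clock; both the clock and the unit counts are the numbers usually thrown around in these debates, not confirmed specs:

# Peak GFLOPS = shader units * clock (MHz) * 2 FLOPs per cycle / 1000.
def peak_gflops(shader_units, clock_mhz=550):
    return shader_units * clock_mhz * 2 / 1000

print(peak_gflops(160))   # 176.0 GFLOPS with the 160 units usually assumed
print(peak_gflops(192))   # 211.2 GFLOPS if the GPU actually has 192 units

So the whole dispute really comes down to how many shader units the chip has and how efficiently they can be kept busy.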

 

Just take, for example, the PS3 and Xbox 360.

Which is more powerful in flops?

The PS3.

Which version of Bayonetta is the best?

The 360 version.

Why?

Because the PS3 version was a port.

But what if, for example, we made a port of The Last of Us for the 360?

And when I say a port, I mean a very quick and inefficient port like the ones the Wii U gets. Would it even run on the 360?

The Wii U is about 2x more powerful in gigaflops than the Xbox 360, which isn't much, but it's enough for what it was intended for.

What's your source?

Here is the direct quote from the NFSWU/Project CARS dev Martin Griffiths:

Not sure where they got that info from but the WiiU GPU has 192 Shader units, not 160. It also has 32MB of EDRAM, (the same amount as Xbox One) so comparing just the number of shader units against a PC card doesn't give a representative performance comparison. On the CPU side, WiiU also supports multi-threaded rendering that scales perfectly with the number of cores you throw at it, unlike PC DX11 deferred contexts which don't scale very well. The current WiiU build runs around 18-25fps with 5AI with all post (FXAA/Motion blur etc) enabled, which is fairly good given only the fairly cursory optimisation pass that it's had.


I don't get it, that's just a rumor, and I can play that game too, but at least with a more reliable source.

Want a source?

http://www.nowgamer.com/news/1999044/wii_u_is_a_truly_powerful_console_more_powerful_than_360_and_ps3_trine_2_dev.html

"

'Wii U Is A Truly Powerful Console, More Powerful Than 360 And PS3' - Trine 2 Dev

Ryan King


"The Wii U is a very modern console with a lot of RAM, which helped us out a lot during development."

Published on Jul 10, 2013

Wii U 'is truly a powerful console' and 'more powerful than 360 and PS3' says a member of the Trine 2 dev team.

Julius Fondem, Marketing Manager at Frozenbyte, was talking to GamersXtreme about the power of the Wii U following the release of launch title Trine 2: Director's Cut and with upcoming title Splot on the horizon for the console.

"We have really enjoyed working with the Wii U hardware," said Fondem.

   

"It was rather easy to port our modern proprietary engine to it, and it does pack the punch to bring to life some really awesome visuals. The Wii U is a very modern console with a lot of RAM which helped us out a lot during development. The hardware capabilities have improved quite a lot from the original Wii, and the Wii U is a truly powerful console. The console is definitely more powerful than the Xbox 360 and PS3."

"

And I have more up my sleeve.

I don't see your point; that's just a rumor with nothing to support it.

What's not a rumor is that Wii U ports are lazy and don't use the Wii U's new capabilities; they force the system into the same tricks used on the 360, even though the 360 has the more powerful CPU in raw terms, and developers don't use the GPU's new features like compute shaders to compensate, yet the games still run almost identically to their HD-console counterparts. To match the other HD consoles with just 176 gigaflops you would need a ground-up game; with ports as lazy as the ones we have seen, no, impossible.

 

As I asked you:

Would The Last of Us on PS3 work on the Xbox 360 if it was a quick port?



Egann said:
Squeezol said:
Egann said:

That said, I'm still not convinced any amount of difference will amount to a hill of beans, anyway. Graphics weren't the reason last generation ended: memory limits were. Developers just couldn't make big and pretty maps with only 512 MB of RAM. The Wii U has 1.5 GB of RAM, which is not 8 GB, but it's enough.


The Wii U has 2GB of RAM. 1GB is used for the system and 1GB is available for games.


I was under the impression it was 1.5 GB for games, but it really doesn't matter. 

Well, it matters quite a lot. That's about 512MB less RAM for games, so yeah, it kind of does.





OK, so he says the console is more powerful, but he doesn't go into detail at all. Even I can clearly see that overall the Wii U is a more powerful console, because it has double the RAM and far more eDRAM than the 360 (32MB vs 10MB); even if the GPU is in the same ballpark as the 360/PS3, you can clearly see it's still a more powerful console. Give me a source that supports your claim about the Wii U GPU having 400-500 GFLOPS. My source is not a rumor, it's a direct quote from a Wii U developer.