jake_the_fake1 said:
tanok said:
jake_the_fake1 said:
fatslob-:O said:
jake_the_fake1 said:
Exactly right.
Now that we are in agreement, and since you haven't rebutted my comments on the WiiU GPU being piss weak, I can assume that we are in agreement there too.
We can finally move forward and look at what would be more reasonable: would it be more reasonable for Nintendo to pair a piss-weak GPU with ultra-high-bandwidth eDRAM (@500GB/s) despite the GPU being incapable of ever using its full potential, or for Nintendo to put in eDRAM with moderate bandwidth (@60-80GB/s), knowing that the GPU could use its full potential and still have a little headroom?
Keep in mind, the first option is inefficient and costly while the second option is both efficient and cost-effective. Which of the two viable options sounds like what Nintendo has done and would do?
---
In regards to tessellation;
Tessellation requires a capable GPU and not just bandwidth. You know as well as I do that bandwidth alone does nothing; it's the GPU processors that do the work. Since tessellation is GPU-processor heavy, it really requires a powerful GPU to make tessellation both useful and practical for real-time rendering.
So let's put this into perspective: the Titan Black only has 336GB/s of raw bandwidth compared to your WiiU eDRAM cache's 500GB/s, yet the Titan still obliterates the WiiU GPU in tessellation. Why? Because the Titan Black simply has more processing cores, nearing 3000, to do this resource-heavy task, again showing that processing capability matters more to tessellation than pure bandwidth.
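A quick back-of-the-envelope sketch of that comparison. The Titan Black figures are approximate spec-sheet numbers, and the WiiU figures are the speculated values being argued in this thread, not confirmed specs:

```python
# Rough sketch: how much compute each GPU has available per byte of
# memory traffic. All numbers are approximations or forum speculation.

def flops_per_byte(gflops, bandwidth_gbs):
    """Theoretical arithmetic ops available per byte moved."""
    return gflops / bandwidth_gbs

# Titan Black: ~5100 GFLOP/s peak, 336 GB/s GDDR5
titan_ratio = flops_per_byte(5100, 336)

# Speculated WiiU GPU: ~350 GFLOP/s, 500 GB/s eDRAM (both figures debated)
wiiu_ratio = flops_per_byte(350, 500)

print(f"Titan Black:        {titan_ratio:.1f} FLOPs per byte")  # ~15.2
print(f"WiiU (speculated):  {wiiu_ratio:.1f} FLOPs per byte")   # ~0.7
```

If anything like these numbers hold, the WiiU would have far more bandwidth per unit of compute than it could keep busy on a shader-heavy task like tessellation, which is the point being made above.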
|
Do you know how tessellation works?
|
Well enough. Don't get me wrong, I'm no expert, but then again I'm not claiming to be. So if you feel I misspoke or have info you feel I should know, then hit me up cuz I'm all about learning :)
|
Sorry dude, but what megafenix says is right: the Wii U can use tessellation, the GPU is more than capable of it, just not at 1080p, that's for sure.
here
http://hdwarriors.com/shinen-on-the-practical-use-of-adaptive-tessellation-upcoming-games/
|
Exactly right.
Just to be clear, I did not say that the WiiU couldn't do tessellation, rather that it's limited because of the piss-weak GPU. It's for this reason I said "Since tessellation is GPU processor heavy it really requires a powerful GPU to actually make tessellation both useful and practical for real time rendering." Key words being 'practical' and 'real time'.
The point of tessellation is that you remove the main RAM and bandwidth overhead caused by keeping multiple levels-of-detail meshes in RAM; instead you have one mesh which, through tessellation and displacement mapping, yields the same result if not better. The trade-off is that you free up RAM and bandwidth but take a hit on GPU resources. More often than not on the WiiU, it would be better to use the limited GPU resources for better frame rates, better anti-aliasing, or more objects on screen, rather than spend that limited resource on tessellation where the end result could be worse. There will of course be times where tessellation is used; the degree to which it's used will depend on the game and the developer's goals, but it won't be for the majority of titles, since again it comes down to the practicality of tessellation in real time.
My point was to illustrate to megafenix that bandwidth alone cannot do anything to improve graphics (he asserted that tons of bandwidth gives you tessellation); rather, the GPU processors are the ones that do the heavy lifting while bandwidth keeps them fed. He himself acknowledged this: "bandwidth is not going to gie you more power..." http://gamrconnect.vgchartz.com/post.php?id=6101413
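The RAM-versus-GPU trade-off described above can be sketched with some made-up asset sizes. Every number here (vertex counts, vertex layout, map resolution) is illustrative, not taken from any real game:

```python
# Sketch: memory cost of a pre-built LOD chain vs. one coarse base
# mesh plus a displacement map. All sizes are invented for illustration.

VERTEX_BYTES = 32  # position + normal + UV, a common compact layout

def mesh_bytes(vertex_count):
    return vertex_count * VERTEX_BYTES

# Traditional approach: several pre-built detail levels kept in RAM.
lod_chain = [100_000, 25_000, 6_000, 1_500]  # vertices per LOD level
lod_total = sum(mesh_bytes(v) for v in lod_chain)

# Tessellation approach: one coarse mesh + a displacement texture;
# the GPU generates the extra triangles at draw time (costing shader work).
base_mesh = mesh_bytes(1_500)
displacement_map = 1024 * 1024 * 1  # 1K x 1K, 8-bit height values
tess_total = base_mesh + displacement_map

print(f"LOD chain:   {lod_total / 1e6:.2f} MB")   # 4.24 MB
print(f"Tessellated: {tess_total / 1e6:.2f} MB")  # 1.10 MB
```

The memory savings are real, but the generated triangles have to be paid for every frame in shader work, which is exactly why a GPU with limited processing resources may prefer the LOD chain.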
|
Just to be clear, it's not just Shin'en's comments that GPUs nowadays have less overhead doing tessellation than previous GPUs; we also have a game that was using tessellation with displacements in real time
here
Shadow of the Eternals, Wii U and PC, minute 7:50; listen to what they say about the graphics from there until the end
https://www.youtube.com/watch?v=QlREuZz7MwE
check it out
I really don't see a limited GPU if it can produce those graphics with tessellation + displacements, and that's just the beginning, because it was using the old CryEngine 3, which hasn't been optimized much for Wii U; it will get better over time.
The Wii U GPU should be around 400 to 500 gigaflops. That doesn't sound like much, but it's not just a matter of power; efficiency counts, and tessellators have been improving since the HD2000. Hell, even the HD5000 and HD6000 GPUs have a better tessellator engine than the HD4000, although the architecture of the SIMD cores remains almost the same.
The Wii U GPU is capable of using tessellation, and Nintendo has been interested in this technique since before the launch of the Wii; the proof of that is those patents on displacement mapping and tessellation. Surely they wouldn't miss the chance on Wii U. Sorry, but that's a fact. It may not be as powerful as Xbox One and PS4, but it has enough features to produce good graphics with tessellation + displacements. What I won't argue is whether it's capable of doing it at 1080p, because I doubt it, and even if it could it would be at a low and very unstable framerate; but at 720p and between 30 to 60fps it's perfectly capable.
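For what it's worth, the "400 to 500 gigaflops" figure can be sanity-checked with the standard formula. The Wii U's shader count and clock were never published, so both configurations below are rumors that circulated in forums, not confirmed specs:

```python
# Sketch: theoretical GFLOP/s from shader ALU count and clock speed.
# Both Wii U configurations are unconfirmed forum rumors.

def gflops(shader_alus, clock_mhz, ops_per_alu_per_clock=2):
    # 2 ops/clock assumes one fused multiply-add per ALU per cycle
    return shader_alus * clock_mhz * ops_per_alu_per_clock / 1000

print(gflops(320, 550))  # one rumored config -> 352.0 GFLOP/s
print(gflops(400, 550))  # a more generous rumor -> 440.0 GFLOP/s
```

Depending on which rumor you believe, the GPU lands somewhere between the mid-300s and the mid-400s, which brackets the range claimed above.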
How can you say bandwidth does nothing?
So the PS4 would be able to do the same things with bandwidth like the Xbox One's 56GB/s, but without having the eSRAM?
Seriously, you need to read more about GPUs.
I recommend things like this:
http://www.tomshardware.com/reviews/radeon-hd-4850,1957-5.html
and better this
http://www.realworldtech.com/gpu-memory-bandwidth/
"
In some cases, the GPU with the lower GFLOP/s actually delivers the best performance – which is totally counter-intuitive. One pair of points that perfectly illustrates counter-intuitive behavior is the first two AMD GPUs. The shader arrays provide 432 and 422 GFLOP/s respectively, but the first card only scores 2552 on 3DMark, while the latter scores a significantly higher 3463. One card has ~2% less shader compute, but 36% higher performance. This behavior is hardly isolated to AMD cards either. Three Nvidia GPUs have 192 GFLOP/s throughput in their shader arrays. Two of these cards score 3700 and 3374, while the third is a disappointing 2527. Despite having the same theoretical throughput, one of the cards is 46% faster than another.
What could be responsible for these mysterious and seemingly contradictory results? Looking at the basic architecture of a GPU like AMD’s Cayman , the shader array is just one part of the design. Admittedly it is perhaps the most important, but modern GPUs contain a variety of other hardware including fixed functions like the triangle setup engine, texture caches and sampling units, raster output pipelines (ROPs) and the memory controllers, while also relying on the driver software. Of these different areas, the one that is most critical to performance is the memory controllers and physical interfaces to DRAM. 3D graphics is an incredibly bandwidth hungry workload – to the point that high-end GPUs use bandwidth optimized GDDR5 DRAM rather than the less expensive DDR3 used for system memory. Note that in modern GPUs, each memory interface typically has its own ROPs – so to some extent, memory bandwidth will also take into account some fixed functions as well.
So our initial guess is that when two similar GPUs have substantially different performance, the real cause is the memory interfaces and available bandwidth. This seems eminently reasonable, especially since most CPU performance models also recognize the critical importance of memory in determining the behavior of a workload.
"
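The quoted passage is essentially describing a roofline-style bound: attainable throughput is capped by whichever is lower, raw shader compute or what the memory bus can feed. A minimal sketch with invented numbers (loosely echoing the two similar-GFLOPS AMD cards in the quote):

```python
# Sketch of the roofline idea: performance = min(compute cap, bandwidth cap).
# All figures are hypothetical, chosen only to illustrate the effect.

def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """flops_per_byte is the workload's arithmetic intensity."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Two hypothetical cards with near-identical shader throughput
# but very different memory interfaces, on the same workload:
card_a = attainable_gflops(peak_gflops=432, bandwidth_gbs=60, flops_per_byte=4)
card_b = attainable_gflops(peak_gflops=422, bandwidth_gbs=120, flops_per_byte=4)

print(card_a)  # 240 -> bandwidth-bound despite slightly more shader compute
print(card_b)  # 422 -> compute-bound; the faster bus lets it reach peak
```

This is how a card with ~2% less shader compute can end up much faster in practice: the bottleneck moved from the memory bus to the shaders.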
We already know tessellation has a cost; AMD already said that a 4870 needs to trade off 30% performance, or 33fps, for 400x more polygons. But since the Wii U GPU is custom we don't really know how much the efficiency has been improved; maybe the trade-off is 20% now (there are parts in the GPU that have not been identified, and even those which have been identified leave room for doubt, because they may be similar but not exactly the same). Who knows, but one thing is for sure: Nintendo was interested in this technique long ago, and surely they wouldn't put in that much eDRAM bandwidth for nothing. Even a 1080p framebuffer only takes up 16MB out of the 32MB + the other 3MB, so surely you can fit textures, vertex texture fetch data, the Z-buffer and other stuff; and using 720p would only take up 7MB, which leaves a lot of eDRAM memory and bandwidth for other stuff and alleviates the power required to render by a lot.
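Those 16MB and 7MB figures check out if you assume a 32-bit color buffer plus a 32-bit depth/stencil buffer with no MSAA, which seems to be the assumption behind them:

```python
# Checking the framebuffer arithmetic: 32-bit color + 32-bit Z/stencil,
# no MSAA. These assumptions are inferred from the 16MB / 7MB figures,
# not from any published Wii U documentation.

def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=2):
    # buffers=2 -> one color target + one depth/stencil target
    return width * height * bytes_per_pixel * buffers / (1024 * 1024)

print(f"1080p: {framebuffer_mb(1920, 1080):.1f} MB")  # ~15.8 MB
print(f" 720p: {framebuffer_mb(1280, 720):.1f} MB")   # ~7.0 MB
```

So at 720p roughly three quarters of the 32MB eDRAM would indeed be left over for other render targets and cached data, under those assumptions.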