
Forums - Nintendo Discussion - FAST Racing NEO powered by 2nd generation engine for Wii U supports and uses 4k-8k textures

fatslob-:O said:
curl-6 said:
fatslob-:O said:
The real question here is: is the game going to show all 4K-8K textures at once? I presume they won't, because the Wii U is severely lacking in memory bandwidth, memory size, and TMUs to even attempt showing every detail of those textures. The engine they have is probably modified to support John Carmack's MegaTexture technology, to dynamically stream textures at different resolutions.

Shin'en have said on Twitter that memory size was not a problem, as they could compress a 4K texture down to 10MB.

As for memory bandwidth, they previously said:

"Theoretical RAM bandwidth in a system doesn’t tell you too much because GPU caching will hide a lot of this latency."

Just how many 4K-8K textures are they going to use, then? Once you get a lot of objects on screen, the complexity increases.

As for Shin'en's overstatement about memory bandwidth, there's only so much you can do with 32MB. The reason bandwidth was important in the first place was to feed the GPU; otherwise functional units start to get underutilized. TMUs, shaders, ROPs, and everything else in the GPU are extremely dependent on it. The reason the X1 wasn't able to achieve 1080p in some of its multiplatform titles or exclusives is that main memory bandwidth is a bottleneck (aside from the lowered number of ROPs, of course). How else does the GPU get fed with all the rest of its data? You cannot keep constantly relying on the eDRAM to feed the GPU, much like how the X1 relies on its eSRAM! It eventually has to access the main memory, and only the main memory, because that is where most of the data is resident. The purpose of caching is to SAVE BANDWIDTH by storing frequently accessed data. It was not meant to COMPLETELY FEED THE GPU.

Now don't get me wrong! I'm not saying the Wii U isn't capable of handling 4K-8K textures, but it shouldn't be able to handle them on a regular basis, given that it likely lacks TMUs and everything else I stated before. Don't fret about this issue; there are other ways of solving it, like I said before. The MegaTexture technology introduced by John Carmack in RAGE resolves a lot of the issues around the Wii U's lack of bandwidth and TMUs by streaming, at the highest resolution, only the textures required for certain assets of a scene. That conserves a lot of texture fillrate and bandwidth: the level of detail gets scaled back when objects are far away, and scaled up so that objects near the camera get the highest level of detail.
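To make that concrete, here is a toy sketch of the virtual-texturing idea behind MegaTexture (the page size, texture size, and mip heuristic are illustrative assumptions, not anything Shin'en or id have specified for this game):

```python
import math

PAGE = 128  # texels per page edge; a common choice in virtual texturing

def mip_for_distance(distance, base_distance=1.0):
    """Each doubling of distance drops one mip level (toy heuristic)."""
    return max(0, int(math.log2(max(distance / base_distance, 1.0))))

def pages_touched(u0, v0, u1, v1, mip, tex_size=8192):
    """Page coordinates of the chosen mip covered by a UV rectangle."""
    size = max(tex_size >> mip, PAGE)
    x0, x1 = int(u0 * size) // PAGE, int(u1 * size) // PAGE
    y0, y1 = int(v0 * size) // PAGE, int(v1 * size) // PAGE
    return [(mip, x, y) for y in range(y0, y1 + 1)
                        for x in range(x0, x1 + 1)]

# A distant ship touches a single coarse 128x128 page; only that page is
# streamed in, so the full 8K texture never has to be resident at once.
print(pages_touched(0.2, 0.2, 0.4, 0.4, mip_for_distance(32.0)))
```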

The car/spaceship models will probably always be 4K-8K (maybe 8K in small numbers and 4K in larger numbers), and other significant stuff will be in those resolutions, while the rest will be 1080p, as far as I can tell.





A Renesas eDRAM composed of 32 macros, each 1 megabyte and 256 bits wide, should do it.
That's 563GB/s, with the 32 megabytes of eDRAM totalling 8192 bits of width.

Renesas already said that the Wii U packs the best of their technologies, to the point that making it elsewhere would be difficult.

Shin'en said that the Wii U eDRAM has lots of bandwidth, and that it's even scary.

The Xbox 360 had a 4096-bit eDRAM with 256GB/s that could be fully accessed by its 8 ROPs. The only drawback was that the work done by the ROPs later had to be passed to the GPU via an external 32GB/s bus, but that doesn't make the actual bandwidth of the eDRAM 32GB/s when the ROPs sit inside the eDRAM and have full access to the internal bandwidth; we could consider the external bus as something like latency between the ROPs and the GPU's framebuffer.

Renesas offers 1024 bits, 4096 bits, and 8192 bits for a 32-megabyte eDRAM.
1024 bits? Yeah, right, as if that were the latest technology from Renesas and would provide lots of bandwidth; even the GameCube packed 512-bit-wide embedded memory, and that was more than a decade ago.
4096 bits? Not bad, but it's 7 years old, not the latest technology, and it doesn't provide a bandwidth that could be considered huge or scary, as Shin'en put it.

8192 bits? Sounds about right; that gives 563GB/s of bandwidth.
Still, I would have liked a terabyte of bandwidth, like Sony was aiming for with its next console, but 563GB/s should do as long as developers take advantage of it.
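For reference, the arithmetic behind those figures is just bus width times clock. A quick sketch (assuming the eDRAM runs at the commonly cited 550 MHz GPU clock; Nintendo never confirmed the clocks):

```python
# Peak bandwidth = (bus width in bits / 8) bytes per cycle * clock rate.
def peak_bw_gb_s(bus_bits, clock_mhz):
    return bus_bits / 8 * clock_mhz / 1000  # bytes/cycle * MHz -> GB/s

print(peak_bw_gb_s(8192, 550))  # 563.2 -- the 8192-bit claim above
print(peak_bw_gb_s(4096, 500))  # 256.0 -- Xbox 360's daughter-die eDRAM
print(peak_bw_gb_s(1024, 550))  #  70.4 -- the conservative 1024-bit case
```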




megafenix said:
fatslob-:O said:
Well, God be damned! I made a lot of people really mad here while exposing frauds, all at the same time. LMAO.


Talk about frauds, like "the eDRAM is just cache" when Shin'en already said it's more than that.

Want me to prove it?

Even by today's standards, GPUs pack only kilobytes for caching, not megabytes.

Do you want proof of this?

Answer quickly, or I will be forced to show everyone here proof of your bullshit, and backup for Shin'en as well, directly from ATI.

Will you apologize or continue barking?

Decide already; I have wasted many posts asking you this and you just keep avoiding it.

Very well, it seems I will have to lift my hand.

You have 5 minutes.

Apologize and recognize your mistake about caching, or I make my checkmate move and expose you even more.

It seems you like to be beaten up time and again.

Well, time is up. Sorry dude, you did this to yourself.

I forgot, you actually don't like to investigate. Don't worry, here:

So, are 32 megabytes really that tiny compared to the internal caches of the GPUs for TMUs and SIMD cores?

32 megabytes vs 8KB, 16KB or 64KB?

Who wins?

If the L1 cache, texture caches and the global data share are only some kilobytes of caching, shouldn't the 32 megabytes of eDRAM be considered something like RAM for the GPU?

Don't forget that there are an additional 3 megabytes of faster eDRAM+SRAM, so in reality we are talking about 35 megabytes of embedded memory.

And this is how the great expert in PC tech stuff falls.

Or maybe this is how a big troll falls.

@Bold Damn, your understanding of cache is extremely limited LOL. Must suck not to know anything on this subject.

http://www.merriam-webster.com/dictionary/cache

This post tells me you have a severe lack of understanding of memory hierarchies. The reason the 32MB of eDRAM is considered cache and not main memory is that it has a lower access time than the main memory, as evidenced by it being integrated on the GPU. The Intel Iris Pro 5200 treats its own 128MB of eDRAM as an L4 cache because it's not part of the main memory; it is instead used to buffer extra pieces of data that the L3 cache requested.
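To put sizes and speeds side by side, here are generic ballpark figures for a memory hierarchy of that hardware era (a sketch with illustrative numbers, not measured Wii U specs):

```python
# What makes a pool of memory a "cache" is its rung on this ladder,
# not whether its capacity is measured in KB or MB.
hierarchy = [
    ("L1 texture cache", "8-32 KB",    "a few ns"),
    ("L2 cache",         "256-512 KB", "~10 ns"),
    ("on-die eDRAM",     "32 MB",      "tens of ns"),
    ("main DDR3 RAM",    "1-2 GB",     "~100 ns"),
]
for level, size, latency in hierarchy:
    print(f"{level:<18} {size:<12} {latency}")
```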

You're the one that's supposed to owe me an apology for your atrocious behaviour.



fatslob-:O said:
*snip*

@Bold Damn, your understanding of cache is extremely limited LOL. Must suck not to know anything on this subject.

http://www.merriam-webster.com/dictionary/cache

This post tells me you have a severe lack of understanding of memory hierarchies. The reason the 32MB of eDRAM is considered cache and not main memory is that it has a lower access time than the main memory, as evidenced by it being integrated on the GPU. The Intel Iris Pro 5200 treats its own 128MB of eDRAM as an L4 cache because it's not part of the main memory; it is instead used to buffer extra pieces of data that the L3 cache requested.

You're the one that's supposed to owe me an apology for your atrocious behaviour.

This tells me you don't get the point.

I am basically saying that, since GPUs have internal caches of just a few kilobytes,

shouldn't the 32 megabytes of eDRAM be considered something like a RAM for the GPU?

I'm not saying it's real RAM; it's just an analogy, dude.

Seriously, your lack of understanding is awful.

And now I have already proven to you that cache on a GPU is measured in kilobytes, not megabytes.

Just look at how much you get for the texture units, SPUs and other stuff:

only 8 kilobytes of texture cache,

only 64 kilobytes of global data share,

about 8 to 16 kilobytes of local data share.

Can you compare that to 32 megabytes?

No.

Sorry dude, no matter what you say, everyone here has seen your previous statements.

And again you forget this:

http://hdwarriors.com/why-the-wii-u-is-probably-more-capable-than-you-think-it-is/

"

The easiest way that I can explain this is that when you take each unit of time that the Wii U eDRAM can do work with separate tasks as compared to the 1 Gigabyte of slower RAM, the amount of actual Megabytes of RAM that exist during the same time frame is superior with the eDRAM, regardless of the fact that the size and number applied makes the 1 Gigabyte of DDR3 RAM seem larger. These are units of both time and space. Fast eDRAM that can be used at a speed more useful to the CPU and GPU has certain advantages that, when exploited, give the console great gains in performance.

The eDRAM of the Wii U is embedded right onto the chip logic, which for most intents and purposes negates the classic In/Out bottleneck that developers have faced in the past as well. Reading and writing happen directly in regard to all of the chips on the Multi Chip Module, as instructed.

"

Read, dude, read.

Your lack of understanding is awful.



megafenix said:
*snip*


BTW, if you want a reason why Pemalite thinks Nintendo didn't use anything above a 1024-bit bus width: he has stated before that increasing the bus width also means increasing cache controller complexity, and I doubt Nintendo spends a shit ton of R&D on hardware.

Once again you still don't understand why the eDRAM is of less importance, eh? Tell me why the X1 has issues hitting 1080p in every game?



fatslob-:O said:
*snip*


BTW, if you want a reason why Pemalite thinks Nintendo didn't use anything above a 1024-bit bus width: he has stated before that increasing the bus width also means increasing cache controller complexity, and I doubt Nintendo spends a shit ton of R&D on hardware.

Once again you still don't understand why the eDRAM is of less importance, eh? Tell me why the X1 has issues hitting 1080p in every game?

 

 

Pemalite vs Renesas.

Pemalite vs Shin'en.

Sorry, I can't believe him. Besides, the GameCube had 512 bits of embedded memory.

A decade later and only 1024 bits?

Yeah, right.

The Xbox 360 was 4096 bits; anything below that is bullshit.

8192 bits is the right answer. Didn't Shin'en already school you and your friend?

Renesas' latest technology, to the point that making it elsewhere is difficult.

Shin'en saying that the Wii U eDRAM bandwidth is huge and scary.

Nope, I don't care who that guy is when he talks bullshit.

He can be intelligent and all you want, but clearly he is not using that intelligence for the sake of the truth.

With your comments and his comments, this is what I see in you.

Too bad I am this guy.

Why does the Xbox have issues?

Hmm, guess you should ask your daddy Sony about that.

Didn't I mention that earlier?

Wii U ports come from the 360, so the original code only accounts for 1 megabyte of cache, only 10 megabytes of eDRAM, and no DSP for sound.

Everything that could have been fitted in that additional cache and eDRAM goes to main RAM instead, and one of the Espresso cores is wasted on sound instead of using the DSP and putting the extra core to work on something else.

Developers have to rework the source code and reallocate resources to make a good port, but obviously most of them don't bother and force the system to squeeze the additional RAM instead.

eDRAM and eSRAM are trickier than RAM; that's why many developers try to avoid them. Sony has a quote on this.



megafenix said:

*snip*

Again, get this through your thick skull: there exist CACHE HIERARCHIES. Your excuse for why caches should be in kilobytes and not megabytes is completely stupid.

Whether the cache is megabytes or kilobytes does not matter, because what defines a cache is its ACCESS TIME. Again, fraud, why does the Intel Iris Pro treat its eDRAM as an L4 cache?

It's not RAM because it has much faster access times than the main memory.
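That access-time distinction is even measurable from software. A rough pointer-chase sketch (a generic technique; the sizes below are arbitrary, and CPython overhead inflates the absolute numbers, but latency per access visibly climbs once the working set outgrows a cache level):

```python
import random
import time

def ns_per_access(n_bytes, iters=1_000_000):
    n = max(n_bytes // 64, 2)  # one list slot stands in for one ~64B line
    order = list(range(n))
    random.shuffle(order)      # a random cycle defeats hardware prefetching
    nxt = [0] * n
    for i in range(n):
        nxt[order[i]] = order[(i + 1) % n]
    i = 0
    t0 = time.perf_counter()
    for _ in range(iters):
        i = nxt[i]             # each load depends on the previous one
    return (time.perf_counter() - t0) / iters * 1e9

for size in (8 * 1024, 256 * 1024, 32 * 1024**2, 256 * 1024**2):
    print(f"{size >> 10:>8} KB working set: {ns_per_access(size):6.1f} ns/access")
```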



fatslob-:O said:
*snip*

Again, get this through your thick skull: there exist CACHE HIERARCHIES. Your excuse for why caches should be in kilobytes and not megabytes is completely stupid.

Whether the cache is megabytes or kilobytes does not matter, because what defines a cache is its ACCESS TIME. Again, fraud, why does the Intel Iris Pro treat its eDRAM as an L4 cache?

It's not RAM because it has much faster access times than the main memory.


Here we go again:

"

The easiest way that I can explain this is that when you take each unit of time that the Wii U eDRAM can do work with separate tasks as compared to the 1 Gigabyte of slower RAM, the amount of actual Megabytes of RAM that exist during the same time frame is superior with the eDRAM, regardless of the fact that the size and number applied makes the 1 Gigabyte of DDR3 RAM seem larger. These are units of both time and space. Fast eDRAM that can be used at a speed more useful to the CPU and GPU have certain advantages, that when exploited, give the console great gains in performance.

The eDRAM of the Wii U is embedded right onto the chip logic, which for most intent and purposes negates the classic In/Out bottleneck that developers have faced in the past as well. Reading and writing directly in regard to all of the chips on the Multi Chip Module as instructed.

"

 

And why are we talking about Haswell anyway?

I was talking about PC GPUs, nothing else.



megafenix said:
*snip*

Damn, you're mad alright LOL. I don't care what Shin'en says. Why don't you fight for yourself instead of following your village leader? And to think an apologist like Wyrdness calls me a peasant when he's a corporate cheerleader for Nintendo LMAO.

You're dodging my legitimate question like someone who doesn't know jack shit. That credibility is going away pretty fast on your part LMAO.

Looks like you'll have a hard time accepting that reality. Now deal with it! Bwahahaha.



megafenix said:


*snip*

Do you even read what you post?

This crappy post has absolutely nothing to do with what I stated.