fatslob-:O said:
Pemalite said:

There is Anisotropic filtering... And then there is Anisotropic filtering.

2GB? Raw, maybe, but that's not going to be its actual size when it comes to rendering time.

As for GDDR5, economies of scale are what will bring the cost down. It happens constantly in the volatile DRAM market: when there is an abundance of DRAM chips (costs of production be damned!), prices drop. That's why DDR3 got to such crazy low prices.

I'm sure devs will be able to pull off 4x anisotropic filtering in their games EASILY on the PS4, so long as they account for it. With 72 TMUs, that should come for free. 

Even with DXT5, the texture is still over 500MB!

Economies of scale only describes the production cost advantage of higher volumes. Like it or not, businesses will try to avoid selling at a loss, otherwise they won't be around for much longer. *cough* AMD *cough* 

The market will correct itself one way or another by closing down manufacturers, because what you're describing simply isn't sustainable ... 


I hope 4x Anisotropic isn't all the PS4 pushes out, 8x should be the minimum, 16x should be the goal.
However, even at 16x you still have other techniques available to improve filtering quality. Just look at anisotropic filtering quality between nVidia and AMD: despite both offering 16x options, they still deliver differing levels of quality.

 

DXT5 assumes a 4:1 compression ratio, but there are methods for higher levels of compression. You could, for instance, set up a palettized texture with 128-bit palette entries and 4 bits per pixel in the index buffer, which would give you an impressive 32:1 compression ratio against the 128-bit source. Ironically, that's possible even on the Xbox 360.
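A quick sketch of the arithmetic behind those ratios. The 16k (16384x16384) texture size is my assumption for illustration; the bit depths follow the formats named above (32-bit RGBA, DXT5 at 8 bits per texel, a 128-bit source with 4-bit palette indices):

```python
def texture_bytes(width, height, bits_per_pixel):
    """Uncompressed storage for a width x height texture at a given bit depth."""
    return width * height * bits_per_pixel // 8

SIZE = 16384  # a hypothetical "16k" texture

raw_rgba8 = texture_bytes(SIZE, SIZE, 32)   # 8 bits per channel, RGBA
dxt5      = texture_bytes(SIZE, SIZE, 8)    # DXT5: 128-bit block per 4x4 texels = 8 bpp
raw_128   = texture_bytes(SIZE, SIZE, 128)  # 128-bit-per-texel source texture
palette4  = texture_bytes(SIZE, SIZE, 4)    # 4-bit palette indices (palette itself is tiny)

print(f"RGBA8 raw  : {raw_rgba8 / 2**20:.0f} MiB")
print(f"DXT5       : {dxt5 / 2**20:.0f} MiB  ({raw_rgba8 // dxt5}:1 vs RGBA8)")
print(f"128-bit raw: {raw_128 / 2**20:.0f} MiB")
print(f"4-bit index: {palette4 / 2**20:.0f} MiB ({raw_128 // palette4}:1 vs 128-bit)")
```

The ratios fall straight out of the bits per pixel: DXT5's fixed 128-bit 4x4 block is 8 bpp, a 4:1 saving over 32-bit RGBA, while 4-bit indices into a 128-bit palette give 128/4 = 32:1.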

Plus, super high-quality textures aren't always needed, and aren't always used anyway. When someone's camera gets close enough to a particular surface, the low-quality texture is swapped for a higher-quality one; this is most evident in Unreal-powered games, and it conserves memory. (You get the side effect of pop-in, unfortunately, but done well it can be a non-issue.)
Plus, some textures are used multiple times in a single scene, so you don't need a hundred 16k textures loaded into memory for every rock on the ground, just the one rock texture, which can then be modified with shaders to look different. - Instancing is another method to differentiate copies of the same object, like grass, rocks, wood panels on a building, etc.
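To put a number on the memory saved by reuse: here's a rough sketch with hypothetical figures (100 rocks, each needing a 16k DXT5 texture at roughly 256 MiB), comparing a unique texture per rock against one shared texture varied via shaders/instancing:

```python
DXT5_16K_MIB = 256  # a 16k x 16k DXT5 texture at 8 bits per texel, in MiB (assumed)
ROCKS = 100         # hypothetical rock count for one scene

unique = ROCKS * DXT5_16K_MIB  # one unique texture per rock
shared = DXT5_16K_MIB          # one texture reused, differentiated per-instance

print(f"{ROCKS} unique textures: {unique / 1024:.0f} GiB")
print(f"One shared texture : {shared} MiB")
```

Reuse collapses tens of gigabytes of notional texture data into a single resident texture; that's the whole point of the shader-variation and instancing tricks mentioned above.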

Besides, 3Dc+ should be the new baseline this generation, as all consoles and PC hardware support it, which also means more content is generally compressed.
On the flip side, we have finally moved past sub-1GB memory systems; we've got to use that new memory baseline for something, and bigger textures are the obvious first step.

 

Of course it's not sustainable, yet it has happened constantly throughout the DRAM market's history. Heck (showing my age now), I remember it happening to EDO RAM.

DRAM manufacturers attempt to predict demand in advance, and this doesn't always align with where the market is or where it's heading. For example, manufacturers boosted DDR3 production prior to Windows 8's launch in anticipation that its memory requirements would increase. That didn't happen, and DRAM manufacturers the world over cried. (Windows 8's lackluster demand didn't help things either.)

Sometimes a new DRAM standard launches and companies switch their factories over to it to capitalise on the initial high prices. Because of that, more DRAM is produced than the market actually needs, flooding the marketplace and causing prices to drop.

In both cases, companies end up with massive piles of DRAM sitting around doing nothing, which means no revenue, and thus prices drop. This is why 8GB of DDR3 got as low as $30-$40 at one point, then quadrupled in price once manufacturing capacity decreased. (It's sitting at about $100 AUD now that the market has settled.)

GDDR5 production will continue to increase as the PlayStation 4 keeps steaming ahead and as nVidia and AMD eventually release more high-volume (aka lower-end) graphics processors with GDDR5 memory. Then the market will eventually "pop" once nVidia and AMD shift to GDDR6 or some other memory standard and the low end shifts to cheaper DDR4. (Once economies of scale cheapen DDR4, that is.)

Again, this has happened constantly throughout history. DRAM is a volatile market.

maxima64 said:


Wasn't that his point all along?

The fewer passes the better (as long, of course, as the end result is almost the same)?

Even if it's true that there are still rendering passes for nearby objects, would you still prefer to waste rendering passes on both the nearby objects and the distant ones that can barely be seen?


Frametime is the main thing you need to look at, not the number of passes.

Some hardware is better equipped to handle lots of small passes rather than one large one.
Thus, one large pass may take 14ms, whereas six smaller passes can take 2ms each for 12ms total. Both fit within the ~16.7ms frame budget required for 60fps, which gives you wiggle room to add more.
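The budget arithmetic above, spelled out (the 14ms and 2ms figures are the post's illustrative numbers, not measurements):

```python
FRAME_BUDGET_MS = 1000 / 60  # ~16.67 ms per frame at 60 fps

one_large_pass  = [14.0]     # ms: a single heavyweight pass
six_small_passes = [2.0] * 6 # ms each: many lightweight passes

for passes in (one_large_pass, six_small_passes):
    total = sum(passes)
    print(f"{len(passes)} pass(es): {total:.1f} ms total, "
          f"{FRAME_BUDGET_MS - total:.2f} ms headroom")
```

Either approach fits the 60fps budget here; what matters is the summed frametime against the ~16.67ms ceiling, not the pass count itself.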

Of course, some other hardware handles fewer passes better, and some game engines simply require lots of passes out of necessity.

Again, there are more shades of grey here than the black-and-white brush being used can paint.



--::{PC Gaming Master Race}::--