Pemalite said:

We already have texture compression ratios of 36:1, which beats the Neural texture compression.

The thing with compression is that... The more advanced the technique, the more processing power needed to decompress and "piece back" the dataset.

What will truly be game changing is Neural Texture Generation... You simply won't have texture files anymore, you will just have a description of what that texture pattern is... And the neural processing takes care of the rest, procedurally generating it on demand.
We already have procedural texture generation, but it's still fairly rudimentary and limited.
https://en.wikipedia.org/wiki/Procedural_texture
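For anyone unfamiliar with the term, here is a toy sketch of what "procedural texture generation" means in the rudimentary sense: a pattern is computed on demand from a handful of parameters instead of being stored as texels. The function and parameter names are made up purely for illustration and have nothing to do with a neural generator.

import math

def checker_stripes(u, v, scale=8.0, warp=0.15):
    # Evaluate a tiny procedural texture at UV coordinates (u, v) in [0, 1):
    # the whole pattern is defined by a few parameters instead of stored texels.
    s = u * scale + warp * math.sin(v * scale * math.pi)
    t = v * scale
    return 1.0 if (math.floor(s) + math.floor(t)) % 2 == 0 else 0.2  # grayscale value

# Generate a 64 x 64 grayscale image on demand, no texture file involved.
size = 64
image = [[checker_stripes(x / size, y / size) for x in range(size)] for y in range(size)]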

The big advantage of neural compression is how well it retains quality. An example in the paper compares NTC against a typical block compression method (BC7) when memory-matching (holding the memory budget roughly constant): NTC retains much of the 4096 x 4096 uncompressed image (versus 1024 x 1024 for BC7) while using about 70% of the space of the BC method. The compression ratio here is 256 MB : 3.8 MB, roughly 67:1. In other words, NTC is storing 16 times the texels of BC7 while using less space.
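To spell out the arithmetic behind those numbers (a quick sketch in Python, using only the figures quoted in this post):

# Texel-count comparison at the memory-matched setting quoted above.
ntc_res, bc_res = 4096, 1024
texel_ratio = (ntc_res / bc_res) ** 2          # (4096/1024)^2 = 16x more texels
# "Two additional levels of detail": each mip level doubles resolution per axis,
# so two extra levels = 4x per axis = 16x texels, matching the ratio above.

uncompressed_mb, ntc_mb = 256, 3.8
compression_ratio = uncompressed_mb / ntc_mb   # ~67:1
print(int(texel_ratio), round(compression_ratio, 1))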

From the paper

Using this approach we enable low-bitrate compression, unlocking two additional levels of detail (or 16× more texels) with similar storage requirements as commonly used texture compression techniques. In practical terms, this allows a viewer to get very close to an object before losing significant texture detail.

Our main contributions are:

• A novel approach to texture compression that exploits redundancies spatially, across mipmap levels, and across different material channels. By optimizing for reduced distortion at a low bitrate, we can compress two more levels of detail in the same storage as block-compressed textures. The resulting texture quality at such aggressively low bitrates is better than or comparable to recent image compression standards like AVIF and JPEG XL, which are not designed for real-time decompression with random access.


• A novel low-cost decoder architecture that is optimized specifically for each material. This architecture enables real-time performance for random access and can be integrated into material shader functions, such as filtering, to facilitate on-demand decompression.


• A highly optimized implementation of our compressor, with fused backpropagation, enabling practical per-material optimization with resolutions up to 8192 × 8192 (8k). Our compressor can process a 9-channel, 4k material texture set in 1-15 minutes on an NVIDIA RTX 4090 GPU, depending on the desired quality level.

Our method can replace GPU texture compression techniques, such as BC [45] and ASTC [55].

It is a common industry practice to use different BC variants for different material texture types [16], but there is no single standard. As such, we propose two compression profiles for the evaluation of BC, namely “BC medium” and “BC high.” The BC medium profile uses BC1 for diffuse and other packed multi-channel textures, BC7 for normals, and BC4 for any remaining single-channel textures. The BC high profile, on the other hand, uses BC7 for three-channel textures and BC4 for one-channel textures. Our method is not directly comparable with compression formats using entropy encoding, as NTC is designed to support real-time random access.
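To make those two BC profiles concrete, here is a rough sketch of the per-texture-type format mapping described above (the dictionary layout and type names are just illustrative; the format assignments come straight from the quoted text):

# Block-compression format per material texture type, per the paper's two evaluation profiles.
BC_PROFILES = {
    "medium": {
        "diffuse_or_packed_multichannel": "BC1",
        "normal": "BC7",
        "other_single_channel": "BC4",   # e.g. roughness, metalness, AO
    },
    "high": {
        "three_channel": "BC7",
        "one_channel": "BC4",
    },
}

def pick_bc_format(profile, texture_type):
    # Look up the block-compression format for a given texture type under a profile.
    return BC_PROFILES[profile][texture_type]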

Of course the trade-off, and likely why there isn't a cleanly packaged product implementation yet, is the inference compute cost compared to BC methods. Nvidia has been steadily improving this over the years, though, getting the frame-time penalty lower with each iteration.
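As a rough illustration of where that inference cost comes from (a toy sketch, not the paper's actual decoder): instead of a fixed-function block decode, each texel fetch evaluates a small per-material network on features pulled from the compressed representation, so every sample costs a few matrix multiplies. All sizes below are made up.

import numpy as np

# Toy stand-in for a neural texture decode: per-texel features -> small MLP -> channels.
rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((16, 32)), np.zeros(32)   # per-material weights
w2, b2 = rng.standard_normal((32, 9)), np.zeros(9)     # 9 output material channels

def decode_texel(latent_features):
    # One "texture fetch": a couple of matrix multiplies instead of a BC block decode.
    h = np.maximum(latent_features @ w1 + b1, 0.0)      # ReLU hidden layer
    return h @ w2 + b2                                   # decoded material channels

texel = decode_texel(rng.standard_normal(16))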

I agree, though: full neural rendering with mostly generated frames is the game changer. Advancements like this are stepping stones to that, more or less.
