NVIDIA has released more details about its Neural Texture Compression (NTC) technology, which reduces GPU VRAM usage by up to seven times. In a technology demo presented during one of the GTC 2026 sessions, NVIDIA revealed that its Neural Texture Compression can reduce VRAM usage from ...
This is actually a good idea.
Latent encoding/decoding is basically extreme, “good enough” image compression. It’s a pretty old ML technique, and the models that do it can be small (and therefore fast-ish). A cross-platform implementation is doable.
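To make the idea concrete, here’s a toy sketch of latent encoding/decoding using a linear (PCA-style) encoder over fake “texture patches” — nothing like NTC’s actual network, just the general shape of the technique: project data into a small latent space, store the latents, decode back lossily.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "texture" data: 256 patches of 64 values each, generated from just
# 4 underlying factors plus a little noise (so it compresses well).
factors = rng.random((256, 4)).astype(np.float32)
components = rng.random((4, 64)).astype(np.float32)
patches = factors @ components + 0.01 * rng.random((256, 64)).astype(np.float32)

# "Train" a linear encoder/decoder via SVD, keeping 8 latent dims out of 64.
mean = patches.mean(axis=0)
_, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
basis = vt[:8]  # 8 x 64: rows span the latent space

latents = (patches - mean) @ basis.T  # encode: 64 floats -> 8 floats (8x smaller)
recon = latents @ basis + mean        # decode: lossy but "good enough"

err = np.abs(recon - patches).mean()  # small, since the data is ~4-dimensional
```

A real neural codec swaps the linear projection for a small learned network, but the storage win comes from the same place: the latents are much smaller than the raw data, and you accept lossy reconstruction.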
If I were a game dev, I wouldn’t overuse it, but it seems like a great way to store huge, somewhat less frequently-used textures in VRAM.
…But one thing I’m wondering is what they decode to? If it’s a huge bitmap (instead of a compressed texture format), that seems suboptimal. And I don’t think it’s decoded per pixel like BCx compression, unless I’m misremembering how it works. That granular access is kind of the point of texture compression.
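For context on why that granular access matters: BCx formats are fixed-rate block codecs, so the GPU can fetch and decode exactly the 4x4 block a sample needs, with no dependency on the rest of the texture. A sketch of that property for BC1 (the mode selection and interpolation weights below follow the standard BC1 layout as I understand it):

```python
import struct

def decode_bc1_texel(block: bytes, x: int, y: int):
    """Decode one texel (x, y in 0..3) from a single 64-bit BC1 block.

    The point: random access. Decoding any texel touches only these
    8 bytes, never the rest of the texture.
    """
    c0, c1, indices = struct.unpack("<HHI", block)

    def rgb565_to_rgb888(c):
        r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
        # Expand 5/6-bit channels to 8 bits by replicating high bits.
        return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

    p0, p1 = rgb565_to_rgb888(c0), rgb565_to_rgb888(c1)
    if c0 > c1:
        # 4-color mode: endpoints plus two interpolated colors.
        palette = [p0, p1,
                   tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:
        # 3-color mode: midpoint plus transparent black.
        palette = [p0, p1,
                   tuple((a + b) // 2 for a, b in zip(p0, p1)),
                   (0, 0, 0)]

    idx = (indices >> (2 * (4 * y + x))) & 0x3  # 2-bit palette index per texel
    return palette[idx]
```

If a neural codec has to decode into a full uncompressed bitmap before sampling, you lose exactly this property, which is presumably why the decode target matters so much.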
That’s one thing that’s been bothering me a lot about this AI worship era that I don’t see mentioned enough anywhere. Machine learning is a fucking incredible tool that can surely be used to do a lot of things in a novel or better way. Instead, all of the investment and eyeballs are on overplayed party tricks.
That’s what OpenAI wants.
They want “Our AI vs no AI.” They want to stoke the simplistic ML hate because the thing they fear most is it being viewed for what it is: dirt cheap, dumb, albeit useful tools. Like a set of specialized hammers.
It’s what I keep trying to tell the “Fuck AI” crowd, but no one wants to hear it, and they’re playing right into what these stupid Tech Bros want.