NVIDIA has released more details about its Neural Texture Compression (NTC) technology, which reduces GPU VRAM usage by as much as sevenfold. In a technology demo presented during one of the GTC 2026 sessions, NVIDIA revealed that its Neural Texture Compression can reduce VRAM usage from ...
You’ve gotta look at the magnitude of the difference here. To use a toy example:
Let’s use a memory data rate of 1 GB/s for simple math.
Moving 8 GB of uncompressed textures would take about 8 seconds.
The compressed data, at under 1 GB, is done in less than 1 second.
As long as the decompression step doesn’t take another 7 seconds to complete, the compressed path is going to be more performant (quick sketch below).
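To make the arithmetic concrete, here’s a minimal Python sketch of the same toy numbers (the function name and the ~1 GB compressed size are just illustrative placeholders from the example above, not measurements):

```python
# Toy break-even arithmetic for moving textures over a 1 GB/s link.
# All numbers come from the toy example above; nothing here is measured.

def transfer_time(size_gb: float, rate_gb_per_s: float) -> float:
    """Seconds to move size_gb gigabytes at rate_gb_per_s."""
    return size_gb / rate_gb_per_s

rate = 1.0                                  # toy rate: 1 GB/s for easy math
uncompressed = transfer_time(8.0, rate)     # 8 GB raw textures -> 8.0 s
compressed = transfer_time(1.0, rate)       # ~1 GB compressed  -> 1.0 s

# The compressed path wins as long as decoding fits inside the time saved:
decode_budget = uncompressed - compressed   # -> 7.0 s to spare
print(f"uncompressed transfer: {uncompressed:.1f} s")
print(f"compressed transfer:   {compressed:.1f} s")
print(f"decode-time budget:    {decode_budget:.1f} s")
```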
Data rates are way higher than that: PCIe Gen 4 x16 moves roughly 32 GB/s, about 32 times higher.
I know; I just picked easy numbers for the sake of discussion. The actual data rate isn’t important to this particular point.
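For what it’s worth, the same arithmetic with a more realistic figure plugged in (using ~32 GB/s as a rough PCIe 4.0 x16 peak; effective throughput in practice is lower):

```python
# Same toy arithmetic at ~32 GB/s (rough PCIe 4.0 x16 peak, illustrative only).
rate = 32.0  # GB/s
print(f"uncompressed: {8.0 / rate * 1000:.0f} ms")  # ~250 ms for 8 GB
print(f"compressed:   {1.0 / rate * 1000:.0f} ms")  # ~31 ms for ~1 GB
# Decompression can take up to ~219 ms on top and still break even.
```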
Thanks for the explanation!
Would you happen to have any information about quality loss from the compression, or is that kind of thing negligible given how much time there is to unpack it?
That would just be my only (uninformed) concern. I already fear we’re going too deep into an era of ‘fake’ things: fake frames, fake 4K, fake lighting through strobing to reduce blur on moving objects (that monitor test was sick, but I also fear eye exhaustion will become a thing). My sibling has a card that supports the new frame generation, and that doesn’t look as bad, but to me it’s still not visually equal to the same framerate rendered natively in terms of clarity.
No idea on the loss side of things tbh, though given it’s AI-based, I’m assuming it can’t be truly lossless.
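For what it’s worth, here’s a toy sketch of what “lossy” means in practice: the decoder reconstructs an approximation of the original texels, and you quantify the damage with something like PSNR. Truncated SVD stands in for the learned codec here; it’s purely illustrative and is not how NVIDIA’s NTC actually works:

```python
# Toy demonstration that lossy compression trades size for reconstruction
# error. Truncated SVD stands in for a learned codec; this is NOT NVIDIA's
# NTC, just an illustration of measuring loss in a reconstruction.
import numpy as np

rng = np.random.default_rng(0)

# Fake 256x256 grayscale "texture": smooth gradient plus noise.
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
texture = 0.7 * x + 0.3 * y + 0.05 * rng.standard_normal((256, 256))

# "Compress" by keeping only the top-k singular components.
k = 16
U, s, Vt = np.linalg.svd(texture, full_matrices=False)
approx = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Stored values: k*(rows + cols + 1) floats vs rows*cols for the raw image.
ratio = texture.size / (k * (U.shape[0] + Vt.shape[1] + 1))
mse = np.mean((texture - approx) ** 2)
psnr = 10 * np.log10(np.ptp(texture) ** 2 / mse)  # peak signal-to-noise ratio

print(f"compression ~{ratio:.1f}x, PSNR {psnr:.1f} dB (finite, i.e. lossy)")
```

Whether any finite PSNR is “good enough” for textures is a perceptual question; codecs like this aim for error small enough that you can’t see it, not zero error.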