That’s a big reduction in VRAM usage, but how much processing power does it take to decompress the textures? It’s worthless if it tanks the frame rate. The lower-end GPUs that would benefit the most from this aren’t going to have a lot of processing power to spare.
You’ve gotta look at the magnitude of the difference here; to use a toy example:
Let’s use a memory data rate of 1 GB/s for simple maths.
Moving 8 GB of uncompressed textures would take us about 8 seconds.
The compressed stuff at <1 GB is going to be done in less than 1 second.
As long as the decompression process doesn’t take another 7 seconds to complete, it’s going to be more performant.
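Here’s that break-even check as a quick sketch in Python (same toy numbers as above, not real hardware figures):

```python
# Toy break-even check for "transfer less + decompress" vs "transfer raw".
bandwidth_gb_s = 1.0    # toy transfer rate from the example above
uncompressed_gb = 8.0   # size of the raw textures
compressed_gb = 1.0     # size after compression (upper bound, "<1 GB")

t_raw = uncompressed_gb / bandwidth_gb_s         # 8.0 s to move raw textures
t_compressed = compressed_gb / bandwidth_gb_s    # 1.0 s to move compressed ones

# Compression wins as long as decompression fits inside this budget:
budget = t_raw - t_compressed
print(f"Decompression can take up to {budget:.1f} s and still break even")  # 7.0 s
```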
Edit: typo
Data rates are way higher than that: roughly 32 times higher for PCIe Gen 4 x16, which moves about 32 GB/s.
to use a toy example […] for simple maths
I know, I just picked easy numbers for the sake of discussion. The actual data rate is not important to this particular discussion.
Thanks for the explanation!
Would you happen to have information about any loss from compression, or is that kind of thing negligible with that much time for it to unpack?
That would just be my only (uninformed) concern. I already fear we’re going too deep into an era of ‘fake’ things: fake frames, fake 4K, fake lighting through strobing to induce less blur for moving objects (that monitor test was sick, but I also fear eye exhaustion will be a thing). My sibling has a card capable of utilizing the new frame gen, and that doesn’t look as bad, but it’s still not visually equal to raw frames at the same framerate in terms of clarity for me.
No idea on the loss side of things, tbh, though given it’s AI-based, I’m assuming it can’t be truly lossless.
It’s hardware accelerated.
Processors mostly sit idle waiting for memory.
I really hope the competition steps up their game soon… this monopoly is cancer, and Nvidia gets better at tech and shittier as a company by the second. At this point only DLSS 5 seems to scratch their invincibility cloak. Jensen needs to be brought down to earth.
Problem is that even when the competition is ahead, people continue to buy Nvidia.
Texture compression is nothing new, and back in the day ATI was ahead of Nvidia on this.
Not really. They went back and forth, though. I’d argue Nvidia has generally had more efficient and aggressive memory compression (not just textures!)
You obviously aren’t aware of the sales statistics. AMD may win a tiny bit of market share when they’re in the lead against Nvidia, but even when AMD clearly had the better cards and value, Nvidia maintained the bigger market share.
His new jackets provide all the deflection capabilities.
Yet another AI enshittification on top of visual artists’ hard work.
“NTC drastically reduces VRAM usage by emulating textures, allowing for either much lower VRAM consumption or significantly enhanced material appearance, depending on the game developer’s goals.”
We are just emulating textures, trust me bro.
This has nothing to do with artistic creation.
Textures are created by visual artists; this technology shits all over that work.
This has nothing to do with creating textures. It’s a texture compression algorithm.
I guess you didn’t bother to read the article.
“This process is so refined that the output can either provide a more realistic version on top of the base texture layer that the game uses or maintain the same texture appearance for significant VRAM savings.”
What is a more “realistic version on top of the base texture layer” if not AI enshittification? It either compresses the textures (and when has that ever maintained the original quality?), or it layers an AI layer of “better” textures on top of the original ones. Shit vs shit.
If you look at the NVIDIA presentation (linked in the article), they explain how they quantify realism at around minute 16. Also, audio and video compression doesn’t aim for original quality; it aims for lower quality that is visually or acoustically “the same” from the human perspective, not the computer’s. Just because it’s AI doesn’t mean it’s garbage.
So there’s no original quality in any option, only worse or AI “enhanced”: as I said, enshittified. They’re trying to push the “enhanced” shit by lowering the quality of the original vision.
The original quality will be there when you turn this feature off. It’s safe to assume this will just be part of a future DLSS version.
Why not?
So, Nvidia, a GPU maker, invents a technology that reduces memory needs in exchange for more GPU compute. Bonus points for AI.
Nvidia has always had strong real time hardware accelerated memory compression.
Compute is basically a free lunch compared to memory bottlenecks. And individual textures will probably fit in low-level caches, which allows the compute to flex.
Doesn’t that end up using more energy? I thought compute was more power-intensive than accessing memory.
Compute units are significant, but so are the caches and i/o.
Real-world datasets tend to have a lot of sparsity.
One of the biggest problems is called a page fault, which is basically when the app needs to go out to storage to find the data it needs to continue execution. This leaves the processor waiting, which isn’t free.
Generally, I’d say they go hand in hand about 50/50 plus or minus 10%.
One benchmark might fit entirely in L1 cache and really stress the cores, but in most benchmarks you’re hitting all the levels of cache, RAM, I/O, etc… which is a hell of a lot of silicon compared to the little bit of processor in that chip. GPUs, again, are often just massive combinations of memory throughput and compute. So it can be hard to really separate the two.
And legit, most data on the Internet is compressed these days. So that’s again compute used to save data in flight. It’s a never-ending tradeoff.
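To make that tradeoff concrete, here’s a minimal sketch using Python’s zlib. It’s purely illustrative (NTC itself is a different, GPU-side neural scheme), but it shows the compute-for-bytes exchange:

```python
# Spend CPU cycles (zlib) to shrink a payload "in flight".
# Illustrative only, not a benchmark; the payload is made-up repetitive data.
import time
import zlib

payload = b"some fairly repetitive texture-ish data " * 100_000  # ~4 MB

t0 = time.perf_counter()
packed = zlib.compress(payload, 6)
t_compress = time.perf_counter() - t0

t0 = time.perf_counter()
unpacked = zlib.decompress(packed)
t_decompress = time.perf_counter() - t0

assert unpacked == payload  # zlib is lossless, unlike the neural approach above
print(f"{len(payload)} -> {len(packed)} bytes "
      f"(compress {t_compress * 1e3:.1f} ms, decompress {t_decompress * 1e3:.1f} ms)")
```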
Nvidia will do anything but bump the VRAM on consumer models. CG artists be damned.
Ah great, just what we need </s>, “AI” slop on the textures too.
– Frost
This is more like MP3 or H.264: a compression algorithm that strives to be perceptually lossless at mid to high quality settings.
For much of mainstream music, on the vast majority of speakers/headphones, it’s difficult to tell a 256 kbps MP3 from a FLAC.
This is sort of similar. Just read up on it.
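If you want a rough number for “perceptually the same”, the usual starting point is PSNR between the original and the compressed image. A minimal sketch (the file names are hypothetical, and PSNR is only a crude proxy; real codecs are tuned against perceptual metrics):

```python
# Crude quality check: PSNR between an original texture and a lossy copy.
# Assumes both files exist, have the same dimensions, and 8-bit channels.
import numpy as np
from PIL import Image

original = np.asarray(Image.open("texture_original.png"), dtype=np.float64)
lossy = np.asarray(Image.open("texture_compressed.png"), dtype=np.float64)

mse = np.mean((original - lossy) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)  # blows up if the images are identical
print(f"PSNR: {psnr:.1f} dB")  # very roughly, above ~40 dB is hard to spot by eye
```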
Finally! An excuse for AAA studios to 10-fold game size!
That is not how VRAM works. Also, the compression could probably be applied to the files on disk too, so you would use less storage and less VRAM.
So finally my 4 GB VRAM GPU is not gonna be useless?