Double check that your NFS timeouts to your NAS are actually an NFS problem. They might be a dirty page writeback problem.

I'm really sorry in advance for the wall of text here. I debated trimming this down, but honestly the whole reason I spent months stuck on this is that nothing about it was obvious. The symptoms point you at NFS, your mount options, your network, everything except what's actually wrong. And because the defaults that cause it ship with basically every Linux distro, I'd bet money there are a ton of people out there with the same problem right now just blaming their NAS or Jellyfin or whatever. For all I know this is common knowledge and I'm just the last person to figure it out, but on the off chance somebody else is out there googling the same NFS timeout errors I was, here's the full story. (TL;DR below.)

I've been chasing NFS issues on my Proxmox cluster for months now, and I finally found the actual cause, and it wasn't anything I'd seen anyone talk about online. Figured I'd write it up because I guarantee other people are hitting this exact same wall.

The setup: half a dozen VMs on Proxmox, all mounting a Synology NAS over NFS. Jellyfin, Audiobookshelf, Sonarr, Radarr, the usual self-hosted media stack. Things would work fine for a while and then randomly go sideways. Jellyfin stops mid-playback. Audiobookshelf loses track of where you were. Sonarr tries to import a downloaded episode and the entire container locks up. dmesg fills with nfs: server 192.168.1.50 not responding, timed out and you're rebooting things again.

The part that kept me going in circles for so long is that it was never consistent. An audiobook would stream for hours without a hiccup, but then Sonarr would try to move a 4GB episode file and the whole mount would go down. I could ls the mount and browse around just fine even while Sonarr was hung. Small file operations worked. Large writes didn't. But not always: sometimes a big import would go through without a problem, and I'd convince myself whatever I'd just changed in my mount options had fixed it.

I went through all the usual advice. Switched from NFSv4 to NFSv3, which I was especially convinced was the fix because the timing lined up with when I'd been experimenting with v4. It wasn't. I toggled nolock, tuned rsize and wsize down from 128K to 32K, tried soft vs hard mounts, checked the Synology's HDD hibernation settings, disabled TCP offloading on the virtio NIC. Nothing actually fixed it. Every time I thought I had it, the next import over the threshold would fail and I would scream. (An example of the kind of mount entry I was iterating on is below.)
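
For reference, the fstab entries I kept tweaking looked roughly like this. The export path and mount point here are just examples, not my exact setup; the point is that no combination of these options made any difference:

# One of many variations I tried - none of these option combos fixed anything
192.168.1.50:/volume1/media  /mnt/media  nfs  vers=3,nolock,hard,rsize=32768,wsize=32768  0  0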

Then at one point I gave a couple of the VMs more RAM, thinking the media workloads could use the headroom. Everything got worse after that. Like, measurably worse. I didn't connect the two at the time.

What finally cracked it was running a dd test to write a 2GB file to the NFS mount and actually watching the numbers. With the 32K buffer mount options, the write reported 2.1 GB/s. On a gigabit link. Obviously that data is not going to the NAS. The kernel was eating the entire write into the VM's page cache, saying "yep, done!", and then trying to flush 2+ GB of dirty pages to the Synology all at once. The NAS gets hit with a wall of data it can't process fast enough, NFS RPC calls start timing out, and everything goes to hell.
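
If you want to reproduce the test, it was roughly this (the mount path is an example). The second variant forces the data to actually reach the server before dd reports a speed, which is how you see the real link throughput instead of the page cache mirage:

# Reports a silly number because the write lands in the local page cache
dd if=/dev/zero of=/mnt/media/testfile bs=1M count=2048

# conv=fdatasync makes dd wait for the data to hit the server before reporting
dd if=/dev/zero of=/mnt/media/testfile bs=1M count=2048 conv=fdatasync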

The default value for vm.dirty_ratio is 20, meaning the kernel will let dirty pages pile up to 20% of your RAM before it forces a writeback. On my 13GB VM that's 2.6GB of buffered writes. So the kernel would happily sit there absorbing data into RAM, and then try to shove 2.6 gigs down a gigabit pipe to the NAS all at once. And when I "upgraded" VMs with more RAM, I was literally raising the ceiling on how big that buffer could get. That's why things got worse. The inconsistency made sense too. A 700MB file might stay under the background flush threshold and trickle out fine. A 4GB season pack would blow past it and trigger the whole mess.
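
You can watch this happen live. Check your current thresholds, then keep an eye on the Dirty counter in /proc/meminfo while a big import runs; it climbs into the gigabytes and then falls off a cliff when the flush kicks in:

# Current thresholds (the ratios are percentages of RAM)
sysctl vm.dirty_ratio vm.dirty_background_ratio

# Watch dirty pages pile up during a large write to the NFS mount
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'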

The fix

Two sysctl values:

sysctl -w vm.dirty_bytes=67108864
sysctl -w vm.dirty_background_bytes=33554432

This caps the dirty page buffer at 64MB and starts background writeback at 32MB. Instead of hoarding gigabytes and flushing all at once, the kernel now pushes data out to the NAS continuously in small batches. Make it persistent:

# For distros using /etc/sysctl.d/ (Debian 12+, Ubuntu, etc.)
echo -e 'vm.dirty_bytes=67108864\nvm.dirty_background_bytes=33554432' > /etc/sysctl.d/99-nfs-dirty-pages.conf
sysctl -p /etc/sysctl.d/99-nfs-dirty-pages.conf

# For distros using /etc/sysctl.conf
echo 'vm.dirty_bytes=67108864' >> /etc/sysctl.conf
echo 'vm.dirty_background_bytes=33554432' >> /etc/sysctl.conf
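
One gotcha: vm.dirty_bytes and vm.dirty_ratio are mutually exclusive, so setting the _bytes values zeroes out the ratios. That's expected. You can confirm the new caps took effect with:

# The _bytes values should show the new caps; the *_ratio values will read 0
sysctl vm.dirty_bytes vm.dirty_background_bytes vm.dirty_ratio vm.dirty_background_ratio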

Before: 2GB dd writes at 101 MB/s, dies at the 2GB mark with NFS timeouts and I/O errors. After: same test, steady 11.4 MB/s start to finish, zero NFS timeouts, completes cleanly. Yeah, the throughput number is lower, but I'll take a transfer that actually finishes over one that crashes every time.

I applied this across all six of my VMs that mount the NAS and the whole fleet has been stable since. They'd all been independently building up multi-gigabyte write backlogs and dumping them onto the Synology simultaneously. I was basically DDoSing my own NAS from six directions every time anything tried to write a big file.
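
If you have SSH access to everything, pushing the setting out to the fleet is a quick loop. The hostnames here are placeholders, obviously; substitute your own VMs:

# Example hostnames - swap in your own
for host in jellyfin abs sonarr radarr; do
  ssh root@$host 'printf "vm.dirty_bytes=67108864\nvm.dirty_background_bytes=33554432\n" > /etc/sysctl.d/99-nfs-dirty-pages.conf && sysctl -p /etc/sysctl.d/99-nfs-dirty-pages.conf'
done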

Then I checked the Proxmox host itself. 128GB of RAM. Four NFS mounts to the same Synology, including the one Proxmox writes VM backups to. All hard mounts with the default dirty ratio. That's a 25.6GB dirty page ceiling on the hypervisor. Every scheduled backup was potentially building up a 25-gigabyte write buffer and then hosing the NAS with it in one shot. And because the mounts were hard, if the Synology choked during the flush, the hypervisor itself would hang, not just a VM. I don't even want to think about how many weird backup failures and unexplained freezes this was behind.

Since applying the fix I've also noticed that Jellyfin library scans are completing reliably now. They used to hang constantly and I'd just accepted that as normal Jellyfin-over-NFS jank. The scans were generating thumbnails and writing metadata, building up dirty pages, and triggering the same flush that would take down the mount mid-scan. Audiobookshelf was doing the same thing. It would scan libraries and randomly lose connection to the mounted paths. That one was harder to pin down because audiobook files and cover art are small enough that the writes wouldn't always push past the threshold on their own. But if another VM had already half-filled the NAS's tolerance with its own flush, Audiobookshelf tipping it over would be enough. Same underlying bug in every case, and I spent months blaming three different applications for it.

If you're running a media stack on VMs with NFS mounts to a NAS and you've been tearing your hair out over random timeouts, check your vm.dirty_ratio and do the math against your RAM. Bet you it's higher than you think.
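
If you want the math done for you, this rough one-liner multiplies MemTotal by your current dirty_ratio (it prints 0 if you've already switched to dirty_bytes):

# Approximate dirty page ceiling in MB implied by the current dirty_ratio
awk -v ratio=$(sysctl -n vm.dirty_ratio) '/MemTotal/ {printf "%.0f MB\n", $2 * ratio / 100 / 1024}' /proc/meminfo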

TL;DR: If your NFS mounts to a NAS randomly time out during large writes, your VMs are probably buffering gigabytes of dirty pages in RAM and then flushing them all at once, overwhelming the NAS. Symptoms in my case were Jellyfin stopping mid-playback and hanging during library scans, Audiobookshelf losing connection to mounted paths and forgetting playback position, and Sonarr/Radarr locking up completely when trying to import episodes. Set vm.dirty_bytes=67108864 and vm.dirty_background_bytes=33554432 on every VM (and the hypervisor) to cap the buffer at 64MB and force continuous small writebacks instead.

  • happy_wheels@lemmy.blahaj.zone · 6 hours ago

    Based on what you wrote (and the fact that I'm massively sleep-deprived), it all makes sense. The issue you describe and the fix you applied are akin to what we see in the database world, where users complain about queries being slow or unresponsive after trying to force-kill them, only for us to find out that they submitted a single COMMIT after a whole 10-million-record transaction. The DB can handle that, but rolling it back takes a significant amount of time compared to breaking the COMMITs up and submitting them more frequently. Basically, chunking the data into more manageable pieces so you don't saturate the DB threads, not to mention the underlying redo and transaction log files. So I hope this was truly a long-term fix rather than a short-term one. Either way, great write-up. (Also, you may want to invest in some 2.5GbE networking later; not that 1GbE isn't enough, but more pipeline is always nice, although I don't know how much upgradeability your Synology has in that department, so YMMV.)