I recently noticed that htop displays a much lower ‘memory in use’ number than free -h, top, or fastfetch on my Ubuntu 25.04 server.
I am using ZFS on this server and I’ve read that ZFS will use a lot of RAM. I also read a forum comment saying that htop doesn’t show caching done by the kernel, but I’m not sure how to confirm that ZFS is what’s causing the discrepancy.
I’m also running a bunch of docker containers and am concerned about stability, since I don’t know which number I should be looking at. Depending on the tool, I either have ~22GB, ~4GB, or ~1GB of usable memory left. Is htop the better metric to use when my concern is available memory for new docker containers, or are the other tools better?
Server Memory Usage:
- htop = 8.35G / 30.6G
- free -h =
                total        used        free      shared  buff/cache   available
  Mem:           30Gi        26Gi       1.3Gi       730Mi       4.2Gi       4.0Gi
- top = MiB Mem : 31317.8 total, 1241.8 free, 27297.2 used, 4355.9 buff/cache
- fastfetch = 26.54GiB / 30.6GiB
EDIT:
tldr: all the tools are showing correct numbers. Htop seems to be ignoring ZFS cache. For the purposes of ensuring there is enough RAM for more docker containers in the future, htop seems to be the tool that shows the most useful number with my setup.
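If you want to confirm this on your own box, the ARC's current size is exposed in /proc/spl/kstat/zfs/arcstats. Something like this one-liner (assuming ZFS on Linux, which provides that file) should print it in GiB:

awk '$1 == "size" {printf "ZFS ARC size: %.2f GiB\n", $3/1073741824}' /proc/spl/kstat/zfs/arcstats

On my setup, that number plus htop's "used" roughly adds up to what free reports as used.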


You actually WANT low free memory, provided that most of it is used by cache.
Free memory is wasted memory when it could be caching things for faster access.
That’s how Linux memory management works, and it makes sense if you reflect on it: better to cache that page or that file that’s used often, since free memory is just sitting idle. Cache can be freed and the memory reclaimed in a fraction of a millisecond when needed.
So don’t worry about it too much unless your swap usage is high.
Also consider that the Linux kernel will use your swap a bit even if you have lots of cache, because the kernel knows better than you how to improve your performance. Swapping out never-used pages is better than evicting cached items.
Again, don’t overthink memory on Linux. The best alarm is when swapping is happening constantly; then yes, you need more RAM (or to kill that broken process that keeps hogging memory due to a bug).
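If you want a concrete thing to watch, vmstat is usually enough. The si/so columns show swap-in/swap-out activity per second, and it's sustained nonzero values there, not a low "free" number, that mean you're actually short on RAM:

vmstat 1    # print memory/swap stats every second; watch the si (swap in) and so (swap out) columns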
This is why I’d like to know which tool shows the most useful number. If I only have 4GB out of 30GB left, is that 26GB difference mostly important processes or mostly reclaimable cache? Like, is htop borked and not showing me useful info, or is it saying that only 8GB of the 26GB used is important, can’t-be-freed stuff?
The most useful is probably cat /proc/meminfo. The first couple of lines tell you everything you need to know. MemTotal is the total usable memory. MemFree is how much memory is not used by anything. Cached is memory used by various caches, e.g. ZFS. This memory can be reallocated. MemAvailable is how much memory can be allocated, i.e. roughly MemFree + Cached.
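If you only care about those few fields, something like this should work (plain awk over /proc/meminfo, which reports its values in kB):

awk '/^MemTotal|^MemFree|^MemAvailable|^Cached:/ {printf "%-14s %7.2f GiB\n", $1, $2/1048576}' /proc/meminfo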
You’re an angel. I don’t know what the fuck htop is doing showing 8GB in use.

Based on another user’s comment in this thread, htop is showing a misleading number. For anyone else who comes across this, this is what I have. It makes the situation seem a little more grim: I have ~2GB free, ~28GB in use, and of that ~28GB only ~3GB is cache that can be reclaimed. For reference, I’m using ZFS and roughly 27 docker containers. It doesn’t seem like there is much room for future services to self-host.

MemTotal:     30.5838 GB
MemFree:      1.85291 GB
MemAvailable: 4.63831 GB
Buffers:      0.00760269 GB
Cached:       3.05407 GB

You should also look at which processes use the largest amount of memory. ZFS is weird and might allocate its cache memory as “used” instead of “cached”. See here to set its limits: https://forum.proxmox.com/threads/limit-zfs-memory.140803/
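On Ubuntu, limiting it generally comes down to capping the zfs_arc_max module parameter, roughly along these lines. The 8 GiB value here is only an example and should be sized to your workload:

# persist the cap across reboots (8 GiB = 8 * 1024^3 bytes)
echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf
sudo update-initramfs -u
# apply immediately without a reboot (the ARC may take a while to shrink)
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max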
Assuming the info in this link is correct, ZFS is using ~20GB for caching, which makes htop’s ~8GB of in-use memory make sense when compared with the results from cat /proc/meminfo. This is great news.

My results after running cat /proc/spl/kstat/zfs/arcstats:

c        4   19268150979
c_min    4   1026222848
c_max    4   31765389312
size     4   19251112856
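If I’m reading those counters right (the values are in bytes), size = 19251112856 B ≈ 17.9 GiB (about 19.3 GB), c_min ≈ 1 GiB, and c_max ≈ 29.6 GiB, so the ARC is currently allowed to grow to nearly all of the 30.6 GiB of RAM. htop’s 8.35 GiB “used” plus the ~17.9 GiB ARC comes to ~26.3 GiB, which lines up with the ~26.5 GiB that fastfetch and free report as used.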