Back in the day it was nice: `apt-get update && apt-get upgrade` and you were done.
But today every tool/service has its own way of being installed and updated:
- docker:latest
- docker:v1.2.3
- custom script
- git checkout v1.2.3
- same but with custom migration commands afterwards
- custom commands change from release to release
- expects you to run the update as a specific user
- update nginx config
- updates its own default config, and the service depends on those config changes
- expects new versions of other tools
- etc.
I selfhost around 20 services like PieFed, Mastodon, PeerTube, Paperless-ngx, Immich, open-webui, Grafana, etc. And all of them have some dependencies which need to be updated too.
And nowadays you can’t really keep running on an older version, especially when it’s internet-facing.
So anyway, what are your strategies for keeping your sanity while keeping all your self-hosted services up to date?
Unattended upgrades 11 months out of the year.
Very attended apt upgrades 2 weeks out of the year.
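For the unattended part, the usual Debian/Ubuntu setup is roughly the following (a minimal sketch; package names and config paths are the Debian-family defaults and may differ on other distros):

```shell
# Install and enable unattended-upgrades (Debian/Ubuntu)
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Or write the periodic config directly: refresh package lists
# and run unattended upgrades once a day
printf '%s\n' \
  'APT::Periodic::Update-Package-Lists "1";' \
  'APT::Periodic::Unattended-Upgrade "1";' \
  | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
```

By default only security updates are applied; which origins are allowed is configured in `/etc/apt/apt.conf.d/50unattended-upgrades`.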
Arcane docker server checks for updates, notifies me when they’re available
for security relevant stuff I just get notifications of new github releases
Fine, I’ll be the low bar: `nix flake update && nixos-rebuild switch`.
Proxmox, I just use the GUI to update
I use community-scripts almost exclusively. The community-scripts cron LXC updater does the heavy lifting.
`pct enter [lxc]` followed by `update` does a bunch of work too.
For Docker, I use a couple lxcs with Dockge on it, the “update” button takes me most of the rest of the way.
Finally, I have a couple remote machines [diet-pi]. I haven’t figured out updating over tailscale yet, so I just go round semi-frequently for the `apt update && apt upgrade -y`. VMs get the `apt update && apt upgrade -y` too. I keep a bare-bones Mint VM as a virtual laptop, as I don’t have one. I’ll do what I need to do, and if I had to install software I’ll just nuke the VM and go again from the bare-bones template.
- use APT repositories when possible -> then unattended-upgrades
- for OCI images that do not provide tagged releases (looking at you searxng…), podman auto-update
- for everything else, subscribe to the releases RSS feed, read release notes when they come out, check for breaking changes and possibly interesting stuff, update the version in the ansible playbook, deploy the ansible playbook
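For reference, podman auto-update works off a container label plus a systemd timer. A minimal sketch (container name and image are just examples; newer podman versions prefer Quadlet files over `podman generate systemd`):

```shell
# Label the container so podman auto-update tracks its registry tag
podman run -d --name searxng \
  --label io.containers.autoupdate=registry \
  docker.io/searxng/searxng:latest

# Wrap it in a systemd user unit so the updater can restart it
podman generate systemd --new --name searxng \
  > ~/.config/systemd/user/container-searxng.service
systemctl --user daemon-reload

# The timer periodically checks for newer images and restarts changed units
systemctl --user enable --now podman-auto-update.timer
```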
I run most of my services in containers with Podman Quadlets. One of them is Forgejo on which I have repos for all my quadlet (systemd) files and use renovate to update the image tags. Renovate creates PRs and can also show you release notes for the image it wants you to update to.
I currently check the PRs manually as well as pulling the latest git commits on my server. But this could also be further automated to one’s liking.
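A minimal Quadlet file for this kind of setup might look like the following (service name, image, and tag are illustrative); Renovate then only needs to bump the `Image=` line in the repo:

```shell
# A Quadlet unit; systemd generates grafana.service from it on daemon-reload
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/grafana.container <<'EOF'
[Unit]
Description=Grafana

[Container]
Image=docker.io/grafana/grafana:11.1.0
PublishPort=3000:3000

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload
systemctl --user start grafana.service
```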
Wow, that sounds like a nightmare. Here’s my workflow:
`nix flake update && nixos-rebuild switch`. That gives me an atomic, rollbackable update of every service running on the machine.
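Spelled out for a flake-based system (the hostname is illustrative), including the rollback that makes the switch safe:

```shell
# Pull newer versions of all flake inputs (nixpkgs, etc.)
nix flake update

# Build and atomically switch to the new system generation
sudo nixos-rebuild switch --flake .#myhost

# If anything broke, switch back to the previous generation
sudo nixos-rebuild switch --rollback
```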
I run NixOS. Go to the flake file and update the channel version.
I do it manually: update the container version, then `docker pull` and run.
I have reduced the number of containers to ones i actually use, so it is manageable.
I use `v2` instead of `v2.1.0` Docker container tags if the provider doesn’t make too many bleeding-edge changes between updates.
I just run Watchtower in Docker. It will watch all your other Docker images and update them to the latest version automatically if you want.
It works fine, but over time I stopped thinking I need to be on the latest version all the time. It really isn’t very important.
Just a few of my services are open on the internet, mainly caddy and wireguard.
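For anyone curious, a typical Watchtower deployment is a single container with access to the Docker socket; a sketch (the interval here is one day, and the flags shown are only a subset):

```shell
# Watchtower watches all other containers on this host and
# replaces them when their image tag has a newer version
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --cleanup --interval 86400
```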
Heads up that watchtower is no longer maintained. I haven’t yet looked into forks or alternatives.
Everything I run, I deploy and manage with ansible.
When I’m building out the role/playbook for a new service, I make sure to build in any special upgrade tasks it might have and tag them. When it’s time to run infrastructure-wide updates, I can run my single upgrade playbook and pull in the upgrade tasks for everything everywhere - new packages, container images, git releases, and all the service restart steps to load them.
It’s more work at the beginning to set the role/playbook up properly, but it makes maintaining everything so much nicer (which I think is vital to keep it all fun and manageable).
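As a sketch, a tagged upgrade task inside a role might look like this, runnable across everything with a single command (role, tag, and service names are made up for illustration):

```shell
# A role's upgrade tasks, written here as a heredoc sketch
cat > roles/myservice/tasks/upgrade.yml <<'EOF'
- name: Pull the new container image
  community.docker.docker_image:
    name: "example/myservice:{{ myservice_version }}"
    source: pull
  tags: [upgrade]

- name: Restart the service to pick up the new image
  ansible.builtin.systemd:
    name: myservice
    state: restarted
  tags: [upgrade]
EOF

# Infrastructure-wide update: run only the upgrade-tagged tasks everywhere
ansible-playbook -i inventory.ini site.yml --tags upgrade
```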
+1 for ansible. There’s a module for almost everything out there.
Yeah, for some reason I didn’t think of ansible even though I use it at work regularly. Thanks for pointing it out!
Just a word of caution…
I try to upgrade one (of a similar group) manually first to check it’s not foobarred after the update, then crack on with the rest. Testing a restore is one thing, but restoring the whole system…?
Renovate + GitOps. Check out https://github.com/onedr0p/cluster-template
If you don’t like Kubernetes, you can get a similar setup with doco-CD. The only limitation is that doco-CD can’t update itself, but you can use SOPS and Renovate all the same for the other services.
That, or Komodo when using Docker. Renovate is really good: you always know which version you’re at, you can set it up to auto-merge on minor and/or patch level, it shows you the release notes, etc.
This tutorial is good: https://nickcunningh.am/blog/how-to-automate-version-updates-for-your-self-hosted-docker-containers-with-gitea-renovate-and-komodo
I guess auto merge isn’t enabled, since there’s no way to check if an update doesn’t break your deployment beforehand, am I right?
You can configure automerge per stack and also if it’s allowed on patch, minor or major upgrades.
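That per-level automerge setting is a small rule in `renovate.json`; a sketch (assuming the repo holds your compose/quadlet files, and auto-merging only minor and patch bumps of Docker images):

```shell
cat > renovate.json <<'EOF'
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ]
}
EOF
```

Major version bumps still open a PR for manual review under this rule.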
Yes, but usually when you use automerge you should have set up a CI to make sure new versions don’t break your software or deployment. How are you supposed to do that in a self-hosting environment?
Ideally, you have at least two systems, test updates in the dev system and only then allow it in prod. So no auto merge in prod in this case or somehow have it check if dev worked.
Seeing which services are usually fine to update without intervening and tuning your renovate config to it should be sufficient for homelab imho.
Given that most people are running :latest and just yolo the updates with watchtower or not automated at all, some granular control with renovate is already a big improvement.
Podman automatically updates my containers for me.
Because you point to :latest and everything is dockerized and on one machine? How does it know when it’s time to upgrade?
Yeah only for :latest containers, that’s true. It automatically runs a daily service to check whether there are newer images available. You can turn it off per container if you don’t want it.
One of the nice things about it is that I have containers running under several different users (for security reasons) so that saves me a lot of effort switching to all these users all the time.
It’s bad practice to use the latest tag.
Depends on what you want to do. For production with sensitive data, yes it is. For my ytdl and jellyfin? Perfectly fine.
Depends. There are a few things I update by hand, but as long as you have proper backups it’s generally safer to run the latest versions of things automatically if you don’t mind the possibility of breakage (which is very rare in my experience). This is in the context of self-hosting of course, not a business environment.
Kubernetes + helm charts
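The usual flow there looks roughly like this (release, chart, namespace, and version are illustrative):

```shell
# Refresh chart indexes, then upgrade (or install) a pinned chart version
helm repo update
helm upgrade --install grafana grafana/grafana \
  --namespace monitoring --version 8.5.0

# Roll back to the previous release if the new one misbehaves
helm rollback grafana
```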
I keep it simple, although reading down through the thread, there are some really nice and ingenious ways people accomplish about the same thing, which is totally awesome. I use a Watchtower fork and run it with `--run-once --cleanup`. I do this when I feel comfortable that all the early adopters have done all the beta testing for me. Thanks, early adopters. So, about once a month, I update 70 Docker containers. As far as OS updates, I usually hit those when they deploy. I’m running Ubuntu Jammy, so not a lot of breaking changes in updates. I don’t have public-facing services, and I am the only user on my network, so I don’t really have to worry too much about that aspect.