

Fire up the browser and watch the DNS logs; most likely you’ll still need to allow update checks.
Yeah, there are plenty of advantages to a full system backup, like not having to worry about whether you’ve covered every specific directory, and super easy restores since the whole bootable system is saved.
Personally I do both: a full system backup to local storage using Proxmox Backup Server, and a Restic backup of only the really important stuff to Backblaze B2.
I first decided to do a full-system backup in the hopes I could just restore it and immediately be up and running again. I’ve seen a lot of comments saying this is the wrong approach, although I haven’t seen anyone outline exactly why.
The main downside is the size of the backup, since you’re backing up the entire OS with cache files, log files, other junk, and so on. Otherwise it’s fine.
Then I started reading about backing up databases, and it seems you can’t just back up the data directory (or file, in the case of SQLite) and call it good. You need to dump them first and back up the dumps.
You can back up the data directory; that generally works fine for self-hosted stuff because we don’t have tons of users writing to the database constantly.
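If you do want the safer dump-first approach, here’s a rough sketch; the container name, database, user, and paths are all placeholder examples, not from this thread:

```shell
#!/bin/sh
# Sketch: dump databases to plain files before the file-level backup runs,
# so the backup captures a consistent snapshot instead of in-flight data files.
# "postgres", "myuser", "mydb", and the paths below are example values.

dump_databases() {
  dump_dir="$1"
  mkdir -p "$dump_dir"

  # Postgres in a container: pg_dump writes a consistent logical dump.
  docker exec postgres pg_dump -U myuser mydb > "$dump_dir/mydb.sql" \
    || echo "postgres dump skipped (container not reachable here)"

  # SQLite: the .backup command copies the file safely even while the app has it open.
  sqlite3 /path/to/app.db ".backup '$dump_dir/app.db'" \
    || echo "sqlite dump skipped"
}

# Example: dump_databases "$HOME/backups/dumps" right before kicking off restic.
```

Point your backup tool at the dump directory and you get consistent restores even if the app was mid-write when the backup ran.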
If you back up /var/lib/docker/volumes, your docker-compose.yaml files for each service, and any other bind-mount directories you use in the compose files, then restoring is as easy as pulling all the data back to the new system and running docker compose up -d on each service.
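That last step can be scripted; a hedged sketch, assuming a ~/services/&lt;name&gt;/docker-compose.yaml layout (which is my assumption, not something from the thread):

```shell
#!/bin/sh
# Sketch of the restore step: once the volume data, bind mounts, and compose
# files are back in place, bring every service up with one loop.

start_all() {
  services_dir="$1"
  for dir in "$services_dir"/*/; do
    # Only act on directories that actually hold a compose file.
    [ -f "${dir}docker-compose.yaml" ] || continue
    echo "starting $(basename "$dir")"
    (cd "$dir" && docker compose up -d) \
      || echo "  (docker not available or compose failed for $dir)"
  done
}

# Example: start_all "$HOME/services"
```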
I highly recommend Backrest, which uses Restic for backups; it’s very easy to configure and supports Healthchecks integration for easy notifications if backups fail for some reason.
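That notification pattern is roughly what Backrest wires up for you; a hedged manual sketch (the repo path, source path, and the hc-ping UUID are placeholders):

```shell
#!/bin/sh
# Sketch: run a restic backup, then ping Healthchecks so a failed or missed
# run raises an alert. Healthchecks treats a hit on the plain URL as success
# and a hit on the /fail suffix as a failure signal.

run_backup() {
  if restic -r /srv/restic-repo backup /srv/important-data; then
    curl -fsS -m 10 "https://hc-ping.com/your-check-uuid"
  else
    curl -fsS -m 10 "https://hc-ping.com/your-check-uuid/fail"
  fi
}
```

Drop a call to run_backup into cron and Healthchecks will also alert you if the job stops running entirely, since the success ping goes missing.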
If you exclusively use cloudflare tunnels you don’t need a proxy on your end unless you want to do split-horizon DNS for local access.
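The split-horizon part is just a local DNS override; for example with dnsmasq (the hostname and LAN IP are examples):

```
# dnsmasq: answer this name with the LAN address of your reverse proxy,
# so clients on the local network skip the tunnel entirely.
address=/myapp.example.com/192.168.1.10
```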
But otherwise, nginx, caddy, traefik, npm, etc… all work fine with Cloudflare. Personally I’m using Traefik and Caddy on my setups right now.
Also, a bit off-topic, but is Cloudflare’s proxy really needed? I heard it’s insecure to self host sites without Cloudflare because you’re exposing your ip address and leaving yourself vulnerable but is it really bad to self host without Cloudflare?
Up to you; Cloudflare is a recent thing, and hosting was done just fine without it before it came along. Personally I don’t use Cloudflare’s proxy very much, I mostly just use it for DNS management.
I use backblaze b2 and https://github.com/garethgeorge/backrest
Backrest is by far the best restic manager I’ve found, easy webUI, with built in support for healthchecks.
Backrest (restic) is what I use after constant duplicati problems. Kopia is also a good option.
Duplicati is ok with tiny backup sets, but give it multiple TB of data and it chokes and constantly has errors requiring expensive rebuilds.
A lot of companies use Google mail anyways so your emails will be scanned regardless.
A series are great, much cheaper especially used, and have better materials (like plastic instead of glass backs), while having essentially the same hardware performance.
Crowdsec has default scenarios and lists that might block a lot of it, and you can pretty easily make a custom scenario to block IPs that cause large spikes of traffic to your applications if needed.
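As a rough illustration only (not a tested scenario; the name, filter, and thresholds are all made up), a custom CrowdSec scenario for traffic spikes looks something like:

```yaml
# Leaky-bucket scenario: trip when a single IP sends a burst of HTTP requests.
type: leaky
name: me/http-spike
description: "Ban IPs causing large spikes of traffic"
filter: "evt.Meta.service == 'http'"
groupby: evt.Meta.source_ip
capacity: 100        # events the bucket holds before it overflows
leakspeed: 1s        # bucket drains one event per second
blackhole: 5m
labels:
  service: http
  remediation: true
```

The bouncer then bans whatever overflows the bucket, so tune capacity and leakspeed to your real traffic before trusting it.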
Basically everything. Self hosting doesn’t rely on public access.
Check the logs for the containers and see what the issue is first. Then go from there.
100% reliable so far, I’ve bought about 10 of them I think over the past 8 years or so. Some are in RAID 1 arrays, and some just on their own for backups and such.
The main thing is to buy from a local shop or an online store like serverpartdeals.com, not from Amazon or other online marketplaces.
All my stuff is backed up several ways every night (which should be done no matter what drives are used) so it’s not that big of a deal if they failed suddenly.
That won’t migrate watch history
Ease of use mostly, one click to restore everything including the OS is nice. Can also easily move them to other hosts for HA or maintenance.
Not everything runs in docker too, so it’s extra useful for those VMs.
How do you handle backups? Install restic or whatever in every container and set it up? What about updates for the OS and docker images, watchtower on them I imagine?
It sounds like a ton of admin overhead for no real benefit to me.
A couple of posts down explains it: Docker completely steamrolls networking when you install it. https://forum.proxmox.com/threads/running-docker-on-the-proxmox-host-not-in-vm-ct.147580/
The other reason is that if it’s on the host, you can’t back it up with Proxmox Backup Server along with the rest of the VMs/CTs.
Regardless of VM or LXC, I would only install Docker once. There’s generally no need to create multiple Docker VMs/LXCs on the same host, unless you have a specific reason, like isolating outside traffic by creating a Docker setup for only public services.
Backups are the same with VM or LXC on Proxmox.
The main advantages of LXC that I can think of:
Docker’s ‘take-over-the-system’ style of network management will interfere with Proxmox networking.
Ahh gotcha, selective sync or virtual file system are the common terms for that. Nextcloud supports it, Owncloud does too and I think Owncloud Infinite Scale does but it’s not 100% clear.
When you say Owncloud couldn’t keep files local without uploading, was that with VFS enabled on the client?
Mailbox.org is great, their webmail setup is good and has contacts and calendar and all the things you would expect to have. With Cal/CardDAV and ActiveSync support too.