

Tbh arch is very resilient and easy to fix
No, knowing literally “systemctl enable --now” and “journalctl -ru” is not even learning. The level of knowledge of the OS needed for running a native package vs a container is exactly the same.
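For reference, that really is the whole ceremony (nginx here is just an example unit name):

systemctl enable --now nginx
journalctl -ru nginx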
The obvious question: Do you want to access your server only from within your network or also from anywhere else?
Do a curl http://mydomain.tld/ -i
with your server off, or while you’re outside your network.
Your registrar probably offers a service that rewrites HTTP accesses to HTTPS automatically. curl -i shows the response headers, which will probably confirm that you’re being redirected without even connecting to anything in your network.
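If it’s the registrar doing it, you’d see something like this even with your server completely off (illustrative output, not captured from a real host):

HTTP/1.1 301 Moved Permanently
Location: https://mydomain.tld/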
Love how debugging any “just works” app is really just debugging the overly complicated and annoying container or container engine.
Arch packages. All services have systemd integration.
/var/run/postgresql is my eternal friend
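(That’s PostgreSQL’s unix socket directory: point psql at it and you connect without any TCP involved, typically with peer authentication mapping your OS user. User and DB names here are placeholders:)

psql -h /var/run/postgresql -U myuser mydb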
Exactly. Therefore, docker isn’t useful to me for those purposes, as Arch packages (or similar) fulfill my needs more easily.
One main server, with backup servers being very easy to get up and running, either by full-restoring the backup, or installing and restoring specific services. As everything’s backed up to a Hetzner Storage Box, I can always restore it (if I have my USB sticks with the keyfiles).
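(For illustration only, and explicitly not my exact setup: with something like restic over the Storage Box’s SFTP access, the whole off-site backup is roughly this, with uXXXXXX as a placeholder account name:)

restic -r sftp:uXXXXXX@uXXXXXX.your-storagebox.de:backups init
restic -r sftp:uXXXXXX@uXXXXXX.your-storagebox.de:backups backup /srv /etc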
I don’t really see the need for multiple running hosts, apart from:
That I’ve yet to see a containerization engine that actually makes things easier, especially once a service fails or needs any amount of customization.

I have two main services in docker, piped and webodm, both because I don’t have the time (read: am too lazy) to write a PKGBUILD. Yet docker steals more time than maintaining a PKGBUILD would: random crashes (undebuggable, as the docker command just hangs when I try to start one specific container), containers that don’t start properly after being updated/restarted by watchtower, and debugging any problem with piped is a chore, because logging in docker is the most random thing imaginable. With systemd, logs are in journalctl, or in /var/log if explicitly specified or obviously useful (e.g. in multi-host nginx setups). With docker, they could be in a logfile on the host, in one on the guest, or on stdout. Or nowhere, because why log at all when everything “just works”? (Yes, that’s a problem created by container maintainers, but one you can’t escape while using docker. Or rather, in the time it costs you, you could more easily do a proper(!) bare-metal install.)

Also, if you want to use unix sockets to manage permissions more tightly and avoid roleplaying a DHCP and DNS server for ports (i.e. remembering which ports are used by which of the 25 or so services), you’ll either need to customize the container, or just use/write a PKGBUILD or similar for bare-metal stuff (see the sketch below).
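A minimal sketch of the socket point (service name and paths are made up): the service listens on a socket file, so access control is plain file ownership and permissions, and there’s no port number to remember.

ls -l /run/myservice/http.sock
# srw-rw---- 1 myservice www-data … /run/myservice/http.sock
curl --unix-socket /run/myservice/http.sock http://localhost/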
Also, I need to host a Python 2.7 Django 2.x-or-so webapp (yes, I’m rewriting it), which I do in a Debian 13 VM with Debian 9 and Debian 9 LTS repos, as that most closely resembles the original environment. It’s the largest security risk in my setup while also being a public website, so into qemu it goes.
And, as I mentioned, either stuff is officially packaged by Arch, is in the AUR or I put it into the AUR.
Considering I have a full backup, all services are Arch packages, and all important data is on its own drive, I’m not concerned about anything.
Well, what did you do exactly?
Not necessarily. With a normal consumer router, most likely, but technically there’s no spec or law requiring a firewall.
And I think it’s obvious that I meant that, in contrast to stuff like DS-Lite for IPv4, your home modem/router is accessible by IPv6.
It can’t, as it’s only accessible in your local network, which both your and your sister’s computers are in.
You need a public IPv4 and/or IPv6 address. Often, your IPv6 address is already public, but IPv6 isn’t supported by every device and ISP.
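A quick way to check whether you already have a global IPv6 address (ifconfig.co is just one example of many such services):

ip -6 addr show scope global
curl -6 https://ifconfig.co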
For .tgz files (compressed tar archives), you’ll need to decompress all the files into a single folder before importing. Then use the command immich-go upload from-google-photos /path/to/your/files.
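Roughly like this, assuming the parts are named takeout-*.tgz (adjust the glob to your filenames, and add your usual server/API-key flags to immich-go):

mkdir takeout
for f in takeout-*.tgz; do tar -xzf "$f" -C takeout/; done
immich-go upload from-google-photos takeout/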
Lucky. I need to use an external service for 12€/month with 100Mbps and 1TB/month limits, per VPN.
cut --help
and man cut
can teach you more than anyone here.
But: “|” takes the output of the former command, and uses it as input for the latter. So it’s like copying the output of “echo […]”, executing “cut -d ‘/’ -f 6”, and pasting it into that. Then copy the output of “cut”, execute “base64 -d” and paste it there. Except the pipe (“|”) automates that on one line.
And yes, cut takes a string (a list of characters, for example the URL), splits it at what -d specifies (e.g. cut -d ‘/’ splits at “/”), so it now internally has a list of strings: “https:”, “”, “link.sfchronicle.com”, “external”, “41488169.38548”, “aHR0cHM6Ly93d3cuaG90ZG9nYmlsbHMuY29tL2hhbWJ1cmdlci1tb2xkcy9idXJnZXItZG9nLW1vbGQ_c2lkPTY4MTNkMTljYzM0ZWJjZTE4NDA1ZGVjYSZzcz1QJnN0X3JpZD1udWxsJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV90ZXJtPWJyaWVmaW5nJnV0bV9jYW1wYWlnbj1zZmNfYml0ZWN1cmlvdXM” and “6813d19cc34ebce18405decaB7ef84e41”. From that list, it outputs whatever -f specifies (e.g. -f 6 means the 6th of those strings, -f 2-3 means the 2nd through 3rd, -f -5 means everything up to and including the fifth, and -f 3- means everything from the third on).
But all of that is explained better in the manpage (man cut). And the best way to learn is to just fuck around. So echo "t es t str i n g, 1" | cut ...
and try various arguments.
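A few concrete ones to start with (the # parts are the outputs):

echo "a/b/c/d" | cut -d '/' -f 2     # b
echo "a/b/c/d" | cut -d '/' -f 2-3   # b/c
echo "a/b/c/d" | cut -d '/' -f -2    # a/b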
That’s called wakeonlan <MAC address>
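The target’s NIC needs Wake-on-LAN enabled first, though; on Linux you can check and enable it with ethtool (eth0 being a placeholder for your interface):

ethtool eth0 | grep Wake-on
ethtool -s eth0 wol g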
Tbh I had no issues with synapse.
The problems that persist: very rare decryption issues (mostly right after switching clients; I rarely encounter them despite being in encrypted chats with 150+ users, so they’re not an issue for me), slow image loading (a bit annoying, but fine if you multitask anyway), and clients all having different feature sets (some of which you can hackily make work in others).
Tbh, if you’re using the same DB as for your passwords, you’ve successfully downgraded to 1FA. Except maybe if you use a separate hardware key/YubiKey as the secret bearer or something.