But OS X, macOS, and at least one Linux distro are/were UNIX certified.
IIRC Torvalds uses Fedora.
(Debian for me.)
Remote backup server would be my suggestion.
Configure it with a VPN to talk to your home network and set it up at a trusted friend's or family member's place.
I do this with a Raspberry Pi and an external HDD that takes daily/weekly/monthly snapshots, with daily rsync. Works nicely for me.
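For anyone wanting to copy the idea, here's a minimal sketch of that kind of snapshot job, assuming rsync over SSH from the Pi to a home server; the hostname and paths are placeholders, not my actual setup:

```sh
#!/bin/sh
# Daily snapshot on the backup Pi: pull from home over the VPN link,
# hard-linking unchanged files against the previous snapshot to save space.
SRC="homeserver:/srv/data/"          # placeholder host, reachable over the VPN
DEST="/mnt/backup/snapshots"
TODAY="$DEST/$(date +%F)"

rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$TODAY"
ln -sfn "$TODAY" "$DEST/latest"      # "latest" always points at the newest snapshot
```

Weekly/monthly retention is then just a matter of pruning old snapshot directories from a second cron job.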
I’m guessing it’s because the developers either have a different speciality that they focus on, are employed to support specific hardware, or both.
Duh, just read it back from /dev/random
You will recover the data, you just need to wait long enough.
At 28 years old, it’s safe to say Leo doesn’t use KDE.
Happy birthday!
That’s how I started using Linux — big book with CD, I think it was “RedHat Linux Secrets 5.4” or something. 2.0 or 2.2 kernel.
Honestly, it was fantastic. And almost all of it is still relevant today. (Some of the stuff on XFree86 and the CHAP/PAP dial-up configuration, not so much.)
But it gave a really solid (IMHO) intro to a Linux/*NIX system, a solid overview of coreutils, etc. And while LILO has long since been replaced, and AFAIK /sys didn't exist at the time, it formed a good foundation.
I’ll refrain from commenting on any init system changes that have taken place since then.
It’s mostly so that I can have SSL handled by nginx (and not per-service), and also for ease of hosting multiple services accessible via subdomains. So every service is its own subdomain.
Additionally, my internal network (as in, my physical LAN) does not have any port forwarding enabled — everything is over WireGuard to my VPS.
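As a rough illustration (not my exact config; the names, address, and cert paths here are placeholders), each subdomain gets a server block along these lines, with nginx terminating SSL and proxying over the tunnel:

```nginx
server {
    listen 443 ssl;
    server_name service.mydomain.com;

    # Certs live on the proxy, so the backend never touches TLS.
    ssl_certificate     /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

    location / {
        # Backend reached over the WireGuard tunnel (placeholder address/port).
        proxy_pass http://10.0.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```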
My method:
VPS with a reverse proxy to my public-facing services. This holds the SSL certs and communicates with my home network through the WireGuard link configured on my router.
Local computer with reverse proxy for all services. This also has SSL certs, and handles the same services as the VPS, so I can have local/LAN speeds. Additionally, it serves as a reverse proxy for all my private services, such as my router/switches/access point config pages, Jellyfin, etc.
No complaints, it mostly just works. I also have my router override DNS entries for my FQDN to resolve locally, so I use the same URL for accessing public services on my LAN.
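The WireGuard side is tiny. Something like the following on the VPS, with keys and addresses obviously made up; the router has the mirror-image peer entry and keeps the tunnel alive from behind NAT:

```ini
# /etc/wireguard/wg0.conf on the VPS (illustrative values only)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# Home router; AllowedIPs is the tunnel address plus the LAN subnet behind it.
PublicKey = <router-public-key>
AllowedIPs = 10.0.0.2/32, 192.168.1.0/24
```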
As a long-time Debian user, I’d have to throw my vote behind Slackware for the title of most UNIX-y, which is I guess a bit different from most Linux-y.
Debian got me through grad school, but Slack got me through undergrad on a hopelessly underpowered old ThinkPad — Volkerding is a legend, and Slack will always be dear to my heart.
This happened to me when Debian switched from SysV to systemd. I am not the only person who experienced this (e.g., https://bbs.archlinux.org/viewtopic.php?id=147478 ).
This is not to say the systemd behavior is wrong, but it essentially changed the behavior of fstab. Whether this is Debian's fault, Arch's fault (per the above link), systemd's fault, or my fault is a fair question. But this committed that most egregious of sins per our Lord and Savior Torvalds — it broke my userspace.
My favorite was when the behavior of a USB drive in /etc/fstab went from "hmm, it's not plugged in at boot, I'll let the user know" to "not plugged in? Abort! Abort! We can't boot!"
This change over previous init behavior was especially fun on headless machines…
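For anyone hitting this now, the workaround (not that it excuses the change in default behavior) is to tell systemd the entry isn't required for boot, roughly like this, with a placeholder label and mount point:

```
# nofail: don't drop to emergency mode if the drive isn't plugged in at boot.
# x-systemd.device-timeout: don't wait the default 90s for it to appear.
LABEL=usbdrive  /mnt/usb  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2
```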
Getting TLS certs will be complicated
I just use Let’s Encrypt with a wildcard domain — same certs for public and private facing domains. I’m sure this isn’t best practice, but it’s mostly just for me so I’m not too worried :)
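For the curious: wildcard issuance has to go through a DNS-01 challenge, so with certbot it's roughly the following (not claiming this exact invocation is what I run; in practice a DNS plugin for your provider replaces --manual):

```
certbot certonly --manual --preferred-challenges dns \
  -d 'mydomain.com' -d '*.mydomain.com'
```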
Yeah I don’t expose Jellyfin over the Internet, so it doesn’t matter for me, and wouldn’t work at all over WAN (unless VPN’d to home network).
Also, it’s all reverse proxied, and there’s nothing preventing having two Jellyfin hostnames, e.g., jf-local.mydomain.com and jf-public.mydomain.com.
Another fun trick you can play is to put a private IP on your public DNS records. This is useful for Jellyfin on Chromecast, for instance: the Chromecast uses 8.8.8.8 for DNS lookups (and ignores your router's settings), so it wants a fully qualified domain name. But it has no problem reaching local hosts, so long as the address comes from 8.8.8.8's record for that name.
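Concretely, the public zone just gets a record like this (placeholder name and RFC 1918 address); public resolvers will happily hand it back, and only devices on your LAN can actually reach it:

```
; public DNS zone for mydomain.com, deliberately answering with a LAN address
jellyfin.mydomain.com.  300  IN  A  192.168.1.50
```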
I have set up local DNS entries (with Pi-hole) to point to my server, but I don't know if it's possible to get certs for that, since it is not a real domain.
So long as your certs are for your fully qualified domain, there's no problem. I do this, as do many people: mydomain.com is fully qualified, but on my own network I override the DNS to the local address. Not a problem at all — the cert is tied to the hostname, not the IP.
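If it helps, the override itself is just a one-liner. Pi-hole's Local DNS settings do the same thing through the web UI, but a dnsmasq-style entry looks like this, with a placeholder domain and proxy address:

```
# /etc/dnsmasq.d/99-local-override.conf
# Answer queries for mydomain.com (and all subdomains) with the LAN reverse proxy.
address=/mydomain.com/192.168.1.10
```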
The only flaw in Corel’s logic was that as soon as you’re running Linux, you lose all desire to run WordPerfect, and develop an irresistible need to align yourself with vim or emacs…
Ended up with the Yaesu FT-710, with a G5RV Jr. in the attic. The internal tuner tunes 40m through 6m, with the exception of 15m and 17m. Very pleased with it so far! Several digital DX contacts already (Australia, Brazil, Samoa, Japan, Alaska, Hawaii — I'm at CM87/California).
The to-do list includes low-loss coax (100 ft run of who-knows-what currently); debugging intermittent Ethernet issues (the Ethernet runs parallel to the feedline — choke balun/better choking of the feedline?); and possibly a remote tuner (one step at a time…). Fun stuff!
EulerOS, a Linux distro, was certified UNIX.