

I wouldn’t recommend Synology anymore, as they’re starting to implement vendor lock-in on their drives and NAS boxes. As in, you’ll have to use their drives for the NAS to work.
Yeah, I think I got it wrong. I was thinking of /usr, but it can be set up on a separate FS as well.
I believe the only parts of the filesystem that absolutely need to be on the root partition are /etc and /var. The rest can live elsewhere with varying degrees of tinkering. To move /home, you should just need to edit your fstab (or your systemd mount units, depending on your distro).
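For example, something like this to put /home on its own partition (the device, UUID and fs type are placeholders, check yours with blkid):

```sh
# Find the UUID of the partition that will hold /home (device is a placeholder):
sudo blkid /dev/sdb1

# Add a line like this to /etc/fstab (UUID and fs type are placeholders):
# UUID=xxxx-xxxx  /home  ext4  defaults  0  2

# Copy the old /home over while nothing is using it, then mount it:
sudo mount /home
```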
I really love all the 5+ year old articles about why systemd sucks.
It’s not perfect, but it’s so much better than the plethora of different init systems Linux used to have. Also, managing SysV init scripts sucked really badly.
It’s lightweight, most of it is optional, it’s declarative, it makes managing your systems much easier and it just works.
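To give an idea of the declarative part, here’s a minimal, made-up service unit; the name and command are just examples:

```sh
# Hypothetical unit: declare what to run and when, systemd handles the rest.
sudo tee /etc/systemd/system/hello.service > /dev/null <<'EOF'
[Unit]
Description=Example hello service
After=network.target

[Service]
ExecStart=/usr/bin/python3 -m http.server 8080
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now hello.service
```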
What gets me is that none of the American apps are being banned for the exact same reasons, especially when we know for certain that they’re (at the very least) as bad as that one Chinese app. It’s quite the double standard.
That’s just a doc; kexec is also available on Fedora, Debian, CentOS, etc.
Not necessarily, you can use kexec
And on some distros you can also just reload the kernel without rebooting
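Roughly like this with kexec (the kernel and initrd paths are placeholders):

```sh
# Load the new kernel and initrd into memory, reusing the current kernel's
# command line (paths are placeholders):
sudo kexec -l /boot/vmlinuz-6.8.0 --initrd=/boot/initramfs-6.8.0.img --reuse-cmdline

# Jump into it. systemctl kexec shuts services down cleanly first;
# kexec -e would jump immediately without a clean shutdown.
sudo systemctl kexec
```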
To provision VMs yes, to configure them I think Ansible works best. But you can call Ansible from Terraform.
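The simplest way I can sketch the split (just a wrapper script, not the in-Terraform provisioner route; the output name and playbook are made up):

```sh
# Provision with Terraform, then configure the result with Ansible.
terraform apply -auto-approve

# Assumes a Terraform output called "vm_ip" and a playbook called site.yml:
VM_IP=$(terraform output -raw vm_ip)
ansible-playbook -i "${VM_IP}," site.yml   # trailing comma = ad-hoc inventory
```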
You can use udev rules and systemd mount units, or autofs (rough sketch below the links).
https://wiki.archlinux.org/title/Udev
https://wiki.archlinux.org/title/Systemd#systemd.mount_-_mounting
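A rough sketch of the systemd mount/automount route (UUID, mount point and fs type are placeholders; the unit file names have to match the mount path):

```sh
# Mount unit for /mnt/usb (file name must match the path: mnt-usb.mount):
sudo tee /etc/systemd/system/mnt-usb.mount > /dev/null <<'EOF'
[Unit]
Description=Example USB drive

[Mount]
What=/dev/disk/by-uuid/XXXX-XXXX
Where=/mnt/usb
Type=ext4
EOF

# Automount unit so it gets mounted on first access:
sudo tee /etc/systemd/system/mnt-usb.automount > /dev/null <<'EOF'
[Automount]
Where=/mnt/usb

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now mnt-usb.automount
```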
It was definitely a headache for me as well, but you need a guest agent (like vmwaretools or qemu-guest-agent), a cloud-init-ready template for the distro of your choice, a cloud-init config file (network/user/vendor) and a custom SCSI/IDE cloud-init CD-ROM mounted at boot on your VM. You can also find cloud-init logs on your VM to try and figure out what’s missing or what went wrong.
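If that’s Proxmox (which the cloud-init CD-ROM part suggests), the moving pieces look roughly like this; the VM IDs, storage name and key path are placeholders:

```sh
# Hypothetical sketch: wire cloud-init into a template VM (ID 9000).
qm set 9000 --agent enabled=1                    # expects qemu-guest-agent in the image
qm set 9000 --ide2 local-lvm:cloudinit           # the cloud-init "CD-ROM" drive
qm set 9000 --ciuser admin --sshkeys ~/.ssh/id_ed25519.pub
qm set 9000 --ipconfig0 ip=dhcp

# Clone it into a real VM and boot it. If something didn't apply, check
# /var/log/cloud-init.log and /var/log/cloud-init-output.log inside the guest.
qm clone 9000 123 --name test-vm
```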
If you buy three of them you can set up a Ceph cluster I suppose ahah. That would solve part of your issue of having storage and compute on the same node.
If you don’t need enterprise level hardware and support, I can suggest MinisForum. They released the MS01 fairly recently and I believe it fits your specs.
That’s the problem: if anyone somehow gets your root CA key, the trust your whole setup relies on is pretty much gone, and they can sign whatever they want with your CA.
It’s a lot of work to make sure it’s safe in a home setup.
I’m talking about home hosting and private keys. Not businesses with people whose full time job is to make sure everything runs fine.
I’m a nobody and I regularly have people/bots testing my router. I’m not monitoring my whole setup yet and if someone gets in I would probably not notice until it’s too late.
So hosting my own CA is a hassle and a security risk I’m not willing to put work into.
The domain certificate is public and its key is private? That’s basically it: if anyone gets access to your key, they can sign in your name and generate certificates without your knowledge. That’s my opinion and the main reason why I wouldn’t run a self-hosted CA. Maybe I’m wrong or misled, but it’s a lot of work to ensure everything is safe, only for a self-hosted setup.
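To make that concrete, this is roughly all it takes for whoever holds the CA key to mint a certificate that everything trusting your CA will accept (the names are made up):

```sh
# Create a root CA: a private key and a self-signed certificate.
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -days 3650 -out ca.crt -subj "/CN=Home Lab CA"

# Anyone holding ca.key can sign a cert for any name they like:
openssl req -new -newkey rsa:2048 -nodes -keyout host.key -out host.csr -subj "/CN=nas.home.lan"
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out host.crt
```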
For self-hosting at least, having your own CA is a pain in the ass: you have to make sure everything is safe and that nobody except you has access to your CA root key.
I’m not saying it’s not doable, but it’s definitely a lot of work and potentially a big security risk if you’re not 100% certain of what you’re doing.
That sounds like a bad idea; you would need your CA and its root key material to be completely air-gapped for it to be even remotely safe.
Yeah, it’s started to roll out on their new hardware:
https://www.theverge.com/news/652364/synology-nas-third-party-hard-drive-restrictions