

I’ve always wondered - and figured here is as good a place to ask as anywhere else - what’s the advantage of object storage vs just keeping your data on a normal filesystem?
How about XPipe?
It can even auto-configure itself by parsing out your ~/.ssh/config so you can keep everything defined there for easy CLI access but also use the GUI when desired.
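For context, the entries it picks up are just normal ~/.ssh/config Host blocks, something like this (host name, address, and key path are placeholders):

    Host homelab
        HostName 192.168.1.10
        User admin
        Port 22
        IdentityFile ~/.ssh/id_ed25519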
Just FYI - you’re going to spend far, FAR more time and effort reading release notes and manually upgrading containers than you will letting them run :latest and auto-update and fixing the occasional thing when it breaks. Like, it’s not even remotely close.
Pinning major versions makes sense for containers that need specific versions, for containers that regularly ship breaking changes requiring manual upgrade steps, or for absolutely mission-critical services that can’t handle a little downtime from a failed update a couple times a decade, but for everything else it’s a waste of time.
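In practice the difference is just the image tag in your compose file; a rough sketch (the services and images here are only examples, not a recommendation):

    services:
      # low-stakes service: let it ride :latest and auto-update
      freshrss:
        image: freshrss/freshrss:latest
      # database: pin the major, since jumping majors needs a manual migration
      db:
        image: postgres:16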
How about Dawarich?
https://github.com/Freika/dawarich
I haven’t used it myself, but I have it in the backlog of things to try out
I’ve never understood this. You guys know you can have multiple Firefox windows, right? What’s the point of tab groups when you can just group related tabs in a different window? Between multiple workspaces, multiple monitors, and multiple browser windows, I never feel the need to have more than 5-10 tabs open on any one of them at a time. More than that and I’m clearly doing something wrong and need to clean up anyway.
They likely streamed from some other Plex server in the past, and that’s why they’re getting the email. The email specifically states that if the server owner has a Plex Pass, you don’t need one.
I got the email earlier today and it couldn’t be clearer:
“As a server owner, if you elect to upgrade to a Plex Pass, anyone with access to your server can continue streaming your server content remotely as part of your subscription benefits.”
I run all of my Docker containers in a VM (well, 4 different VMs, split according to the network/firewall needs of the containers each runs). That VM is given about double the RAM needed for everything it runs, and enough cores that it never (or only very rarely) gets CPU-starved. I then allow the containers to use whatever they need, unrestricted, while monitoring the overall resource utilization of the VM itself (cAdvisor + node_exporter + Prometheus + Grafana + Alertmanager). If I find that the VM is creeping up on its load or memory limits, I’ll investigate which container is driving the usage and then either bump the VM limits up or address the service itself and modify its settings to bring usage back down.
Theoretically I could implement per-container resource limits, but I’ve never found the need. I have heard some people complain about certain containers leaking memory and creeping up over time, but I have an automated backup script which stops all containers and rsyncs their mapped volumes to an incremental backup system every night, so none of my containers run for longer than 24 hours continuously anyway.
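If you ever do want hard caps, Compose makes it a couple of lines per service; a minimal sketch (service name, image, and numbers are arbitrary):

    services:
      someapp:
        image: someapp:latest
        mem_limit: 512m    # hard memory cap for this container
        cpus: "1.5"        # at most 1.5 CPU cores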
People always say to let the system manage memory and not to interfere with it, since it’ll supposedly always make the best decisions, but personally, on my systems, whenever it starts to move significant data into swap the system starts getting laggy, jittery, and slow to respond. Every time I try to use a system that’s been sitting idle for a bit and it feels sluggish, I go check the stats and find that, sure enough, it’s decided to move some of its memory into swap, and responsiveness doesn’t pick up until I manually empty the swap so it’s operating fully out of RAM again.
So, with that in mind, I always give systems plenty of RAM to work with and set vm.swappiness=0. Whenever I forget to do that, I will inevitably find the system is running sluggishly at some point, see that a bunch of data is sitting in swap for some reason, clear it out, set vm.swappiness=0, and then it never happens again. Other people will probably recommend differently, but that’s been my experience after ~25 years of using Linux daily.
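For what it’s worth, this is all it takes to make the setting stick and to flush swap back into RAM when I catch it (swapoff needs enough free RAM to absorb whatever is currently swapped out):

    # persist across reboots
    echo "vm.swappiness=0" | sudo tee /etc/sysctl.d/99-swappiness.conf
    sudo sysctl -p /etc/sysctl.d/99-swappiness.conf

    # "empty the swap": push everything back into RAM
    sudo swapoff -a && sudo swapon -a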
I self-host Bitwarden, hidden behind my firewall and only accessible through a VPN. It’s perfect for me. If you’re going to expose your password manager to the internet, you might as well just use the official cloud version IMO since they’ll likely be better at monitoring logs than you will. But if you hide it behind a VPN, self-hosting can add an additional layer of security that you don’t get with the official cloud-hosted version.
Downtime isn’t an issue as clients will just cache the database. Unless your server goes down for days at a time you’ll never even notice, and even then it’ll only be an issue if you try to create or modify an entry while the server is down. Just make sure you make and maintain good backups. Every night I stop and rsync all containers (including Bitwarden) to a daily incremental backup server, as well as making nightly snapshots of the VM it lives in. I also periodically make encrypted exports of my Bitwarden vault which are synced to all devices - those are useful because they can be natively imported into KeePassXC, allowing you to access your password vault from any machine even if your entire infrastructure goes down. Note that even if you go with the cloud-hosted version, you should still be making these encrypted exports to protect against vault corruption, deletion, etc.
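If you want to script that export piece, the official bw CLI can do it; a rough sketch (the server URL, paths, and the choice of a password-protected export are my assumptions, not necessarily how anyone else does it):

    # point the CLI at your self-hosted instance, then unlock a session
    bw config server https://vault.example.com
    export BW_SESSION=$(bw unlock --raw)

    # password-protected encrypted export (portable, e.g. importable into KeePassXC)
    bw export --format encrypted_json --password "some-strong-passphrase" \
      --output ~/backups/bitwarden-$(date +%F).json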
Something I haven’t seen mentioned yet - if you’re in the US, lock down your credit at all 3 agencies. It’s free, easy to do, and takes 10-15 minutes.
The issue is that many of these leaks include things like your full legal name, phone number, parents’ full legal names, your social security number, and your entire address history. This makes it trivially easy for somebody to steal your identity and start opening up credit accounts in your name. You need to lock down your credit before that happens. If you need your credit run in the future (opening a bank account, getting a credit card or loan), just ask them which agency they pull the report from and temporarily unfreeze it so they can run the report, then re-freeze it when they’re done. It adds 5 minutes of work once or twice a decade, but could be priceless later on when someone tries to steal your identity.
2-4G for swap (more if you want to hibernate), the rest for /. Only add a boot/EFI partition if needed.
Over-partitioning is a newbie mistake IMO, it usually causes way more problems than it solves.
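Concretely, on a typical UEFI machine that works out to just three partitions, something like this (sizes are only ballpark):

    /dev/sda1   1 GiB    EFI system partition (only if booting UEFI), mounted at /boot/efi
    /dev/sda2   4 GiB    swap
    /dev/sda3   rest     root filesystem, mounted at /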
I don’t like the fact that I could delete every copy using only the mouse and keyboard from my main PC. I want something that can’t be ransomwared and that I can’t screw up once created.
Lots of ways to get around that without having to go the route of burning a hundred blu-rays with complicated (and risky) archive splitting and merging. Just a handful of external HDDs that you “zfs send” to and cycle on some regular schedule would handle that. So buy 3 drives, backup your data to all 3 of them, then unplug 2 and put them somewhere safe (desk at work, friend or family member’s house, etc.). Continue backing up to the one you keep local for the next ~month and then rotate the drives. So at any given time you have an on-site copy that’s up-to-date, and two off-site copies that are no more than 1 and 2 months old respectively. Immune to ransomware, accidental deletion, fire, flood, etc. and super easy to maintain and restore from.
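A rough sketch of one rotation cycle, assuming the local pool is called tank and the external drive holds a pool called backup1 (names and snapshot dates are made up):

    # first rotation: full replication stream to the external drive
    zfs snapshot -r tank@2025-01
    zfs send -R tank@2025-01 | zfs receive -F backup1/tank

    # next rotation: only send what changed since the last snapshot
    zfs snapshot -r tank@2025-02
    zfs send -R -i tank@2025-01 tank@2025-02 | zfs receive -F backup1/tank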
Main reason is that if you don’t already have the right key, the VPN doesn’t even respond; it’s just a black hole where all packets get dropped. SSH, on the other hand, will respond whether or not you have a password or a key, which lets the attacker know that there’s something there listening.
That’s not to say SSH is insecure; I think it’s fine to expose once you take some basic steps to lock it down. Just answering the question.
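You can see the difference with a quick probe (WireGuard is just my example of a drop-everything VPN, and your-server is a placeholder):

    nc -v your-server 22        # SSH answers with a banner like SSH-2.0-OpenSSH_9.x
    nc -vu your-server 51820    # a WireGuard port sends nothing back without a valid key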
Some people move the port to a nonstandard one, but that only helps against automated scanners, not determined attackers.
While true, cleaning up your logs such that you can actually see a determined attacker rather than it just getting buried in the noise is still worthwhile.
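For reference, the port move plus the usual basic lockdown steps are only a few lines in sshd_config (2222 is an arbitrary example):

    # /etc/ssh/sshd_config
    Port 2222
    PasswordAuthentication no
    PermitRootLogin no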
Reverse proxy + DNS-challenge wildcard cert for your domain. The end. Super easy to set up and zero maintenance. Adding a new service is just a couple clicks in your reverse proxy and you’re done.
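To make that concrete, here’s a rough sketch using Caddy with the Cloudflare DNS plugin (both are just my example picks and the Cloudflare module has to be compiled into your Caddy build; domain, names, and IPs are placeholders):

    *.example.com {
        tls {
            dns cloudflare {env.CLOUDFLARE_API_TOKEN}
        }
        @jellyfin host jellyfin.example.com
        handle @jellyfin {
            reverse_proxy 192.168.1.20:8096
        }
    }

Adding another service is just another matcher/handle pair pointing at a different internal IP and port.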
Does ZFS allow for easy snapshotting like btrfs?
Absolutely
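For example, creating, listing, and rolling back a snapshot are one command each (dataset name made up):

    zfs snapshot tank/data@before-upgrade
    zfs list -t snapshot
    zfs rollback tank/data@before-upgrade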
edit a filename while the file is open
Any Linux filesystem will do that
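Easy to check yourself: keep a file open in one terminal and rename it from another; the reader doesn’t care because it holds the inode, not the name (filename is arbitrary):

    tail -f app.log          # terminal 1: keeps following the file
    mv app.log app.log.old   # terminal 2: rename works fine while it's open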
Same, I don’t let Docker manage volumes for anything. If I need something to be persistent, I bind mount it to a subdirectory alongside the container’s compose file. It makes backups so much easier as well, since you can just stop all containers, backup everything in ~/docker or wherever you put all of your compose files and volumes, and then restart them all.
It also means you can go hog wild with docker system prune -af --volumes and there’s no risk of losing any of your data.
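A minimal sketch of the layout I mean (jellyfin is just an example service, and the paths are simply how I happen to arrange things):

    # ~/docker/jellyfin/docker-compose.yml
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        volumes:
          - ./config:/config   # bind mount next to the compose file, not a Docker-managed volume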
Mint is basically Ubuntu with all of Canonical’s BS removed. This definitely counts as Canonical BS, so I’d be surprised if it made its way into Mint.
I would separate the media and the Jellyfin image into different pools. Media would be a normal ZFS pool full of media files that gets mounted into any VM that needs it, like Jellyfin, sonarr, radarr, qbittorrent, etc. (preferably read-only mounted in Jellyfin if you’re going to expose Jellyfin to the internet).
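The read-only part is just an option on whatever mount you use; for instance, an NFS entry in the Jellyfin VM’s fstab might look like this (host, pool path, and mountpoint are placeholders):

    # /etc/fstab inside the Jellyfin VM
    nas.lan:/tank/media   /media   nfs   ro,defaults   0   0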
That’s exactly what it does. I got the prompt on my system, said no, and it said OK and everything proceeded as normal.