  • Well, a fault isn’t just an outage.

    You said the other person involved isn’t technical: what if, say, a database corrupts itself while you’re on vacation for a week?

    Is the expectation that you’ll always be available all the time to fix technical problems?

    And, as a failure state: what happens if you simply cannot be reached for that week, no matter what? What’s the failover plan for the rest of the people involved in the business?


  • I’m going to be contrarian, as is my bit.

    I self-host everything and fully believe everyone else should too.

    HOWEVER, if your self-hosted shit breaks for, say, 3 days, how much money is that going to cost you?

    For business stuff you really, really should figure out what your backup plan for ‘oops, shit’s dead’ is well before shit’s dead. Honestly, in some cases it makes more sense not to self-host everything, and to have the couple of things that would wreck your business provided by a SaaS company with an SLA, on-call engineers, and all that good shit.

    Just a thought to keep in mind, I suppose.





  • I made this mistake and hosted my mom’s webpage and email.

    Anytime anything happened, she was on the phone to me complaining about how horrible it all was.

    Email bounced because she got the address wrong? My fault. All the spam she got? My fault. Images were the wrong size on her webpage? My fault. Typo in a PDF she was sending to a client? My email server must have messed it up.

    I could continue, but jesus christ, it was a disaster.

    Never, ever, ever, ever host for family members unless you’re willing to put up with that kind of shit, because that’s what always happens.



  • The amusing thing here is that I forgot all about Tuxedo and System 76.

    I would suspect that might be exactly the problem: as far as I know, neither of them advertises at all, or if they do, it’s something completely forgettable, somewhere nobody who isn’t deeply involved in Linux is ever going to see it.

    You’re right that they have the most incentive, since they actually sell something you could (theoretically) want to buy, and they’re probably not living on large enterprise contracts, since I don’t think I’ve ever seen hardware from either in the wild.


  • 100% agree. Most Linux users and companies are trying to sell on a list of things Linux doesn’t do, or on technical features that pretty much nobody who isn’t already a tech nerd gives a crap about.

    I was actually thinking of Apple’s “I’m a Mac / I’m a PC” ads as something that could probably work, because at this point Windows has shittified itself to the point that even non-technical people I know IRL grumble about it. (I tell them to buy a Mac, because I’m absolutely not about to become level 1 Linux desktop support.)

    But again, who pays for it, and why? I don’t think there’s ANY financial incentive for consumer marketing from anyone who makes a distro that can afford an actual ad, because none of them are structured to give a shit about consumer use: it’s all enterprise support contracts, and if someone happens to use it on their desktop, cool but not their actual business.







  • Well, in 2009 it was, what, like 0.5% of all desktops or something? Can’t really go down from there.

    I don’t disagree that the trend is up; I mostly disagree that the number provided is accurate, and think it’s likely wildly wrong. It’s possible it’s wildly low instead, but I really don’t think so.

    Anecdata: I know more people in my circles who have switched away from running Linux on their computers than who have switched to it. Almost 100% of the switchers moved to M-whatever MacBooks because they got tired of dealing with the shit that is x86 laptop hardware, and Linux use was a casualty of the shitty hardware.


  • The problem is I don’t believe those numbers represent actual use of Linux as someone’s primary desktop platform.

    They’re just “someone visited a website with a Linux user agent,” which could mean an awful lot of things, ranging from someone doing automated scraping with a headless Chrome, to an actual user, to someone just plain lying about what OS they’re using in order to break fingerprinting.

    The number swings WAY too much percentage-wise between months for it to be a good measure of how much Linux on the desktop there actually is, as much as I’d like it to be true :/


  • Comedy NNTP option here.

    It’s an established, stable, understood, and very, very thoroughly debugged and tested protocol/server solution that’ll run on a potato, and it has clients for every OS you’ve ever heard of, plus a bunch you haven’t.

    Setting up your own little mini-network and sharing groups is fairly trivial and it’ll happily shove copies of everyone’s data to every server that’s on the feed.

    Just encrypt your shit, post it, and let the software do the rest.

    (I mean, if it’s good enough to move 200TB of perfectly legitimate Linux ISOs a day, it’ll handle however much data you could possibly be backing up.)

    Disclaimer: it’s not quite that simple, but it’s pretty close (rough sketch of the flow below). Also, I’m very much a UNIX boomer and a big fan of the simplest solution with the longest tested history over shiny new shit, so just making that bias clear.
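    For the curious, the whole “encrypt your shit, post it” flow really is about this small. A minimal sketch, assuming a hypothetical private server and group, and the old stdlib nntplib (removed in Python 3.13, so run it on 3.12 or older, or swap in a third-party NNTP client):

      import base64
      import subprocess
      import nntplib
      from io import BytesIO

      SERVER = "news.example.internal"   # hypothetical private server
      GROUP = "backup.mybox"             # hypothetical private group

      # Encrypt before anything touches the wire; the servers only ever
      # see ciphertext. (gpg will prompt for a passphrase.)
      subprocess.run(
          ["gpg", "--symmetric", "--cipher-algo", "AES256",
           "--output", "backup.tar.gpg", "backup.tar"],
          check=True,
      )

      # Build a minimal article: headers, blank line, base64 payload.
      # (A real setup would yEnc-encode and split into multi-part posts.)
      with open("backup.tar.gpg", "rb") as f:
          payload = base64.encodebytes(f.read())

      article = BytesIO(
          b"From: backup@mybox.local\r\n"
          b"Newsgroups: " + GROUP.encode() + b"\r\n"
          b"Subject: nightly backup [1/1]\r\n"
          b"\r\n" + payload
      )

      with nntplib.NNTP(SERVER) as news:
          news.post(article)   # the feed replicates it to every peered server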


  • Little bit of A, little bit of B.

    I probably go through at least one full discharge cycle a month, if not more because the power around here suuucks. (The NAS goes down, but I leave the network gear up until the UPS dies, because fuck it, why not.)

    It’s also a ~10-year-old UPS that likes to eat a $25 battery every 18 or so months, so I just haven’t had any justification to replace the whole thing yet: there’s an awful lot of $25 batteries in the price of a new UPS.


  • I replace the batteries in my UPS every 18 months, and don’t try to outlast power outages.

    I have everything configured to shut down if the power goes down and stays down more than 5 minutes, which is ~20% of the maximum rated runtime. (I’m using repurposed desktop hardware that loves its watts as a home server.)

    I picked the low number for the reasons you’ve outlined: even if the battery is severely degraded, it’s probably not THAT severely degraded, so 5 minutes is a safe span to ride out short hiccups while staying well under the runtime limit, leaving room for a safe shutdown.

    That and I’ve noticed that, typically, if the power is down for 5 minutes it’s going to be down for way longer than 5 minutes, so it doesn’t matter and I’m not going to have enough batteries to outlast the outage.
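    For anyone wanting to copy the 5-minute rule: with apcupsd, for instance, it’s a couple of lines of config. A sketch, assuming an APC unit (NUT can do the equivalent with upssched timers):

      # /etc/apcupsd/apcupsd.conf (excerpt)
      TIMEOUT 300       # start a shutdown after 300 seconds on battery
      BATTERYLEVEL 20   # ...or once the UPS reports under 20% charge
      MINUTES 5         # ...or under 5 minutes of estimated runtime left

    Whichever condition trips first wins, so a tired battery that lies about its runtime still gets a safe shutdown.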


  • I just went with a plain boring Ubuntu box, because all the “purpose built” options come with compromises.

    Granted, this is about as hard-mode as it gets, but on the other hand I have 100% perfect support for any damn thing I feel like using, regardless of whether some more specialized OS has gotten around to supporting it.

    I probably wouldn’t recommend this if you’re NOT very well versed in Linux sysadmin stuff, and probably wouldn’t recommend it to anyone who doesn’t have any interest in sometimes having to fix a broken thing, but I’m 3 LTS upgrades, two hardware swaps, a full drive replacement, and most of a decade into this build, and it does exactly what I want 95% of the time.

    I would say, though, that containerizing EVERYTHING is the way to go (rough sketch at the end of this comment). Worst case, you blow up a single service and the other dozen (or two, or three…) keep right on running like you did absolutely nothing. I can’t imagine maintaining the 70ish containers I’m running without them actually being containers, and/or without me being a complete nutcase who runs around the house half naked muttering about the horrors of updates.

    I’m not anti-Cloudflare, so I use a mix of tunnels, their normal proxy, and some rawdogging of services with direct port forwards and a local nginx reverse proxy.

    Different services, different needs, different access methods.
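    Since “containerize EVERYTHING” is doing a lot of work up there, here’s the rough shape of it. A sketch only: the service names and images are made up, and the point is just that each service lives in its own box with the reverse proxy out front.

      # docker-compose.yml (hypothetical excerpt)
      services:
        nginx:                      # the local reverse proxy mentioned above
          image: nginx:stable
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - ./nginx/conf.d:/etc/nginx/conf.d:ro
          restart: unless-stopped

        someapp:                    # stand-in for any one of the ~70 services
          image: someapp:latest     # hypothetical image
          volumes:
            - ./someapp/data:/data
          restart: unless-stopped
          # No published ports: nginx reaches it by service name on the
          # compose network (e.g. proxy_pass http://someapp:8080).

    Blow up someapp with a bad update and nginx plus the other 69-odd containers never even notice.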