Would this mini PC be a good home server?
For what purpose?
Encrypting the connection is good - it means that no one should be able to capture the data and read it - but my concern is more about the holes in the network boundary you have to create to establish the connection.
My point of view is that this isn’t something you want happening automatically, unless you configured it to do that yourself and you know exactly how it works, what it connects to, and how it authenticates (and preferably have some kind of inbound/outbound traffic monitoring for that connection).
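For the "what does it actually connect to" part, something like this rough sketch can be a starting point. It just shells out to `ss` from iproute2 and filters by process name; the process name "syncthing" and the simple line filtering are assumptions, adjust for your own setup:

```python
# Rough sketch: list the sockets a given process currently has open,
# so you can see which remote endpoints it is actually talking to.
# Assumes `ss` (from iproute2) is installed; "syncthing" is just an example.
import subprocess

def connections_for(process_name: str) -> list[str]:
    # -t/-u: TCP and UDP, -n: numeric addresses, -p: show the owning process
    out = subprocess.run(
        ["ss", "-tunp"], capture_output=True, text=True, check=False
    ).stdout
    return [line for line in out.splitlines() if process_name in line]

if __name__ == "__main__":
    for line in connections_for("syncthing"):
        print(line)
```

Run it periodically (or drop it in a cron job) and you at least have a record of where that "automatic" connection goes.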
Ah, just one question - is your current Syncthing use internal to your home network, or does it sync remotely?
Because if you’re just having your mobile devices sync files when they get on your home wifi, it’s reasonably safe for that to be fire-and-forget; but if you’re syncing from public networks into your private one, that really calls for more specific configuration and active control.
My main reason is sailing the high seas
If this is the goal, then you need to concern yourself with your network first and the computer/server second. You need as much operational control over your home network as you can manage:

- Put this traffic in a separate tunnel from all of your normal network traffic, and have it pop up on the public network from a different location (a quick sanity check for this is sketched below).
- Own the modem that links you to your provider’s network, and the router that is the entry/exit point for your network. You cannot use the combo modem/router gateway device provided by your ISP.
- Segregate the thing doing the sailing onto its own network segment that doesn’t have direct access to any of your other devices.
- Plan your internal network intentionally and understand how, when, and why each device transmits on the network.

You should also understand your firewall configuration (on your network boundary, not on your PC), and get PiHole up and running and start dropping unwanted inbound and outbound traffic.
OpSec first.
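As one illustration of the "pop up from a different location" point, here’s a minimal sketch of a tunnel check. It assumes the box is supposed to exit through a VPN, uses ifconfig.me purely as an example what-is-my-IP service, and the home address shown is a placeholder from the documentation range:

```python
# Minimal sketch: warn if this machine's traffic is exiting the internet
# from your home connection instead of the tunnel.
# HOME_IP is a placeholder - replace it with your actual ISP-assigned address.
import urllib.request

HOME_IP = "203.0.113.7"  # placeholder, not a real address

def public_ip() -> str:
    with urllib.request.urlopen("https://ifconfig.me/ip", timeout=10) as resp:
        return resp.read().decode().strip()

current = public_ip()
if current == HOME_IP:
    print("WARNING: traffic is exiting via the home connection, not the tunnel")
else:
    print(f"Exit address is {current} - looks like the tunnel is doing its job")
```

Run it from the segregated segment; if it ever prints the warning, stop the sailing until you know why.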
For individual projects, the way this usually works is that one of the larger companies relying on the project hires the developer as an employee to maintain the codebase full-time and help integrate it with their internal processes.
Larger projects might form their own company and sell integration & support to other companies (e.g. Red Hat, Bitwarden).
Otherwise you’re basically dependent on donations or government grants.
There’s a Wikipedia article on this subject: Business models for open-source software
And there are various industry opinions:
Demystifying the Open Source Business Model: A Comprehensive Explanation
How to build a successful business model around open source software
Open Source Business Models (UNICEF course)
I think monetization is easier for user-facing software though, which a lot of this material is written around, and harder for projects like libraries.
Is the entire neighborhood a cornfield?
That seems… impractical.
VPNs as a technology might not be illegal, but circumventing the firewall certainly is.
Unless you are a very vocal and high-profile person, no one will black bag you in a country of a billion people, lol.
This is a bit of a misunderstanding about how things work in an authoritarian system. Sure, you might fly under the radar for a while, but if you call attention to yourself (say, by getting caught trying to bypass the government firewall) and you are not high-profile, then it is very low-effort to make you disappear. Few will notice, and those that do will stay silent out of fear.
If you are more high-profile you still get black-bagged, you just get released after, with your behavior suitably modified.
Naomi Wu no longer uploads to YouTube.
Depends - how many family members do you have that the PRC might use against you? Or who would miss you if the PRC black-bagged you?
And there are hundreds if not thousands of them, plus a lot of automated tooling.
Beyond your eventual technical solution, keep this in mind: untested backups don’t exist.
I recommend reading some documentation about industry-leading solutions like Veeam… you won’t be able to reproduce all of the enterprise-level functionality, at least not without spending a lot of money, but you can try to reproduce the basic practices of good backup systems.
Whatever system you implement, draft a testing plan. A simpler backup solution that you can test and validate will be worth more than something complex and elaborate that never gets verified.
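As a concrete starting point for that testing plan, here’s a rough sketch of a restore check. It assumes a simple file-copy style backup, and the source and backup paths are placeholders; it hashes every file in the source tree and confirms the same file exists in the backup with the same hash:

```python
# Rough sketch of a restore test: hash every file under SOURCE and confirm
# the same relative path exists under BACKUP with an identical hash.
# SOURCE and BACKUP are placeholder paths - point them at your own data.
import hashlib
from pathlib import Path

SOURCE = Path("/srv/data")         # placeholder
BACKUP = Path("/mnt/backup/data")  # placeholder

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

missing, mismatched = [], []
for src in SOURCE.rglob("*"):
    if not src.is_file():
        continue
    dst = BACKUP / src.relative_to(SOURCE)
    if not dst.is_file():
        missing.append(src)
    elif sha256(src) != sha256(dst):
        mismatched.append(src)

print(f"missing: {len(missing)}, mismatched: {len(mismatched)}")
```

If your backups are deduplicated or encrypted archives instead of plain copies, the same idea applies: restore to a scratch directory first, then compare against the originals.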
I mean… exposed to each other, sure, but they’re all exposed to Syncthing and the public relays.
It is a fantastic idea to start your home server project on some e-waste hardware and use it until you know specifically which features you’re missing that would justify better hardware.
Er, wait, are you using Syncthing for its intended purpose of syncing files across devices on your local network? And then exposing that infrastructure to the internet? Or are you isolating Syncthing instances?
The issue is more that trying to upgrade everything at the same time is a recipe for disaster and a troubleshooting nightmare. Once you have a few interdependent services/VMs/containers/environments/hosts running, you want to upgrade them separately, one at a time: upgrade one thing, restart that service and anything that connects to it, make sure everything still works, then move on to the next.
If you do this shotgun approach for the sake of expediency, what happens is something halfway through the stack of upgrades breaks connectivity with something else, and then you have to go digging through the logs trying to figure out which piece needs a rollback.
Even more fun if two things in the same environment have conflicting dependencies, and one of them upgrades and installs its new dependency version and breaks whatever manual fix you did to get them to play nice together before, and good luck remembering what you did to fix it in that one environment six months ago.
It’s not FUD, it’s experience.
This is also a great way to just break everything you’ve set up.
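To make the one-at-a-time workflow above concrete, here’s a rough sketch of the loop. It assumes Docker Compose services with HTTP health endpoints; the service names and URLs are made up, so substitute whatever you actually run:

```python
# Rough sketch: upgrade one service at a time and check its health endpoint
# before touching the next one. Names and URLs below are placeholders.
import subprocess
import urllib.request

SERVICES = {
    "nextcloud": "http://localhost:8080/status.php",  # placeholder URL
    "jellyfin":  "http://localhost:8096/health",      # placeholder URL
}

def healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status < 500
    except OSError:
        return False

for name, health_url in SERVICES.items():
    subprocess.run(["docker", "compose", "pull", name], check=True)
    subprocess.run(["docker", "compose", "up", "-d", name], check=True)
    if not healthy(health_url):
        print(f"{name} failed its health check - stop here and roll back")
        break
    print(f"{name} upgraded and healthy, moving on")
```

The point isn’t the script itself; it’s that the stop-and-check step between upgrades is what tells you exactly which change broke things.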
and you have to choose to boot into desktop mode to even mess with anything.
You say that like it’s a bad thing, but I think having the two separate modes is a fantastic setup. You get basically a console experience, smooth and straightforward and easy to use for just playing games, and you still have access to the underlying system anytime you want.
I recommend getting familiar with SMART and understanding what the various attributes mean and how they affect a drive’s performance and reliability. You may need to install smartmontools to interact with SMART, though some Linux distributions include this by default.
Some problems reported by SMART are not a big deal at low rates (like Soft Read Errors), but enterprise organizations will replace the drive anyway. Sometimes drives are simply replaced at a certain number of Power-On Hours, regardless of condition. Some problems are survivable if they’re static, like Uncorrectable Sector Count - every drive has some overhead of extra sectors for internal redundancy, so one bad sector isn’t a big deal - but if the number is increasing over time then you have a problem and should replace the drive immediately.
Also keep in mind, hard drives are consumables. Mirroring and failovers are a must if your data is important. New drives fail too. There’s nothing wrong with buying used if you’re comfortable with the drive’s condition.
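If you want to keep an eye on those counters automatically, a rough sketch along these lines can work. It shells out to smartctl from smartmontools and warns when a few raw values are nonzero; the attribute names and column parsing are assumptions based on typical ATA output, so check them against your own drive’s report, and run it periodically to see whether the numbers are climbing:

```python
# Rough sketch: pull a few SMART attributes via smartctl and warn on any
# nonzero raw value. Column layout assumed from typical `smartctl -A`
# output for ATA drives (attribute name in column 2, raw value last).
import subprocess

WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def check_drive(device: str) -> None:
    out = subprocess.run(
        ["smartctl", "-A", device], capture_output=True, text=True, check=False
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in WATCHED:
            raw = fields[-1]
            if raw.isdigit() and int(raw) > 0:
                print(f"{device}: {fields[1]} raw value is {raw} - watch this drive")

check_drive("/dev/sda")  # smartctl usually needs root; adjust the device path
```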
Until your WiFi driver stops getting updates.
ref: Linux’s Sole Wireless/WiFi Driver Maintainer Is Stepping Down
Arch often seems to ignore the fundamental rule:
Linus is in the right. Arch developers are frequently in the wrong.