

Oh - I like that!


You haven’t said what errors you’re seeing so it’s difficult for anyone to provide any help…
But I highly doubt rsync is the issue.


Seems unlikely. It would probably “not work at all” if a protocol mismatch were the issue. I’m willing to bet rsync can fall back to older protocols.


What do you think a “lighter” distro looks like? Just uninstall stuff you don’t want.


I think you chose a poor example.
When I say long names I’m not implying meaningless ones.
Sooo, that example wasn’t exactly “contrived” - it’s based on a standard I see where I work.
DB - it's a database!
DW - and a data warehouse at that!
ORCL - It's an Oracle database!
HHI - Application or team using / managing this database
P - Production (T for Test - love the 1 char difference between names!)
01 - There may be more than one.
This is more what I’m arguing against - embedding metadata about the thing into its name. Especially when all of that information is available in AWS metadata.
[Site][service][Rack] makes sense for on-premise stuff - no argument there.
I’m just saying long names don’t have to be obtuse or confusing.
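To make the argument concrete, here’s a sketch (the server name, field widths, and tag keys are all hypothetical) of why a metadata/tag lookup beats decoding fixed-width names:

```python
# Sketch: the same facts, encoded two ways. Parsing fixed-width server
# names is brittle (one char off and everything shifts); a tag lookup is not.

def parse_name(name: str) -> dict:
    """Decode a DBDWORCLHHIP01-style name by fixed character positions."""
    return {
        "type": name[0:2],      # DB  - it's a database
        "subtype": name[2:4],   # DW  - a data warehouse
        "engine": name[4:8],    # ORCL - Oracle
        "team": name[8:11],     # HHI - owning team/application
        "env": name[11],        # P for Production, T for Test
        "index": name[12:14],   # 01 - there may be more than one
    }

# The tag-based equivalent (roughly what AWS instance tags would hold):
tags = {
    "Type": "database",
    "Engine": "oracle",
    "Team": "HHI",
    "Environment": "production",
}

decoded = parse_name("DBDWORCLHHIP01")
print(decoded["env"])       # "P"
print(tags["Environment"])  # "production"
```

The tag version stays readable, survives renames, and can be queried by any tooling without knowing the naming convention.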
Agree


So is using Atrix.


In a business with tens of thousands of servers, it makes sense to have long complicated names.
I’m actually not convinced of this approach. It’s one of those things that makes perfect logical sense when you say it - but in practice “DBDWWHORCLHHIP01” is just as meaningless as “Hercules”. And it’s a lot more difficult to say, remember and differentiate from “DBDWWHORCLHHID01”. You may as well just use UUIDs at that point.
Humans are really good at associating names with things. It’s why people have names. We don’t call people “AMCAM601W” for a reason. Even in conversations you don’t rattle off the long initialism names of systems - you say “The <product> database”.


God I hate the “stuff as much information into a server name as you can with no separators in all caps” naming conventions…


I get that - it’s difficult to see the point of these tools until you’ve felt the pain of going without them. Especially as a beginner, since you don’t yet have a strong sense of what problems you’ll encounter and how these tools solve them.
At some point the learning curve for IaC becomes worth the investment. I actually put off learning k8s myself for some time because it was “too complicated” and docker-compose worked just fine for me. Now that I’ve spent time learning it, I converted over very quickly and wouldn’t go back… It’s much easier to maintain, monitor, and set up new services now.
Depending on your environment, something like Ansible might be a good place to start. You can begin with just a simple playbook that does an “apt update && apt upgrade” on all your systems, then start using it to push out standard configurations, install software, create users, etc. The benefit pays off over time. For example - recently (yesterday) I was able to install Grafana Alloy on a half-dozen systems trivially, because I have a set of Ansible scripts that manage my systems. Literally took 10 mins. All servers have the app installed, running, and using the same configuration. And I can modify that configuration across all those systems just as easily. It’s very powerful and reduces “drift”, where some systems are configured incorrectly and over time you forget which one is the correct one. For me, the “correct one” is the one in source control.
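That “apt update && apt upgrade” starter playbook might look roughly like this (a sketch - the inventory group `all` and the privilege escalation settings are assumptions about your setup):

```yaml
# update.yml - run with: ansible-playbook -i inventory update.yml
- name: Update and upgrade all Debian/Ubuntu hosts
  hosts: all
  become: true
  tasks:
    - name: Refresh apt cache and upgrade packages
      ansible.builtin.apt:
        update_cache: yes
        upgrade: dist
```

From there you can add tasks for users, configuration files, and services, and the playbook itself becomes the record of how your systems are configured.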


The fun thing about infrastructure as code is that the Terraform, Ansible, and k8s manifests are the documentation.
I only really need to document some bootstrap things in case of emergency, and maybe some “architectural” things. I use Joplin for that (and many other things).


deleted by creator


Y’all are assuming the security issue is something exploitable without authentication, or has something to do with auth.
But it could be a supply-chain issue, which a VPN won’t protect you from.


Well, yeah - that never happens. You do tech-debt cleanup “as you go”. Slip in a few tickets to clean up specific things, and have a policy of updating code that is touched when adding features / fixing bugs.
It needs to be a continual cleaning process. That’s why it’s called debt - the longer you leave it unpaid, the harder it is to pay off.


After years of running a rolling distro (Gentoo), I came to realize that it was a bit of a distinction without a difference. Major updates simply felt less planned than on a ‘traditional’ distro.


You still get “major releases” with rolling distros. They’re just smaller. Updating to new plasma/gnome versions, new glibc, etc.


Terraform, ansible and kubernetes (microk8s).
K8s in particular has gone a long way toward simplifying my network, despite its complexity and the initial learning curve. Deploying and updating services is much easier now.
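To give a feel for what “deploying a service” looks like in that setup, here’s a minimal Deployment manifest (a sketch - the names and image are placeholders), applied with something like `microk8s kubectl apply -f web.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
```

Updating the service is then just editing the manifest (e.g. bumping the image tag) and re-applying it; k8s handles the rollout.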


It’s not GitHub. It’s Microsoft. Never forget that.


Not even close.


So going back to the title, what to study? Maybe some specific book? Private classes/courses?
Networking. If you want to understand the reasoning behind things, this is where you start. A good foundation in TCP/IP, the 7-layer OSI model, and the basic network protocols (DNS, DHCP, HTTP, etc.) will go a long way toward helping you troubleshoot when things go wrong.
Maybe throw in some operating systems study as well for when you start to use docker.
Yeah, containers can be quite useful for experimenting. Distrobox also if you need a more desktop-like experience.
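To make those layers a bit more concrete, here’s a small Python sketch: name resolution (the DNS step) plus the raw text of an HTTP/1.1 request, which is all that actually travels over the TCP connection. Only the local resolver is touched - no request is sent anywhere:

```python
import socket

# DNS/resolver step: turn a hostname into an address. "localhost"
# resolves without any external network access.
addrinfo = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
family, _, _, _, sockaddr = addrinfo[0]
print(sockaddr)  # e.g. ('127.0.0.1', 80) or ('::1', 80, 0, 0)

# HTTP step: an HTTP/1.1 request is just structured text written to a
# TCP socket. This is the entire payload a browser would send for "/".
request = (
    "GET / HTTP/1.1\r\n"
    "Host: localhost\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request.encode())
```

Once protocols stop being magic and start being “text (or bytes) over a socket”, troubleshooting with tools like `dig`, `curl -v`, and `tcpdump` gets much easier.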