

It’s you who says it’s not a literal use. But I’m protesting even a figurative use since there is NO way the act is THEFT. I didn’t steal, in any sense, something that is given to me for free.
It can’t be any sort of “theft” if you leave it on the curb with a sign saying “Free” next to it.
The intent of the BSD licences is to allow you to do what you want without reciprocating though. It’s not an accident, it’s explicitly stated. It is, in fact, your right. You profiting from the work of others is an intended result.
I prefer GPL myself for this reason. But you can’t blame companies for obeying the terms of the licence.
They chose to base them on BSD so they could steal work and not give back to the public.
Emphasis mine.
But it’s not stealing then is it?
They chose to base them on BSD so they could steal work and not give back to the public.
“Here you can use this as you like, no questions asked”
“Hey! Why did you use that in a way that I told you you could!?!?”
So how can I, as a new user, make sure I have as secure a machine as possible?
That’s not what you want. You want a reasonable level of confidence that your system is secure.
The process is similar to Windows: keep it up to date, use good passwords, don’t run things as root (admin), and don’t install anything questionable.
The package manager under Linux is where you should start, and it varies some by distro. But generally speaking, things installed from there are “safe” and will be updated by the package manager when you run updates.
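Since the exact commands differ by distro, here’s a small sketch that detects which package manager is present and prints the matching update command (the distro names and commands are the common defaults, not an exhaustive list):

```shell
# Detect the package manager and show the usual "update everything" command.
if command -v apt-get >/dev/null 2>&1; then
    echo "Debian/Ubuntu: sudo apt update && sudo apt upgrade"
elif command -v dnf >/dev/null 2>&1; then
    echo "Fedora: sudo dnf upgrade"
elif command -v pacman >/dev/null 2>&1; then
    echo "Arch: sudo pacman -Syu"
else
    echo "Unknown distro: check your distribution's documentation"
fi
```

Whichever one applies, running it regularly (or automating it) covers both the OS and everything installed through the package manager.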
NT even “back in the day” was very much NOT compatible with DOS.
Like those ‘curl | sudo bash’ abominations that have become strangely popular lately.
“I run an immutable distro, BTW”
Proxmox or Docker?
It’s not mutually exclusive? I have a 3-node Proxmox config on which I have 3 VMs running as Kubernetes nodes, to which I deploy containers. I also have some VMs set up for things which either don’t work well as containers or which I simply don’t want as containers (e.g. a couple of Windows VMs for doing Windows things). Home Assistant also runs in a VM since it was just easier to do USB passthrough that way.
I understand that running things in a VM provides better security than running them in a container.
Not sure what you mean by this - containers are typically easier to secure as they’re minimalist. But I doubt anyone is using VMs because they think they’re more secure.
And I still don’t care. Bad is bad even if a community is doing it.
Edit: Sorry if that was aggressive. This is a horrible practice and that community is the worst. They use HTTP by default? Encourage running scripts pointing to GH repositories controlled by community members? It’s just asking for the sort of supply-chain attacks that NPM has been enduring.
I have a strict, no-exceptions rule against encouraging people to do a curl|bash install, and would just remove that. Provide a link to the script; people can run it if they want. Encouraging people to directly run scripts off the internet builds a bad habit.
In your Proxmox console, enter the following command: bash -c "$(curl -fsSL https://raw.githubusercontent.com/…)"
Do not do this. Never run scripts like this directly without inspecting them first. Do not tell people to run your exciting new script like this. Provide a link to the script and encourage users to inspect it first, then run it.
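The safer pattern is download, read, then run. A minimal sketch (the URL is a placeholder, and a locally created file stands in for the download so the steps can be shown end to end):

```shell
# Instead of piping curl straight into bash, save the script to a file first.
# curl -fsSL -o install.sh https://example.com/install.sh   # placeholder URL
printf '#!/bin/sh\necho "installing..."\n' > install.sh     # stand-in for the download
cat install.sh    # read the whole thing before executing anything
sh install.sh     # run only once you're satisfied; prints "installing..."
```

The extra step costs seconds, and it’s the only point at which you can catch a malicious or broken script before it runs with your privileges.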
Same? HTTP/1.1 ran the entire internet for 20 years and is used by a ton of sites. It’s fine for a personal website.
There is zero question about it. It will be absolutely fine for some dude’s static website over a residential internet connection.
HTTP 1.1 is more than good enough for serving a static website.
Read the comments. Self hosters are little more than users anyway.
At this point I’m just happy if they’re all using a dark theme at least.
Since it’s a public instance you’d want to be sure to keep it pretty up-to-date with new system patches and the latest stable versions of Nextcloud. If you’re comfortable with automating updates with ansible, k8s, docker-compose, etc. then it’s not a big deal. If you’re ssh’ing to a server to manually update things then it’s going to be a lot of overhead and likely forgotten.
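As a sketch of what that automation might look like, here’s a tiny update script you could schedule from cron; the Docker Compose path and the Debian-style commands are assumptions for illustration, not a description of any particular setup:

```shell
# Write a hypothetical nightly update script, then syntax-check it.
cat > update-nextcloud.sh <<'EOF'
#!/bin/sh
set -e
apt-get update && apt-get -y upgrade                # OS patches (Debian/Ubuntu)
docker compose -f /srv/nextcloud/compose.yml pull   # fetch newer Nextcloud images
docker compose -f /srv/nextcloud/compose.yml up -d  # recreate updated containers
EOF
sh -n update-nextcloud.sh   # syntax check only; schedule the script via crontab -e
```

Something this simple, run on a timer, is usually enough to keep a small public instance from silently falling behind on patches.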
Old hardware may also bring its own issues, and you’ll need backups, particularly since old hardware (especially consumer-grade stuff) can fail without warning. And providing support for users is a whole… other thing…
I like the idea of starting with the “old laptop in a basement” approach as a way to get things going and see if the service provides benefit, then migrating to a more stable platform in the future.