

that doesn’t make them bad people, just addicted people.
what makes some of them bad people is when they are actively campaigning against Linux to cope with their insecurities.


I heard blue iris can be run with wine on linux


Doesn’t work. Try to do it without giving them a phone number or installing some other application. You can’t. Or I couldn’t.
how did you try? did you try registering a mozilla account?
You’ll find out Mozilla has problems with sign in.
It’s probably temporary; it doesn’t usually have problems
And it presumes that I shouldn’t be able to control my own browser.
they do it so that malware can’t install unvetted addons into your browser. and if a malicious addon does get signed this way and some people report it, mozilla can disable it for everyone.
We need a new firefox - just like the original firefox showed that Mozilla was bloat and dumb, we need another that shows the current is bloat and dumb.
what you need is a footgun. you have it in about:config.
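for the addon signing thing specifically, the relevant pref is the one below, though as far as I remember it’s only honored on ESR, Developer Edition, Nightly and the unbranded builds, not on regular release Firefox, so check before relying on it:

    xpinstall.signatures.required = false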
jpeg xl! wow! that’s cool!


separately, part by part. if they had a laptop, they would have needed to buy at least 6 complete laptops by that time or, more realistically, give up on upgrades.


don’t enable unsigned extensions. the signing requirement is there for a good reason.
upload your addon to addons.mozilla.org. there’s an option to not publish it, but only upload it for signing. then you’ll get back a signed xpi that you can install properly.
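if you’d rather do it from the command line, Mozilla’s web-ext tool can submit for signing too. roughly like this (the key and secret are placeholders you generate on the AMO developer hub, and I’m going from memory on the exact flags, so double check web-ext sign --help):

    # sign without publishing the addon publicly on AMO
    web-ext sign --channel=unlisted --api-key=user:12345:67 --api-secret=0123456789abcdef

as far as I remember it drops the signed .xpi into a web-ext-artifacts directory by default.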


but how will we discover the compromised servers when the company running them did not announce it loudly?
and even then, the bigger problem could be the advertiser users. lots of moderation capacity would be needed, or some kind of automatic flagging, but as we’ve seen with Piefed, people hate even milder things than that.


that’s why I believe that in a normal world, renting should only be needed for temporary hardware. a temporary home, a temporary vehicle, a temporary server, temporary storage…
but with homeservers you could have your own once you can afford it, and you could request help from contractors for the urgent kinds of maintenance: replacing failing disks in the array, minimal monitoring so that they can keep an eye on when that happens, maybe critical updates…
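for the “minimal monitoring” part, even a dumb cron script can go a long way. a rough sketch, assuming an mdadm array, smartmontools installed and a working mail command, run as root (device names and the address are just examples):

    #!/bin/sh
    # alert if the md array reports a missing or failed member (the [U_] pattern in /proc/mdstat)
    grep -q '\[.*_.*\]' /proc/mdstat && \
        echo "md array degraded" | mail -s "homeserver alert" you@example.com
    # alert if any SATA disk reports failing SMART health
    for d in /dev/sd?; do
        smartctl -H "$d" | grep -q FAILED && \
            echo "SMART failure on $d" | mail -s "homeserver alert" you@example.com
    done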


who cares about trump here. he’s just the catalyst that started the process we have long needed. american big tech has been parasitic and anti-consumer for much longer, and that’s not on trump. every administration before this one was fine with it too.
yes, you violated it, and there’s no question about that.
that’s peertube too


I guess it’s just google sans, they use this placeholder elsewhere too


oh, LXC containers! I see. I never used them because I find LXC setup more complicated: I once tried to use a turnkey samba container but couldn’t even figure out where to add the container image in LXC, or how to start it any other way.
but also, I like that this way my random containerized services use a different kernel, not the main proxmox kernel, for isolation.
Additionally, having them as CTs means that I can run straight on the container itself instead of having to edit a Docker file which by design is meant to be ephemeral.
I don’t understand this point. on docker, it’s rare that you need to touch the Dockerfile (which contains the container image build instructions). did you mean the docker compose file? or a script file that contains a docker run command?
also, you can run commands or open a shell in any container with docker, unless the container image doesn’t contain any shell binary (but even then, copying a busybox binary into a volume of the container would help), and that’s rare too.
you do it like this: docker exec -it containername command. a bit lengthy, but bash aliases help
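and for the no-shell image case, docker cp also works instead of going through a volume. something like this, assuming a statically linked busybox on the host (container and alias names are made up; docker cp works on running and stopped containers alike):

    # drop a static busybox into the container, then use it as a shell
    docker cp /bin/busybox mycontainer:/busybox
    docker exec -it mycontainer /busybox sh

    # and the alias that makes the long form bearable
    alias dsh='docker exec -it'
    dsh mycontainer sh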
Also for the overcommitting thing, be aware that the issue you’ve stated there will happen with a Docker setup as well. Docker doesn’t care about the amount of RAM the system is allotted. And when you over-allocate the system, RAM-wise, it will start killing containers, potentially leaving them in the same state.
in docker I don’t allocate memory, and it’s not common to do so; it shares the system memory among all containers. docker has a rudimentary resource limit thingy, but what’s better is that you can assign containers to a cgroup and define resource limits or reservations that way. I manage cgroups with systemd “.slice” units, and it’s easier than it sounds.
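a minimal sketch of what I mean, assuming docker uses the systemd cgroup driver (docker info shows a “Cgroup Driver” line) and with a made-up slice name:

    # /etc/systemd/system/selfhosted.slice
    [Unit]
    Description=resource limits for my hosted containers

    [Slice]
    # soft and hard memory limits for everything running under this slice
    MemoryHigh=6G
    MemoryMax=8G
    # at most 3 CPU cores worth of time
    CPUQuota=300%

then reload systemd and start containers under that slice:

    systemctl daemon-reload
    docker run -d --name someservice --cgroup-parent=selfhosted.slice someimage

in docker compose the equivalent is the cgroup_parent: key on the service.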


the PRC is not nearly an alternative to the US empire, it’s a replacement for it, with different but overlapping trade-offs. what about neither? it’s seriously like instagram users fleeing to tiktok, then to upscroll or whichever other corporate platform.
the US being bad does not make the PRC good. I want change, big changes, but definitely not that kind.


lemmy.ml is just like that, maybe you want to look for a new home instance.


just know that sometimes their buggy frontend loads the analytics code even if you have opted out; there’s an ages-old issue about this on their github repo, closed because they don’t care.
It’s matomo analytics, so not as bad as some big tech, but still.


unless you have a zillion gigabytes of RAM, you really don’t want to spin up a VM for each thing you host. the separate OSes have a huge memory overhead, with all the running services, cache memory, etc. the memory usage of most services can vary a lot, so if you could just assign 200 MB RAM to each VM that would be moderate, but you can’t, because when it needs more RAM than that it will crash, possibly leaving operations half done and leading to corruption. and assigning 2 GB RAM to every VM is a waste.
I use proxmox too, but I only have a few VMs, mostly based on how critical a service is.


Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.
why? it was not saying that they should quit self hosting. it was not condescending either, I think. it was about work.
but truth be told, IT is a very wide field, and maybe that generalization is actually not good. still, 15 containers is not much, and as I see it, they help with not letting all your hosted software make a total mess of your system.
working with the terminal sometimes feels like working with long tools in a narrow space, not being able to fully use my hands. but UX design is hard, so making useful GUIs is hard and also takes much more time than making a well organized CLI tool.
in my experience the most important thing here is to get used to common operations in a terminal text editor, and to find an organized directory structure for your services that works for you. also, use man pages and --help outputs. but when you can afford to do it, you could scp files or complete directories to your desktop for editing with a proper text editor.
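for example, something like this (host and paths are made up), assuming the service’s config lives in a directory on the server:

    # pull the config directory to your desktop
    scp -r user@homeserver:/srv/myservice/config .
    # edit ./config with a proper editor, then push it back
    scp -r ./config user@homeserver:/srv/myservice/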


What needs more than 1gbe? Are you streaming 8k?
I think they meant that it was a bottleneck while moving data to the new hardware


it’s that even their app installs are driven by the app store’s recommendation algorithms. google will never recommend to people social media apps that don’t try to destroy society
joke’s on you! google’s recent requirement is that all phone vendors make the power button open an AI menu instead of the shutdown menu! on most phones it can be changed back, but the option is often hidden very deep in the settings.