Funny you should say that, because…
Real wizards use ed
Debian Testing has a lot more current packages, and is generally fairly stable. Debian Unstable is rolling release, and mostly a misnomer (but it is subject to massive changes at a moment’s notice).
Fedora is like Debian Testing: a good middleground between current and stable.
I hear lots of good things about Nix, but I still haven’t tried it. It seems to be the perfect blend of non-breaking and most up-to-date.
I’ll just add to: don’t believe everything you hear. Distrowars result in rhetoric that’s way blown out of proportion. Arch isn’t breaking down more often than a cybertruck, and Debian isn’t so old that it yearns for the performance of Windows Vista.
Arch breaks; so does anything that tries to push updates at the drop of a hat. It’s unlikely to brick your PC, though; you’ll just need to reconfigure some settings.
Debian’s primary goal is stability, which means the numbers don’t look as big on paper; if big numbers are what you’re after, go play Cookie Clicker instead of micromanaging the world’s most powerful web browser.
Try things out for yourself and see what fits; anyone who says otherwise is just trying to program you into joining their culture war.
It will cause a critical error during boot if a device fails to mount and wasn’t given the nofail mount option (which is not included in defaults). For more details, look in the fstab(5) man page, and for even more detail, the mount(8) man page.
Found that out for myself when leaving my external hard drive enclosure turned off (with a formatted drive in it) caused the PC to boot into recovery mode, even though it wasn’t the primary drive. I had just copy-pasted the options from my root partition, thinking I could take the shortcut instead of reading the documentation.
There’s probably other ways that a borked fstab can cause a fail to boot, but that’s just the one I know of from experience.
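For reference, here’s what a hypothetical /etc/fstab entry for an external drive could look like (the UUID and mount point are placeholders, not from my actual config); note that nofail has to be added explicitly, since defaults doesn’t include it:

```
# <file system>   <mount point>   <type>  <options>        <dump> <pass>
UUID=XXXX-XXXX    /mnt/external   ext4    defaults,nofail  0      2
```

With nofail, a missing or unpowered drive just gets skipped at boot instead of dropping you into recovery mode.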
To the feature creep: that’s kind of the point. Why have a million little configs, when I could have one big one? Don’t answer that, it’s rhetorical. I get that there are use cases, but the average user doesn’t like having to tweak every component of the OS separately before getting to doom-scrolling.
And that feature creep and large-scale adoption have inevitably led to a wider attack surface with more targets, so of course there will be more CVEs, which, by the way, are a terrible metric of relative security.
You know what has 0 CVEs? DVWA.
You know what has more CVEs and a higher level of privilege than systemd? The Linux kernel.
And don’t get me started on how bug hunters can abuse CVEs for a quick buck. Seriously: these people’s job is seeing how they can abuse systems to get unintended outcomes that benefit them; why would we expect CVEs to be special?
TL;DR: That point is akin to Trump’s argument that COVID testing was bad because it led to more active cases (implied: being discovered).
I’m gonna laugh if it’s something as simple as a botched fstab config.
In the past, it’s usually been the case that the more ignorant I am about the computer system, the stronger my opinions are.
When I first started trying out Linux, I was pissed at it and would regularly rant to anyone who would listen, all because my laptop wouldn’t properly sleep: it would turn off, then come back on a few minutes later. Turns out the WiFi card had a power setting that was waking the computer up from sleep.
After a year of avoiding the laptop, a friend who was visiting from out of town (and uses Arch, btw) took one look at it, then diagnosed and fixed it in minutes. I felt like a jackass for blaming the Linux world for Intel’s non-free WiFi driver being shit. (In my defense, I had never needed to toggle this setting when the laptop was originally running Windows.)
The worst part is that I’m a sysadmin; diagnosing and fixing computer problems should be my specialty. Instead I failed to put in the minimum amount of effort and just wrote the entire thing off as a lost cause. Easier than questioning my own infallibility, I suppose.
You intentionally do not want people that you consider “below” you to use Linux or even be present in your communities.
No, but I do want my communities to stay on-topic and not be derailed by Discourse™
Who I consider beneath me is wholly unrelated to their ability to use a computer, and entirely related to their ability to engage with others in a mature fashion, especially those they disagree with.
Most people use computers to get something done, be it development, gaming, consuming multimedia, or just “web browsing.”
I realize most people use computers for more than web-browsing, but ask anybody who games, uses multimedia software, or develops how often they have issues with their workflow.
(which you intentionally use to degrade people “just” doing that)
No I don’t. Can you quote where I did so, or is it just a vibe you got when reading in the pretentious dickwad tone you seem to be projecting onto me?
But stop trying to gatekeep people out of it
I’m not, you’re projecting that onto me again. If you want to use Linux, use Linux. Come here and talk about how you use Linux, or ask whatever questions about Linux you want. If you don’t want to use Linux, or don’t want to talk about Linux, take it to the appropriate community.
If keeping communities on-topic and troll-free is “gatekeeping,” then I don’t give a fuck how you feel about it.
I don’t think we do, but that’s a feature, not a bug. Here’s why:
There was a great post a few days ago about how Linux is a digital 3rd Space. It’s about spending time cultivating the system and building a relationship with it, instead of expecting it to be transparent while you use it. This creates a positive relationship with your computer and OS, seeing it as more a labor of love than an impediment to being as productive as possible (the capitalist mindset).
Nothing “just works.” That’s a marketing phrase. Windows and Mac only “just work” if the most you ever do is web-browsing and note-taking in Notepad. Anything else and you invite cognitive dissonance: hold onto the delusion at the cost of actually doing what you’re trying to do, or accept that these systems aren’t as good as their marketing? The same thread I mentioned earlier talked about how we give Linux more lenience because of the relationship we have with it, instead of seeing it as just a tool for productivity.
Having a barrier of entry keeps general purpose communities like this from being flooded with off-topic discourse that achieves nothing. And no, I’m not just talking about the Yahoo Answers-level questions like “how to change volume Linux???” Think stuff like “What’s the most stargender-friendly Linux distro?” and “How do we make Linux profitable?” and “what Linux distro would Daddy Trump use?” and “where my other Linux simping /pol/t*rds at (socialist Stallman****rs BTFO)???” Even if there is absolutely perfect moderation and you never see these posts directly, these people would still be coming in and finding ways that skirt the rules to inject this discourse into these communities; and instead of being dismissed as trolls, there would be many, many people who think we should hear them out (or at least defend their right to Free Speech).
Finally, it already “just works” for the aforementioned note-taking and web-browsing. The only thing that’s stopping more not so tech-savvy people is that it’s not the de facto pre-installed OS on the PC you pick up from Best Buy (and not Walmart, because you want people to think you’re tech-savvy, so you go to the place with a dedicated “geek squad”). The only way it starts combating Windows in this domain is by marketing agreements with mainstream hardware manufacturers (like Dell and HP); this means that the organization responsible for representing Linux would need the money to make such agreements… Which would mean turning it into a for-profit OS. Which would necessitate closing the source. Which would mean it just becomes another proprietary OS that stands for all that Linux is against.
Debian is the best and I don’t know what to do with it
1337 case = k3wlf1l3n4m3
Nano is Notepad, but with worse mouse integration. It’s Vim/Emacs without any of the features. It’s the worst of both worlds.
If you want ease, just use a GUI notepad. If you want performance boosts, suck it up and learn Emacs or Neovim.
What the fuck.
This is your brain on racism.
This is a hypothetical with no clear connection to common practice.
In other words, I could just reverse it to contradict you, and my hypothetical would carry equal weight: devs should always use the GPL, because if their software gets widely adopted to the point where companies are forced to use it, it’s better that it’s copyleft.
Well, ideally you’re choosing your license based on the cases where it differs from others and not the majority of times where it doesn’t make a difference.
Someone aiming to make Free software should use a copyleft license that protects the four freedoms, instead of hoping people abide by the honor system.
Also, no one can 100% accurately predict which of their projects will get big. Sure, a radical overhaul of TCP has good odds, but remember left-pad? Who could have foreseen that? Or maybe the TCP revision still never makes it big: QUIC and HTTP/3 are great ideas, and yet they are still struggling to unseat HTTP/2 as the worldwide standard.
You’ve defined yourself into an impossible bind: you want something extremely portable and universal but with a small disk footprint, and you want it to be general-purpose and versatile.
The problem is that to be universal and general purpose, you need a lot of libraries to interact with whatever type of systems you might have it on (and the peculiarities of each), and you need libraries that do whatever type of interactions with those systems that you specify.
E.g., under the hood, Python’s open("<filename>", 'r') boils down to a system call to the kernel. But is that kernel Linux? BSD? Windows NT? Android? Mach?
What if you want your script to run a CLI command in a subshell? Should it call “cmd”? Or “sh”? Or “powershell”? Okay, okay, now all you need it to do is show the contents of a file… but is the command “cat” or “type” or “Get-Content”?
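To make that concrete, here’s a minimal Python sketch (the function name and approach are mine, not from the thread) of what “portable” really costs: even showing a file through the native shell tool needs an explicit OS check.

```python
import platform
import subprocess

def show_file(path: str) -> str:
    """Show a file's contents via the native CLI tool; even this
    trivial task needs an OS check to stay 'portable'."""
    if platform.system() == "Windows":
        # 'type' is a cmd built-in, so it has to run inside cmd
        cmd = ["cmd", "/c", "type", path]
    else:
        # Linux, the BSDs, macOS, etc. all ship cat
        cmd = ["cat", path]
    return subprocess.run(cmd, capture_output=True, text=True).stdout
```

And this still ignores PowerShell-only environments and exotic shells; every new target is another branch or another dependency.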
Or maybe you want to do more than simple read/write to files and string operations. Want to have graphics? That’s a library. Want serialization for data? That’s a library. Want to read from spreadsheets? That’s a library. Want to parse XML? That’s a library.
So you’re looking at a single binary that’s several gigabytes in size, either as a standalone or a self-extracting installer.
Okay, maybe you’ll only ever need a small subset of libraries (basic arithmetic, string manipulation, and file ops, all on standard glibc GNU systems, of course), so it’s not really “general purpose” anymore. So you find one that’s small, but it doesn’t completely fit your use case (for example, it can’t parse UCI config files); you find another that does what you need, but also way too much, and it has a huge footprint; you find that perfect medium, and it has a small, niche userbase… so the documentation is meager and it’s not easy to learn.
At this point you realize that any language that’s both easy to learn and powerful enough to manage all instances of some vague notion of “computer” will necessarily evolve to being general purpose. And being general purpose requires dependencies. And dependencies reduce portability.
At this point your options are: make your own language and interpreter that does exactly what you want and nothing more (so all the dependencies can be compiled in), or decide which criteria you are willing to compromise on.
This can be handled pretty much entirely on the host by configuring your qemu settings; it’s got very robust virtual networking options. Basically, expose the host’s VPN interface (usually called something like tun) for VPN access, and make a separate virtual interface that only the host and guest can access for stuff like ssh.
Here’s the qemu wiki page about networking; it’s definitely where you should start.
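As a rough sketch of one way to do it (image name, ports, and username are placeholders; this uses user-mode networking rather than a full host-only bridge): with qemu’s user-mode NAT, the guest’s outbound traffic follows the host’s routing table, so it goes out the host’s VPN interface whenever the VPN is up, and a hostfwd rule bound to loopback gives the host, and only the host, ssh access to the guest:

```
qemu-system-x86_64 \
  -netdev user,id=net0,hostfwd=tcp:127.0.0.1:2222-:22 \
  -device virtio-net-pci,netdev=net0 \
  disk.img

# then, from the host only:
# ssh -p 2222 user@127.0.0.1
```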
I have a Libre LePotato, a Pinebook, and a Pinephone. They’re fine for most of my use cases, but they don’t handle games too well. They’re also not great for VMs or emulation, and there’s no chance in hell I’d use any of them for my home media server.
That being said, I’m starting to see ARM desktops in my feeds, and I think one of those would be fine for everything but gaming (which is more an issue of the availability of native binaries than of outright performance). TBH, at that price point, with off-chip memory and GPU, I don’t see much reason to go with ARM; maybe the extra cores, but I can’t imagine much of the power efficiency that SoCs enjoy survives the move off-chip.
I’ve been running Debian Stable on my decade-old desktop for about 3 years, and on my ideapad (just as old) for about 5. In that time, exactly one update broke something, and it was the Nvidia driver that did it. A patch was released within three days.
Debian epitomizes OS transparency for me. Sure, I can still customize the hell out of it and turn it into a frankenix machine, but if I don’t want to, I can be blissfully unaware of how my OS works, and focus only on important computing tasks (like mindlessly scrolling lemmy at 2 am).
I use virt-manager. Works better than virtualbox did at the time (back while v6.1 was still the main release branch), it’s easier, and it doesn’t involve hitching yourself to Oracle.
VMware may be “free,” but it ain’t Free. And if you don’t care about software freedom, why choose Linux over Windows or MacOS? Also, Workstation Player lacks a lot of functionality, which makes it a poor hypervisor: only one VM can be powered on at a time, and all the configuration is severely limited. Plus the documentation is mediocre compared to the official virt-manager docs.
imagemagick for basic transformations/compression/conversions, CLI (locally hosted) AI for the shops
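For the ImageMagick side, a couple of hypothetical one-liners (filenames and quality values are made up; on ImageMagick 6 the entry point is convert/mogrify, on 7 it’s magick):

```
# Shrink to fit within 1600x1600 (the '>' means only ever downscale),
# strip metadata, and recompress as JPEG:
magick photo.png -resize '1600x1600>' -strip -quality 82 photo.jpg

# Batch-convert every PNG in the folder to WebP:
mogrify -format webp -quality 80 *.png
```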