

No problem. It was an interesting question that made me curious too.
Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.


https://stackoverflow.com/questions/30869297/difference-between-memfree-and-memavailable
Rik van Riel’s comments when adding MemAvailable to /proc/meminfo:
/proc/meminfo: MemAvailable: provide estimated available memory
Many load balancing and workload placing programs check /proc/meminfo to estimate how much free memory is available. They generally do this by adding up “free” and “cached”, which was fine ten years ago, but is pretty much guaranteed to be wrong today.
It is wrong because Cached includes memory that is not freeable as page cache, for example shared memory segments, tmpfs, and ramfs, and it does not include reclaimable slab memory, which can take up a large fraction of system memory on mostly idle systems with lots of files.
Currently, the amount of memory that is available for a new workload, without pushing the system into swap, can be estimated from MemFree, Active(file), Inactive(file), and SReclaimable, as well as the “low” watermarks from /proc/zoneinfo.
However, this may change in the future, and user space really should not be expected to know kernel internals to come up with an estimate for the amount of free memory.
It is more convenient to provide such an estimate in /proc/meminfo. If things change in the future, we only have to change it in one place.
Looking at the htop source:
https://github.com/htop-dev/htop/blob/main/MemoryMeter.c
/* we actually want to show "used + shared + compressed" */
double used = this->values[MEMORY_METER_USED];
if (isPositive(this->values[MEMORY_METER_SHARED]))
   used += this->values[MEMORY_METER_SHARED];
if (isPositive(this->values[MEMORY_METER_COMPRESSED]))
   used += this->values[MEMORY_METER_COMPRESSED];
written = Meter_humanUnit(buffer, used, size);
It’s adding used, shared, and compressed memory to get the amount actually tied up, but it disregards cached memory, which, per the comment above, is problematic: some of that cached memory may not actually be available for use.
free and top, on the other hand, use the kernel’s MemAvailable directly. From procps’s free.c:
https://gitlab.com/procps-ng/procps/-/blob/master/src/free.c
printf(" %11s", scale_size(MEMINFO_GET(mem_info, MEMINFO_MEM_AVAILABLE, ul_int), args.exponent, flags & FREE_SI, flags & FREE_HUMANREADABLE));
In short: you probably want to trust /proc/meminfo’s MemAvailable (which is what top will show), and htop is probably giving a misleadingly-low number.
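If you just want the number itself, you can also pull it straight out of /proc/meminfo; something along these lines works (the value is reported in kiB):
awk '/^MemAvailable:/ { printf "%.1f GiB available\n", $2 / 1048576 }' /proc/meminfo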


If oomkiller starts killing processes, then you’re running out of memory.
Well, you might want to avoid digging into swap.


There might be some way to make use of it.
Linux apparently can use VRAM as a swap target:
https://wiki.archlinux.org/title/Swap_on_video_RAM
So you could probably take an Nvidia H200 (141 GB memory) and set it as a high-priority swap partition, say.
A typical desktop is liable to have problems powering an H200 (600 W max TDP), but that figure is with all the parallel compute hardware active; I assume that if all you’re doing is moving data in and out of memory, it won’t draw much more power than a typical gaming-oriented GPU.
That being said, it sounds like the route on the Arch Wiki above uses vramfs, which is a FUSE filesystem. That means it runs in userspace rather than kernelspace, which probably means more overhead than is really necessary.
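For reference, the wiki’s recipe boils down to a few commands; roughly this (a sketch, with placeholder sizes and loop device — the kernel can’t swap directly to a file on a FUSE filesystem):
vramfs /mnt/vram 4G &                    # expose 4 GiB of VRAM as a FUSE filesystem
dd if=/dev/zero of=/mnt/vram/swapfile bs=1M count=4096
losetup /dev/loop0 /mnt/vram/swapfile    # attach a loop device; swapon can't use FUSE files directly
mkswap /dev/loop0
swapon /dev/loop0 -p 32767               # maximum priority, so it's used before any disk swap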
EDIT: I think that a lot will come down to where research goes. If someone figures out that changing the hardware (having a lot more memory, adding new operations, whatever) dramatically improves performance for AI stuff, I suspect that current hardware might get dumped sooner rather than later as datacenters shift to new hardware. Lots of unknowns there that nobody really has answers to yet.
EDIT2: Apparently someone made a kernel-based implementation for Nvidia cards that uses the VRAM directly as CPU-addressable memory, not swap.
https://github.com/magneato/pseudoscopic
In holography, a pseudoscopic image reverses depth—what was near becomes far, what was far becomes near. This driver performs the same reversal in compute architecture: GPU memory, designed to serve massively parallel workloads, now serves the CPU as directly-addressable system RAM.
Why? Because sometimes you have 16GB of HBM2 sitting idle while your neural network inference is memory-bound on the CPU side. Because sometimes constraints breed elegance. Because we can.
Pseudoscopic exposes NVIDIA Tesla/Datacenter GPU VRAM as CPU-addressable memory through Linux’s Heterogeneous Memory Management (HMM) subsystem. Not swap. Not a block device. Actual memory with struct page backing, transparent page migration, and full kernel integration.
I’d guess that that’ll probably perform substantially better.
It looks like they presently only target older cards, though.


This world is getting dumber and dumber.
Ehhh…I dunno.
Go back 20 years and we had similar articles, just about the Web, because it was new to a lot of people then.
searches
https://www.belfasttelegraph.co.uk/news/internet-killed-my-daughter/28397087.html
Internet killed my daughter
Were Simon and Natasha victims of the web?
Predators tell children how to kill themselves
And before that, I remember video games.
It happens periodically — something new shows up, and then you’ll have people concerned about any potential harm associated with it.
https://en.wikipedia.org/wiki/Moral_panic
A moral panic, also called a social panic, is a widespread feeling of fear that some evil person or thing threatens the values, interests, or well-being of a community or society.[1][2][3] It is “the process of arousing social concern over an issue”,[4] usually elicited by moral entrepreneurs and sensational mass media coverage, and exacerbated by politicians and lawmakers.[1][4] Moral panic can give rise to new laws aimed at controlling the community.[5]
Stanley Cohen, who developed the term, states that moral panic happens when “a condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests”.[6] While the issues identified may be real, the claims “exaggerate the seriousness, extent, typicality and/or inevitability of harm”.[7] Moral panics are now studied in sociology and criminology, media studies, and cultural studies.[2][8] It is often academically considered irrational (see Cohen’s model of moral panic, below).
Examples of moral panic include the belief in widespread abduction of children by predatory pedophiles[9][10][11] and belief in ritual abuse of women and children by Satanic cults.[12] Some moral panics can become embedded in standard political discourse,[2] which include concepts such as the Red Scare[13] and terrorism.[14]
Media technologies
Main article: Media panic
The advent of any new medium of communication produces anxieties among those who deem themselves as protectors of childhood and culture. Their fears are often based on a lack of knowledge as to the actual capacities or usage of the medium. Moralizing organizations, such as those motivated by religion, commonly advocate censorship, while parents remain concerned.[8][40][41]
According to media studies professor Kirsten Drotner:[42]
[E]very time a new mass medium has entered the social scene, it has spurred public debates on social and cultural norms, debates that serve to reflect, negotiate and possibly revise these very norms.… In some cases, debate of a new medium brings about – indeed changes into – heated, emotional reactions … what may be defined as a media panic.
Recent manifestations of this kind of development include cyberbullying and sexting.[8]
I’m not sure that we’re doing better than people in the past did on this sort of thing, but I’m not sure that we’re doing worse, either.


I’m kind of surprised that it costs $2 million to put a new tank in. I’d have thought it’d be less.
https://en.wikipedia.org/wiki/We_Didn't_Start_the_Fire
“We Didn’t Start the Fire” is a song written by American musician Billy Joel.
Joel conceived the idea for the song when he had just turned 40. He was in a recording studio and met a 21-year-old friend of Sean Lennon who said “It’s a terrible time to be 21!”. Joel replied: “Yeah, I remember when I was 21 – I thought it was an awful time and we had Vietnam, and y’know, drug problems, and civil rights problems and everything seemed to be awful”. The friend replied: “Yeah, yeah, yeah, but it’s different for you. You were a kid in the fifties and everybody knows that nothing happened in the fifties”. Joel retorted: “Wait a minute, didn’t you hear of the Korean War or the Suez Canal Crisis?” Joel later said those headlines formed the basic framework for the song.[4]
https://www.youtube.com/watch?v=eFTLKWw542g
🎵 We didn’t start the fire 🎵
🎵 It was always burning since the world’s been turning 🎵
🎵 We didn’t start the fire 🎵
🎵 No, we didn’t light it, but we tried to fight it 🎵


!patientgamers@sh.itjust.works looked smug as hell. They’d been telling everyone for years.


Summary created by Smart Answers AI
chuckles


And why Bash and not another shell?
I chose it for my example because I happen to use it. You could use another shell, sure.
Should we consider “throwaway” anything that supports interactive mode of your daily driver you chose in your default terminal prompt?
Interactive mode is a good case for throwaway code, but one-off scripts would also work.


The point I’m making is that bash is optimized for quickly writing throwaway code. It doesn’t matter if the code blows up in some case other than the one you’re using it for. You don’t need to handle edge cases that don’t apply to the one time that you will run the code. I write lots of bash code that doesn’t handle a bunch of edge cases, because for my one-off use, those edge cases don’t arise. Similarly, if an LLM generates code that misses some edge case, and it’s a situation that will never arise, that may not be a problem.
EDIT: I think maybe that you’re misunderstanding me as saying “all bash code is throwaway”, which isn’t true. I’m just using it as an example where throwaway code is a very common, substantial use case.
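To make that concrete, here’s the kind of throwaway line I mean (a made-up example; the command substitution word-splits, so it breaks on paths containing spaces, which doesn’t matter for a tree I know has none):
# one-off: total line count for the C files in this tree
wc -l $(find . -name '*.c')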


I don’t know: it’s not just the outputs posing a risk, but also the tools themselves
Yeah, that’s true. Poisoning the training corpus of models is at least a potential risk. There’s a whole field of security work out there now aimed specifically at LLMs.
it shouldn’t require additional tools, checking for such common flaws.
Well, we are using them today for human programmers, so… :-)


No problem; I remember being delighted to learn that there was a name for the thing, years back.
Also, one other comment regarding the “change away from mechanical toggle”. If you got the machine pre-built, you may never have noticed this, but on ATX motherboards, there’s a set of pins which you fit the power and reset switch wires onto.

I mean, you can plug whatever you feel like onto those pins and stick your power and reset buttons wherever you feel like, if you don’t like the position of the existing case switch. It’s just a momentary switch. You can grab replacement ones that aren’t built into a case:
https://www.amazon.com/Warmstor-2-Pack-Computer-Supply-27-inch/dp/B074XDTVN1
Or even just get your own switches and connect the plug and wires to whatever sort of momentary switch you want. Amazon or Mouser or DigiKey will have all sorts of momentary switches.


Security is where the gap shows most clearly
So, this is an area where I’m also pretty skeptical. It might be possible to address some of the security issues by making minor shifts away from a pure-LLM system. There are (conventional) security code-analysis tools out there, stuff like Coverity. Like, maybe if one says “all of the code coming out of this LLM gets rammed through a series of security-analysis tools”, you catch enough to bring the security flaws down to a tolerable level.
One item that they highlight is the problem of API keys being committed. I’d bet that there’s already software that will run on git-commit hooks that will try to red-flag those, for example. Yes, in theory an LLM could embed them into code in some sort of obfuscated form that slips through, but I bet that it’s reasonable to have heuristics that can catch most of that, that will be good-enough, and that such software isn’t terribly difficult to write.
But in general, I think that LLMs and image diffusion models are, in their present form, more useful for generating output that a human will consume than output that a CPU will consume. CPUs are not tolerant of errors in programming languages. Humans often just need an approximately-right answer to cue our brains, which themselves have the right information to construct the desired mental state. An oil painting isn’t a perfect rendition of the real world, but it’s good enough, as it can hint at what the artist wanted to convey by cuing up the appropriate information about the world that we have in our brains.
A Monet isn’t a perfect rendition of the world. But because we have knowledge in our brains about what the real world looks like, there’s enough information in the painting to cue up the right things in our heads to let us construct a mental image.

Ditto for rough concept art. Similarly, a diffusion model can get an image approximately right — some errors often just aren’t all that big a deal.
But a lot of what one is producing when programming is going to be consumed by a CPU that doesn’t work the way that a human brain does. A significant error rate isn’t good enough; the CPU isn’t going to patch over flaws and errors itself using its knowledge of what the program should do.
EDIT:
I’d bet that there’s already software that will run on git-commit hooks that will try to red-flag those, for example.
Yes. Here are instructions for setting up trufflehog to run on git pre-commit hooks to do just that.
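A minimal hook along those lines might look like this (a sketch; the exact flags vary by trufflehog version):
#!/bin/sh
# .git/hooks/pre-commit: scan what's being committed for secrets;
# --fail makes trufflehog exit nonzero on findings, which aborts the commit
exec trufflehog git file://. --since-commit HEAD --fail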
EDIT2: Though you’d need to disable this trufflehog functionality and have some out-of-band method for flagging false positives, or an LLM could learn to bypass the security-auditing code by being trained on code that overrides false positives:
Add trufflehog:ignore comments on lines with known false positives or risk-accepted findings


I keep seeing the “it’s good for prototyping” argument they post here, in real life.
There are real cases where bugs aren’t a huge deal.
Take shell scripts. Bash is designed to make it really fast to write throwaway, often one-line software that can accomplish a lot with minimal time.
Bash is not, as a programming language, very optimized for catching corner cases, writing highly-secure code, or writing highly-maintainable code. The great majority of the bash code I’ve written is throwaway code, stuff that I use once and don’t even bother to save. It doesn’t have to handle all situations or be hardened. It just has to fill that niche of code that can be written really quickly. But that doesn’t mean it’s not valuable. I can imagine generated code with some bugs not being such a huge problem there. If it runs once and appears to work for the inputs in that particular scenario, that may be totally fine.
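For instance, something like this (a contrived example) is a perfectly reasonable one-shot, even though it has no error handling and misbehaves if no files match (the unexpanded glob gets passed to mv):
# one-off: downcase the extensions on a directory of camera photos
for f in *.JPG; do mv "$f" "${f%.JPG}.jpg"; done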
Or, take test code. I’m not going to spend a lot of time making test code perfect. If it fails, it’s probably not the end of the world. There are invariably cases that I won’t have written test code for. “Good enough” is often just fine there.
And down the line, it might be possible to generate descriptions of commits for someone browsing code, instead of (or in addition to) human-written commit messages.
I still feel like I’m stretching, though. Like…I feel like what people are envisioning is some kind of self-improving AI software package, or just letting an LLM go and having it pump out a new version of Microsoft Office. And I’m deeply skeptical that we’re going to get there just on the back of LLMs. I think that we’re going to need more-sophisticated AI systems.
I remember working on one large, multithreaded codebase where a developer who wasn’t familiar with, or wasn’t following, the thread-safety constraints would create an absolute maintenance nightmare for others: you’d spend far more time tracking down and fixing the breakage they induced than they saved by not coming up to speed on the constraints their code needed to conform to. And the existing code-generation systems just aren’t in a great position to come up to speed on those constraints. Part of what a programmer does, when writing code, is to look at the human-language requirements, identify that there are undefined cases, and either go back and clarify the requirement with the user or use real-world knowledge to make reasonable calls. Training an LLM to map from an English-language description to code creates a system that just doesn’t have the capability to do that sort of thing.
But, hey, we’ll see.


I apparently actually did two of these on different occasions, using different restricted Unicode character ranges (picking characters by their overall value only, no subpixel rendering). I can’t find the (newer) color one, but here’s the black-and-white one:
░
░░░░░░░
░▒▒▒▒▒▒▒░░░
░▒▓▓▓▒▒▒░░░░░
░▒▓▓▓▓▒▒▒▒▒░░░░░ ░░░
░▒▓▓▓▓▓▓▓▒▒▒▒▒▒▒░░ ░░░░░░░░░░░░░
░▒▓▓███▓▓▓▓▒▒▒▒▒▒▒░░ ░░░▒▒▒▒▒▒▒▒▒▒▒░░
░▒▓▓▓████▓▓▓▒▒▒▒▒▓▒▒░░ ░░░░░░▒▒▒▒▒▒▒▒▒▒▓▓▒▒░
░▒▓▓▓███▓▓▒▒▒▒▒▒▒▒▒▒░░░░░░░░░░░░░░░░░░░ ░░▒▒▒▒▒▒▒▒▒▒▒▒▒▓▓▓▓▓▒░
░▒▓▓▓█▓▓▓▒▒▒▒▒▒▒▒▒▒▒░░░░░░░░░░░░░░░▒▒▒░░░░░░░░░░░░░▒▒▓▓▓▓▓▓▒▒▒▒▓▓▓▓▓▓▓▒░
▒▓▓▓██▓▓▓▒▒▒▒▒▒▓▓▓▒▒░░░░▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░░░▒▒▒▓▓▓▓▓▒▒▒▓▓▓▓▓██▓▓▓▒░
░▒▓▓▓████▓▓▓▓▓▓▓▓▓▓▒░░░▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░▒▒▒▒▒▓▓▓▓▓▓▓▓▓▓▓▓▓▓█▓▓▓▒░
░▒▒▓▓▓████▓▓▓▓▓▒▒▒▒▒▒░▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░▒▓▓▓▓▓▓▒▒▒▓▓▓▓▓▓▓▓▒░
░░▒▓▓▓▓██▓▓▓▒▒▒▒▒░░░░░░▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░▒▒▒▓▓▓▓▓▓▓▓▓▓▓▓█▓▓▓░░
░▒▓▓▓▓▓▓▓▒▒░░░░░░░░░░▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░░░▒▒▒▒▓▓▓▓███▓▓▓▒░
░▒▓▓▓▒▒░░░░░░░░░░▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░░░░░░▒▒▓▓██▓▓▓▓▒░
▒▒▒░░░░░░░░░░░░░░░▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░░░░░░▒▒▓▓▓▓▓▓▒░
░░░░░░░░░░░░░░▒░░░░░▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░░░░░░░░▒▒▒▓▓▓▒░
░▒▒▒░░░░▒▒▒▒░░▒▒▒▒▒▒▒▒▓▓▓▓▒▒▒▒▒▒▒▒▒▒▓▓▒▒▒▒░░░░░░░░░░▒▒▒▒▒░
░░▒▒▒▒░▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▓▓▓▓▓▒▒▒▒▒▒░▒▒▒▓▓▓▒▒▒▒▒░░░░░░░░░░▒▒░
░░▒▒▒▒▒▒▒▒▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▒▒▒▒▒▒▒▒▒▒▒▓▓▓▓▒▒▒▒▒░░░░░░░░░▒░░
░░▒▒▒▒▓▓▓▓▓▓▓▒▒▒▒▓▓▓████▓▓▒▒░░░░▒▒▒▒▒▒▒▓▓▓▓▓▓▓▓▓▓▒▒▒░░░░▒░░
░▒▒▒▒▒▓▓▓▓▓▓▓▓▒▒▒▒░░░▒▒▓▓▓▓▒▒░░░░░░░░░▒▓▓████▓▓▓▓▒▒▒▒▒▒░░░░░
░░▒▒▒▓▓▓▓▓▒▒▒▓▓▓▓▓▒▒▒░░░▒▒▓▒▒░░░░░░░░░░░▒▓▓▓▒▒▒▒▒░░▒▒▒▒▒▒░▒░
░▒▒▓▓▓▒▒▒▓▓▓▓▓▓▓▓▓▒▒▒▒▒▒▒▒░░░░░░░░░░░░░▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░
░▒▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▒▒▒░░░░░░░░░░░░░░▒▒▒▒▒▒▓▓▓▒▒░▒▒▒▒░
░▒▓▓▓▓▓▓▓▓▓▓▓▒▒▒▒▓▓▓▒▒▒░░░░░░░░░░░░░░▒▒▒▒▒▒▒▓▓▒▒▒░░▒▒▒░
░▒▓▓▓▓▓▓▓▓▒▒▒▒▒▒▓▓▒▒▒▒░░░░░░ ░░░░░▒▒▒▒▒░░▒▒▒▒▒▒▒▒▒▒▒░
░▒▓▓▓▓▓▓▓▓▓▒▒▒▒▓▓▒▒░░░░░░ ░░░░▒▒▒▒░░▒▒▒▒▓▓▓▓▓▒▒░
░▒▓▓▓▓▓▓▓▒▒▓▓▓▒░░ ░░░░░▒▒▒░░ ░░▒▒░░░▒▓▓▓▓▓▒▒▒▒░
░▒▒▒▓▓▓▓▓▓▓▓▒░░ ░░▒▓▓▓▓▓▓▒░░ ░░▒▒▒▓▓▓▓▓▒▒▒▒░
░▒▒▓▓▓▓▒░ ░▓▓█▓▓▓▒▒▓▓▓▒░ ░▒▒▓▓▓▓▒▒▒▒░
▒▒▓▒▒▒░░░▒▓███████▓██▓░ ░▒▓▓▓▒░░
▒▒▒▒▒▒▒▒▓████▓▓████▓░ ░▒▒▒░
░▒▒▒▒▒▓▓▓██████▓▒▒░░░░
░▒▒▒▒▒▓▓▓▓▓▓▓▒▒▒▒░░
▒▒▓▓▓▓▓▓▓▓▓▒░░
░░▒▒▒▒▒▒░░
It’s generated by the program with this uuencoded source:
begin 644 unicode_image.tar.xz
M_3=Z6%H```3FUK1&`@`A`18```!T+^6CX"?_!IM=`":8269IV=F-Y,!%M1>4
MZX(9LR,YMG1:D2XCM%DZ,0N4%>'\;I?0D"7/H:OI15<M6G@8HO/[&((J'=B.
MXY\G/[7D"Y)B$O)IC]DM9Y@^\4T?'9(.Z.4+7IDF/T0&\7`M5+G#=C!?(>(U
M+-C2%PZ!"(Q'Z_/D^%"[PVKX:A:OKH5WF?AQ=CD_AAS]<3<THTMC0S8FG\<A
MZ;A-_9H?)5S'YG5A?0WUQ7FR0+IS\0AEUYY9QFMY?"$);\U%_R0NK(ZZ/Y&J
MVA;@#O-P.6W8PW']0U"<S'NHB=.(/)OX[<1&UF@M8+GXPGVFQB_+K/WD01ZO
MO#+E!CK;^`V-WGH^?0V5M!IK[KR&]`IR<>6D+ONPJT6E\CZJ^KKZ,W?3O"2K
M!/GHQ&TDN';;P#UC;)+HRPH$`_8JM#ZV`\I6=,PO=U#S33IZ=R!K2IF]\1D@
M*@I6;)=1P[3ICJ%,C5VTH^%^^N7(`)5NO*-SG)Y`QA_WK>PA8;TJ+X2)EV?3
MI.G"[*>WWDZ7\Q_8`@,?X8C9YSMNGQ.S79!10SGB!PGY<3)L+A>\T4NE3RCH
M@$<!]40^I5;'[)@>$KCW3*:VMQ")"FQ!"L?^:Y5K)WM]*CV<",@L38:E&'G;
MOH/\?B8-H-/5$-+1`SZ2O6RHY>T@2+Q"LM7T32<P'/M;:&`9G%<:2]0W95K<
M\.;8"EQMW`_,%NF[4)3]`F-BE^_T2,`,VV:G?3TUJ"IH\A@>7Y8:?5[I8HAX
M/S[+K[U+U);"B>&TTWB[]4K_N9HNW6P5\,!8G[BS*%\3<$#RM"O`6C#I1>3W
ME/8D\";AO1[@@M<^$07W%./W$^^MXKWV/QE?(SEU3GAF"?/TXQ_1N>#/)-;8
M<MT]?:XBQTJ%9'[+D!X/9^"U.Y*2A(8A/AJ\!K^V\)9>''<,=_M*GQ;D4XU&
M0EL_A7==:2!F6+.18],W)'0*A8VHHT@F\^+U@*=PAW&]_UDA'O3HL,)67*56
M:4QHF]DC7*EMI@?8=/,8;'O4[2W^#V!$L.(O\+@E"I[>5AF7V].]DSR0?>4E
MX,HXX%S-A'V'+F)3(0_FX[O!VNO&D/BL`T<!(LJ,(@3!S8LJ[><9CHP*Z!1N
M*W=F7"R-"ZZ_7"_NQ:8M=&RU\7`Z`<J>2YY>W2B\R7!YX(:UE+7?[1HLZ6?B
M_$Z<[I3[+!'?["<`BV0HHVNVZ$?*E!CLQ,Q'U$5$IQ?#B#+P)WE/$$-*'U<E
M%+7[;;^I\)F++WU`\\BL@X5,*B+]OLWQ&=W!,3*4_5Z0'R\2N/\];P>]W2<(
M-7Z3$YL3&6;-"*NTT2_Z1P=4">JZT,0Y$Q`L;RU@2\X!6>NA6D:5#HOIP#H]
MH)2I8$WFSU,1M9OC73J.1T1-YWD[%EH?1E*H#MV/[+5HSKGU.-%-'Z)QI=$\
MV>24Q6&KMA*-=L#[#I[2'0$86N)&8E/==`F@,S#5,`)(-KDT6A9IK-/%6OA@
M@FI$$#7G>&.Z!8[?8:F==P>HX>WF&.?(9V6^WM`J[CVD`L]9<&\6P_U?*WN`
MLI*_M*H;SP58M&#X!>*U*^J*XO@UT"&SIGH1%(K-=7DN@=HD6`S2EET,60JV
MI_\%)%6Q_^3CW_5`HQ;G084_7J0'F9DDH*%`SY*.D1BP`"D_QO=5,F?$-HAG
M_FP7H+LUTX`^%F-[SV(C'N*+AXE=&!+'OT$)RYGQ/HX,L8W(D%G9J=P!6+*$
M)F20)=%>9ZI).Z0`I'T/OT#SUR_:(O0U1*2-:,\D0S52^NI?HL69"POCNH&X
M_HXLQZB3EL-Z<)4.!<<BDJX3H"Q`L'&"RLO/%]17EV.5R@/$,%GYE#U(,Z'.
M6#]?M?@0VYB%WU2-4E:Z&9RN,"SCQAYJ70='?0`L5JC6GG#:1BG]DBCY;N)<
M>[;>JU-P]W=*RQG_KX;[>Y-0O.>_BS[M!=3Y#98EA`S8J/\1S=Z..*RC^;+U
M!.(#>E-V^?_+/M323Q,+EM-95M%CT#G[XO0FH/`.&`__EU<3\=+#>7?FR*NY
M9MA;$1+KD8?V@Y8XE7`(*;.N\KEF1]T4!OYS1+%#*S9&[0-#E"FRGA^\L[^A
M\76@2CMV<J_S92KW%;UO$.=R!3!P]OD.WD@*ZE(.;>H)8L]IMC;<YHPNH343
MY,JGKBM7M:!:S]$UJ7Y-/A'>Z]^7LV*Y.]N\MN_#%%%>)IH_A_:G46]+E.M;
MUA@I>99IP916P;7A48N3VF+;&!__1Q<QF8AU`XF-LJ./^6J+@+JCLICOF=I-
M.U"KKV.._JR/;P(````4=99=LD1P/P`!MPV`4```*JOOO;'$9_L"``````19
!6@``
`
end
I’ve also seen various programs that use the Braille Unicode characters for higher-resolution bitmap rendering, like mapscii, and I suspect that someone’s probably written software to convert to that.


I was kind of interested in doing this for Unicode a while back, which has the potential to provide for a lot more possible characters and thus a lot more-accurate renditions. If you don’t care about real-time operation, I suspect that you can do this with off-the-shelf software by just writing a small amount of code to generate an image for each character — a shell script driving ImageMagick could do it — and then feeding it into photomosaic software, like metapixel.
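The tile-generation step might look something like this (a sketch; the font name and character range are placeholders, bash 4.2+ is assumed for the \U escape, and ImageMagick 7 spells the command magick rather than convert):
#!/bin/bash
# render each Unicode block-element character to a small PNG tile;
# the tiles then get fed to photomosaic software such as metapixel
mkdir -p tiles
for cp in $(seq $((0x2580)) $((0x259F))); do
    ch=$(printf "\\U$(printf '%08X' "$cp")")
    convert -size 16x32 -background black -fill white -gravity center \
        -font DejaVu-Sans-Mono "label:$ch" "tiles/$cp.png"
done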
The major limitation is that unless you’re just interested in doing this for the text-based aesthetic and are actually rendering and presenting an image to the end user — think something like Effulgence RPG, Warsim, Armoured Commander II, Armoured Commander, Cogmind, SanctuaryRPG, Cataclysm: Dark Days Ahead, Stone Story RPG, Roots of Harmony, and so forth — you can’t control the font that the thing is rendered in on the end user’s computer. And the accuracy of the rendering degrades the more the typeface used on an end user’s computer differs from your own.
It’d probably be possible to build some kind of system that does take the differences between typefaces into account, scoring characters higher when they render similarly across different typefaces.
Note that there are also at least two existing libraries out there (the ones I can think of off the top of my head) that will do image-to-ASCII conversion: aalib and libcaca, the latter of which has color support. I also posted a tiny program some time back to generate images using the colored Unicode tiles, and I imagine that someone out there probably has a website that does the same thing.
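libcaca also ships a command-line converter, so trying it out is a one-liner (assuming the img2txt utility from the caca-utils package is installed; photo.jpg is whatever image you want converted):
img2txt -W 80 -f utf8 photo.jpg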


There was a famous bug that made it into Windows 95 and 98, a tick counter that caused the system to crash after about a month. It was in there so long because so many other bugs were causing stability problems that it wasn’t obvious.
I will say that classic Mac OS, which is what Apple was shipping at the time, was also pretty unstable. Personal computer stability really improved in the early 2000s, when Mac OS X came out and Microsoft shifted consumers onto a Windows NT-based OS.
EDIT:
https://www.cnet.com/culture/windows-may-crash-after-49-7-days/
A bizarre and probably obscure bug will crash some Windows computers after about a month and a half of use.
The problem, which affects both Microsoft Windows 95 and 98 operating systems, was confirmed by the company in an alert to its users last week.
“After exactly 49.7 days of continuous operation, your Windows 95-based computer may stop responding,” Microsoft warned its users, without much further explanation. The problem is apparently caused by a timing algorithm, according to the company.
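For what it’s worth, 49.7 days is exactly what you’d expect from a 32-bit counter of milliseconds wrapping around: 2^32 ms / (1000 × 60 × 60 × 24 ms per day) ≈ 49.7 days.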


My anecdotal experience is that Vista - while pretty - is a bit of a bloatfest regardless of what hardware you run it on.
I use Linux, so I haven’t personally run into it, but is that just because of the Aero interface stuff? IIRC a lot of that can be disabled.


I’m using Debian trixie on two systems with (newer) AMD hardware:
ROCm 7.0.1.70001-42~24.04 on an RX 7900 XTX
ROCm 7.0.2.70002-56~24.04 on an AMD AI Max 395+.