I don’t think File Explorer on Windows uses fork() to copy files? If it does, that’s insane. I don’t think git calls fork per-file or anything either, does it?
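For what it’s worth, a plain file copy is just a read/write loop (or a single kernel copy call) inside the current process; nothing about it needs a child process, let alone a fork() per file. A minimal sketch in Python with throwaway temp files, just to illustrate the general idea (not how Explorer or git actually implement copying):

```python
import os
import tempfile

# A file copy is just a read/write loop in the current process --
# no new process is spawned, and certainly no fork() per file.
def copy_file(src: str, dst: str, chunk_size: int = 1024 * 1024) -> None:
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(chunk_size):
            fout.write(chunk)

# Self-contained demo with throwaway files.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "a.bin")
    dst = os.path.join(tmp, "b.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(4096))
    copy_file(src, dst)
    assert os.path.getsize(src) == os.path.getsize(dst)
```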


This sounds like it takes away a huge amount of creative freedom from the writers if the AI is specifying the framework. It’d be like letting the AI write the plot, but then having real writers fill in details along the way, which sounds like a good way to have the story go nowhere interesting.
I’m not a writer, but if I were to apply this strategy to programming, which I am familiar with, it’d be like letting the AI decide what all the features are and then having me go build them. Considering more than half my job is stuff other than actually writing code, this seems overly reductive, and it underestimates how much human experience matters in deciding a framework and direction.
I fully blame this on NTFS being terrible with metadata and small files. I’m sure everyone’s tried copying/moving/deleting a big folder with thousands of small files before and watched the transfer rate drop to nearly zero…
On the bright side, you’re getting paid to wait around
( /s because I know the feeling, and it’s just slow enough you can’t step away and do something else)


What improvements have there been in the past 6 months? From what I’ve seen, the AI is still spewing the same 3/10 slop it has since 2021, with maybe one or two improvements bringing it up from 2/10. I’ve heard several people say some newer/bigger models actually got worse at certain tasks, and the supply of clean training data has pretty much dried up, so there isn’t much left to train new models on.
I just don’t see any world where scaling up the compute and power usage is going to suddenly improve the quality by orders of magnitude. By design, LLMs output the most statistically likely response, which almost by definition is going to be the most average, bland response possible.
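A toy illustration of that last point (the tokens and probabilities below are made up for the example, not taken from any real model): greedy decoding literally picks the argmax of the next-token distribution, and low temperatures push sampling even harder toward the mode.

```python
import numpy as np

# Made-up next-token distribution: the safe, common continuation dominates,
# the interesting ones sit in the tail.
tokens = ["the usual phrasing", "a bland filler", "an unexpected twist", "a weird metaphor"]
probs = np.array([0.55, 0.30, 0.10, 0.05])

# Greedy decoding: always take the single most likely continuation.
print("greedy:", tokens[int(np.argmax(probs))])

# Temperature sampling: T < 1 sharpens the distribution toward the mode,
# T > 1 flattens it and lets the tail tokens through more often.
def apply_temperature(p, temperature):
    logits = np.log(p) / temperature
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
for T in (0.5, 1.0, 1.5):
    p = apply_temperature(probs, T)
    print(f"T={T}: p={np.round(p, 3)} -> sampled: {rng.choice(tokens, p=p)}")
```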


This is based on the assumption that the AI output is any good, but the actual game devs and writers are saying otherwise.
If the game is too big for the writers to finish on their own, they’re not going to have time to read and fix everything wrong with the AI output either. This is how you get an empty, soulless game, not Baldur’s Gate 3.
Legitimately it is a winning strategy: https://www.history.com/articles/us-invasion-of-panama-noriega


I don’t think it really matters how old the target is. Generating nude images of real people without their consent is fucked up no matter how old anyone involved is.


“A computer can never be held accountable, therefore a computer must never make a management decision.”
– IBM Training Manual, 1979
We’re going so far backwards…


Well, it’s physically impossible to capture more energy from burning hydrogen and oxygen than it took to split the water into them in the first place. Combustion engines are only something like 30-40% efficient in ideal operating conditions.
Building a repairable car on the other hand is very much possible.
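Rough numbers to back that up (both efficiency figures below are ballpark assumptions, not measurements of any particular system):

```python
# Back-of-the-envelope round trip for "split water, then burn the hydrogen".
electrolysis_eff = 0.70  # electricity -> hydrogen, roughly 60-80% in practice (assumption)
engine_eff = 0.35        # hydrogen -> shaft work in a piston engine, ~30-40% (assumption)

round_trip = electrolysis_eff * engine_eff
print(f"Energy recovered as motion: {round_trip:.0%} of what went into splitting the water")
# ~25%: the rest is lost as heat, which is why burning the hydrogen can never
# give back more energy than it took to split the water.
```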


The diminishing returns are kind of insane if you compare the performance and hardware requirements of a 7B and a 100B model. In some cases the smaller model can even perform better, because it’s more focused and its hallucinations tend to be less subtle, so they’re easier to catch.
Something is going to have to fundamentally change before we see any big improvements, because I don’t see scaling it up further ever producing AGI, or even fixing the hallucinations and logic errors it makes.
In some ways it’s a bit like the crypto/blockchain speculators saying it’s going to change the world, when in reality the vast majority of proposed applications would have been better implemented with a simple centralized database.
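To put the hardware gap in rough numbers (weights only; KV cache, activations, and everything else are ignored, and the bytes-per-parameter figures are just the standard precisions):

```python
# Back-of-the-envelope memory needed just to hold the model weights.
def weight_memory_gib(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

for params in (7, 100):
    for precision, nbytes in (("fp16", 2), ("int8", 1), ("int4", 0.5)):
        print(f"{params:>3}B @ {precision}: ~{weight_memory_gib(params, nbytes):6.1f} GiB")
# A 7B model at int4 (~3 GiB) runs on a laptop GPU; a 100B model at fp16
# (~186 GiB) needs a multi-GPU server -- a huge jump in hardware for what is
# often a much smaller jump in output quality.
```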


Check again. Going from 600:1 to 60:1.
On the bright side, after 10 years of doing it, you might improve the ratio to 1 hour of feeling like an idiot and 1 minute of feeling like a genius.


It’s a struggle even finding the manual these days if you don’t already know where it is or what it’s called. I was searching for an issue with my car recently, and like 90% of the results were generic AI-generated “How to fix ______” pages with no actual information specific to the car I was searching for.


It must be hard to admit he spent billions on a slop machine. Sunk cost fallacy is probably one of many things they’re fighting.


AI Company: We added guardrails!
The guardrails:



Well, on the bright side, maybe in a few years when people search for “office software” they’ll be directed to LibreOffice instead of Microsoft.


“High Quality Audio” in terms of sample rate and bit depth, maybe, but considering the quality of the DAC/ADCs that get integrated into cables like this, I somehow doubt the data rate is actually the limiting factor on quality.
Personally I can’t tell the difference between 192 kHz and 96 kHz sample rates, or between 16-bit and 24-bit, either (maybe a young kid with perfect ears could, but they’d probably also notice the background noise from most of these running on unfiltered USB power). The dongle manufacturers seem to care more about the marketing value of bigger numbers than actual usability.
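The raw PCM math backs that up (the data rates below are exact for uncompressed stereo PCM; the USB 2.0 figure is the spec’s high-speed rate, and the audibility conclusion is just my own take):

```python
# Raw data rate for uncompressed stereo PCM at the advertised formats.
def pcm_kbps(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> float:
    return sample_rate_hz * bit_depth * channels / 1000

for rate, depth in ((44_100, 16), (96_000, 24), (192_000, 24)):
    nyquist_khz = rate / 2 / 1000
    print(f"{rate / 1000:g} kHz / {depth}-bit: {pcm_kbps(rate, depth):,.0f} kbps "
          f"(captures content up to ~{nyquist_khz:g} kHz)")
# Even 192 kHz / 24-bit stereo is only ~9,216 kbps (~9.2 Mbps), a tiny slice of
# USB 2.0 high-speed's 480 Mbps, and plain 44.1 kHz already covers the ~20 kHz
# ceiling of human hearing. The bottleneck is the analog side of a cheap DAC,
# not the digital link.
```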


My experience has been that using AI only accelerates this process, because the AI has no concept of what good architecture is or how to reduce entropy. Unless you can one-shot the entire architecture, it’s going to immediately go off the rails. And if the architecture was that simple to begin with, there really wasn’t much value in the AI in the first place.