AI is the Loom.
You are the Luddite.
Big tech is the Workhouse Owner.
Correct. Similar problem, different era. OSS collectivized the means of production for software dev to some extent. AI will make that meaningless and chain software development to capital again. That alone is a reason to fight it.
I mean…looms actually seem useful. My experience with large language models is that they’re only useful when the output doesn’t really matter. Like…they’re fine if you’re “searching” for things that aren’t really defined and you don’t really care about the answer (e.g. “what are the five trendiest coffee shops in Barcelona that are likely to have English-speaking staff?” It can’t actually know any of that…what does “trendy” even mean? Whatever, who cares, go to a coffee shop on your vacation, have a nice time).
But when it matters, you just cannot rely on them. They can’t be trusted to use the correct words when precision of language matters, and they can’t do “research” or “analysis” in any meaningful sense…like maybe better than a sharp middle-schooler? But not as well as a dumb undergrad.
And, understanding what the technology is, I don’t see any reason to think they’ll get better at those things. It’s predictive in nature. You know…maybe it’ll go from 60% reliable to 90% reliable over the next hundred years, because they’ll find some way to focus on high-quality, relevant training data while still using a gigantic corpus to get the model up and running…? But since it’s fundamentally a predictive model (trying to predict what a good answer would look like), it’s never going to be something you can actually rely on for answers when it matters.
And idk what the cost would be when factoring in all the externalities…environmental destruction, energy consumption…hell, even the infrasound from data centers fucking up everyone’s brains…like…there’s just no way this makes any economic sense. Right now it’s all mega-subsidized, but when that comes to an end…is it gonna cost $10 per prompt on average? $50? Idk, but I know everyone using it now will not want to pay that.
Looms are bad. They don’t do silk. They can only do big blocks of material. Even so, lots of people want to use them.
But my point is that going around attacking AI will be about as effective as the Luddites destroying machinery.
The villains are the same in both cases. Capitalists.
AI is a broad term; of course neural networks and machine learning have been important in a lot of research, etc. That’s all great. But LLMs…they’re all anyone wants to talk about (maybe image generation too), and they’re junk for any application that matters.
If looms could only make burlap, and the capitalists tried to make burlap underwear a thing, I think the Luddites would have been wise to say to the public, “hey, don’t buy this crap…it’s uncomfortable!” Of course, in reality, auto-looms could do a lot of the same stuff traditional weavers could. So when techbros say LLM output is great, I think pointing out that it’s generally garbage is effective. The Luddites couldn’t really say that the output was significantly inferior (or maybe it was and people didn’t notice…Jesus, I hope that’s not the case with this garbage!).
Maybe that’s what we disagree about. To me, the auto-looms are only making burlap and I don’t see any reason to think they’re going to get much better. And they’re lighting the planet on fire :P
I am not willing to capitulate to this kind of BS: “LLMs are very useful and they’re clearly here to stay.” I just think that’s horseshit. That’s what the capitalists selling them want you to think, but I genuinely believe that if you ever look at it critically, you’ll see it for what it is.