Or we can refuse to listen to anti-“AI” crazies.
Exactly, it’s weird the anti movement, AI is exceptional in so many ways. Maybe the ruling class don’t want the general population to have access to and like something so fundamentally useful and have created a smear campaign
Guys trying to shove AI into everything: oh nooo, pleeeaaase don’t use our product, we would haaaate it if you did that!
Worst conspiracy ever.
It’s not the product makers running the smear campaign; it’s the people who don’t want the general populace to access powerful AI.
So they have the power to run a smear campaign that would alter society’s opinion on AI—which is so very useful and everyone would love it otherwise—but they don’t have the power to influence the building of billions of dollars of infrastructure to power these data centers?
The people pushing this are the people that benefit from the general populace being dumb as shit. That’s why they’re pushing it. AI doesn’t make anyone more informed or intelligent, it’s an echo chamber.
I’ve never seen this many downvotes on a comment. 👏
And willingly eat the dick billionaires are shoving in your face? Because that’s what AI is; people are forced to use it because their bosses demand it.
So maybe you should fight against dick-shoving billionaires, not against useful tools?
AI is not a useful tool, it’s a lock-in subscription that chains you to those billionaires. Fighting against AI is fighting against control by billionaires.
AI is the Loom.
You are the Luddite.
Big tech is the Workhouse Owner.
Correct. Similar problem, different era. OSS collectivized the means of production for software dev to some extent. AI will make that meaningless and chain software development to capital again. That alone is a reason to fight it.
I mean…looms actually seem useful. My experience with large language models is that they’re only useful when the output doesn’t really matter. Like…they’re fine if you’re “searching” for things that aren’t really defined and you don’t really care about the answer (e.g. “what are the five trendiest coffee shops in Barcelona that are likely to have English-speaking staff?”). It can’t actually know any of that…what does “trendy” even mean? Whatever, who cares, go to a coffee shop on your vacation, have a nice time.
But when it matters you just cannot rely on them. They can’t be relied on to use the correct words when precision of language matters, and they can’t do “research” or “analysis” in any meaningful sense…maybe better than a sharp middle-schooler? But not as well as a dumb undergrad.
And, understanding what the technology is, I don’t see any reason to think they’ll get better at those things. It’s predictive in nature. You know…maybe it’ll go from 60% reliable to 90% reliable over the next hundred years because they’ll find some way to focus on high-quality, relevant training data while still using a gigantic corpus to get the model up and running…? But since it’s fundamentally a predictive model (trying to predict what a good answer would look like), it’s never going to be something you can actually rely on for answers when it matters.
And idk what the cost would be when factoring in all the externalities…environmental destruction, energy consumption…hell, even the infrasound from data centers fucking up everyone’s brain…like…there’s just no way this makes any economic sense. Right now it’s all mega-subsidized, but when that comes to an end…is it gonna cost $10 per prompt on average? $50? Idk, but I know everyone using it now will not want to pay for it.
Looms are bad. They don’t do silk. They can only do big blocks of material. Even so, lots of people want to use them.
But my point is that going around attacking AI will be as effective as the luddites destroying machinery.
The villains are the same in both cases. Capitalists.
AI is a broad term; of course neural networks and machine learning have been important in a lot of research etc. That’s all great. LLMs…it’s all anyone wants to talk about (maybe image generation too) and it’s junk for any application that matters.
If looms could only make burlap, and the capitalists tried to make burlap underwear a thing, I think the Luddites would be wise to tell the public “hey, don’t buy this crap…it’s uncomfortable!” Of course, in reality, auto-looms could do a lot of the same stuff traditional weavers could. I think that when techbros say LLM output is great, pointing out that it’s generally garbage is effective. The Luddites couldn’t really say that the output was significantly inferior (or maybe it was and people didn’t notice…jesus, I hope that’s not the case with this garbage!).
Maybe that’s what we disagree about. To me, the auto-looms are only making burlap and I don’t see any reason to think they’re going to get much better. And they’re lighting the planet on fire :P
I am not willing to capitulate to this kind of BS: “LLMs are very useful and they’re clearly here to stay.” I just think that’s horseshit. That’s what the capitalists who are selling them want you to think, but I genuinely believe if you ever look at it in a critical context you’ll see.
How about allowing AI to be rolled out and adopted organically instead of force-feeding it to everyone?
That’s a question for corporations, not for LLMs.
That would be true of any technology. Technology doesn’t have agency; my comment is clearly directed at corpos that are pushing it.
If they’re so useful, why are they being forced on everyone, including by making them part of performance reviews?
If they’re useful people will naturally use them.
And people do use them. Naturally.
What about that “forcing” thing you’re talking about? Look around. Corporations force everything on you. Why should this cool new technology be an exception? You’re forced to watch sports events, listen to modern music, wear fashionable clothes, kiss your beloved leader’s ass, hate those evil Cubans or Ukrainians (depending on who your owner is).
temu tyler durden
The projection is strong in this one
Wow, you must be the weakest and the easiest to manipulate and brainwash human being on the planet.
I very often do push back against things that they try to force down everyone’s throat. AI is not exceptional in this regard.
No, I do none of those things. You just sound like a resentful conformist. Maybe try thinking and doing for yourself, instead of thinking and doing as you’re told.
So cool…
Nobody’s fighting against you guys. Why do sloperators have to take everything personally? Smh my head.
“AI Tools” describes both the product and the people who use them.
What “usefulness” do you get out of them?
They save me a tremendous amount of time when I’m searching for something in documentation. Especially when I don’t know whether it’s actually there.
Are you not familiar with “ctrl+f”?
Didn’t know ctrl-f could parse natural language and not only rely on knowing the correct keyword. When did it gain that functionality?
Better question: when did you lose basic keyword-search skills? I know you may want your answers on a platter, but realize that there’s value in manual searches. Using an LLM to search the page you’re already on is questionable on so many levels.
There’s really no value in doing something the hard way, and if it were a one-page thing this wouldn’t be an issue.
Using their example, you could get an LLM to return the correct page in some documentation by searching an entire site based only on a rough concept of the feature set you’re looking for. Ctrl-F cannot do that.
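The difference being argued here is concept matching versus exact keyword matching. Here’s a deliberately tiny sketch of the idea: the page titles, bodies, and the synonym table are all invented for illustration, and the hand-rolled synonym map is just a stand-in for what a real system would do with vector embeddings or an LLM.

```python
# Toy sketch of concept-based doc search. Everything here (pages, synonyms)
# is made up for illustration; a real system would use embeddings or an LLM,
# not a hard-coded synonym map.
from collections import Counter

# Hypothetical documentation pages: title -> body text.
PAGES = {
    "Undoing changes": "revert reset restore a commit you did not mean to make",
    "Branching basics": "create switch delete branches for parallel work",
    "Remote repositories": "push pull fetch clone sync with a remote server",
}

# Tiny synonym map standing in for semantic similarity.
SYNONYMS = {"undo": "revert", "mistake": "revert", "upload": "push",
            "download": "pull", "copy": "clone"}

def normalize(text: str) -> Counter:
    """Tokenize, lowercase, and fold synonyms onto canonical terms."""
    return Counter(SYNONYMS.get(t, t) for t in text.lower().split())

def best_page(query: str) -> str:
    """Return the page whose body shares the most (synonym-folded) terms."""
    q = normalize(query)
    return max(PAGES, key=lambda title: sum((q & normalize(PAGES[title])).values()))

print(best_page("how do I undo a commit"))        # -> Undoing changes
print(best_page("upload my work to the server"))  # -> Remote repositories
```

Note that Ctrl-F on “undo” would find nothing in any of these pages; the query only lands on the right one because “undo” gets mapped to the concept “revert”. That mapping is exactly the part a keyword search lacks.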
If that tool didn’t come at the destructive costs involved, AI would be a lot more palatable.
YouTube storing shitillions of dickabytes of cat videos “costs” much more while being completely useless. But those are funny cat videos. Hands off of those videos. Yes?
If AI is just like a video data center, why did data center energy usage stay stable before AI? Why has data center energy usage doubled since 2010 if videos and AI are equivalent in energy usage?
You may think you’re being clever, but that is hardly a reasonable comparison while also ignoring the glaring corporate irresponsibility underlying both.
If that’s what it takes to stop the excessive destruction caused by unregulated data center construction and operation, yes.
What “yes”?
Do you need to reread my comment instead of reacting to a single word of it?
You need to reread my comment.
I don’t care what you do but keep your hands off those videos. I need them for things.
They’re not useful tools.
You’re screaming into the echo chamber, mate. Unless you’re so rabidly anti-AI you believe and spread one of a few comforting, imaginary narratives, you’ll be dog piled.
I’m staunchly critical of AI, but I won’t pretend that it consists only of generative AI, that it still performs as poorly as it did years ago, or deny that a disturbing percentage of the population either doesn’t care about or actively supports that shit, so I get my share of insults. Being pro-AI won’t get you much civility, so set your expectations low. Unless you’re trolling. Then you’ve nailed it.
I think if people don’t like it they don’t need to use it. I’m OK with rules stating AI-generated content must be labeled, though. I wouldn’t even mind a toggle so it could just be turned off.
But it is tech that is here to stay. It is useful to many people.
Tell that to the people living near new data centers who can’t get clean water and are being charged exorbitant rates for electricity. They have no say in the matter.
This is occurring all over the US; these issues are far from isolated incidents.
Useful to whom? If you need an LLM just to write a basic e-mail/comment/caption, you’re maybe…how do I say this nicely? Not that smart…
If you use an LLM as a search engine, same thing.
If you use an LLM as a psychologist, same damn thing.
And the majority of people are using it for those things. It’s just plain stupidity. I’m not saying there’s no use for AI, but right now it’s being used in a terrible way by people who have no use for it at all.
It’d be fine if that were the case. Right now, if you don’t like it, you’re still forced to read (and often review) AI-generated ramblings, communicate with LLMs instead of humans when contacting support, accept AI-specific terms even if you won’t use the AI part of a product, have data centers polluting your city, and pay a ridiculous amount of money for a stick of RAM.
That would be wonderful if those anti-AI folk would stop using LLMs and switch their attention to something more constructive. But they can’t.
Imagine letting an entertainment product write your code for you. Why the fuck are you doing this if you don’t even like the act of programming?
No idea why people do that. I suppose some people are too dumb to write code, and some other people are too dumb to understand what programmers use LLMs for. What do dumb people do best? Attention-whoring, screaming, and throwing hysterical tantrums.
How do dumb people get dumber? By letting AI do their work and thinking they’re saving time, when in reality, as multiple studies have shown, they are not. I’d recommend you do some research and read the studies, but you’ll probably just go to your sycophantic AI agent for that.
There’s basically no time saved, but people do get dumber because they’re not thinking critically themselves anymore.
This dumbness and lack of critical thinking is very apparent in your case.
I get that writing boilerplate and unit tests can probably be done by software well enough, at least when supervised.
I’ll be honest, that’s not even my real issue.
My real issue is that programming, devops, and systems administration are art forms, every bit of them, from high-level application architecture down to the tiniest details of implementation.
Like how much of a library you choose to include, what you name your variables, what type of loops you use to iterate through data. How you choose to format and comment your code.
Handing these choices to the machine is like a painter handing over their brush.
Just as images generated by Stable Diffusion will never be worth their fully human-painted equivalents, LLM-developed programs will fail to hold that value.
For what it’s worth, this isn’t new. I’ve held contempt for VC-worshipping developers who see programming as a means to an end for far longer than LLMs have been used for serious work.
🙄
The thing is, people using this stuff are doing harm to the planet and society, so leaving them alone is not going to happen.