AI is an amazing tool for fascists.
Annihilate private access to computing, censor and rewrite all comms, destroy free software and the last remnants of education…
Every single decision made for evil.
And all these vendors who are locking themselves into one customer are about to learn why that’s a bad idea.
The worst thing is that, when used for good, AI is fantastic! Purpose-built AI for finding planets, predicting the weather, and tons of other pattern matching has been in scientific use for decades, with real benefits!
Even LLMs can be a useful tool in the right situations: ones where output that reads like something a person would say matters, but accuracy does NOT.
The problem is trying to use LLMs to do everything, failing, and running the tech industry, the environment, and soon the economy into the ground. They took something positive, ruined it, and co-opted the terminology while shoving it down everyone’s throats.
I’m not here to argue for or against LLMs in general, but self-hostable AI is a thing. Actually-open AI is a thing.
A blanket statement that AI as a whole technology is good for fascism is about as useful as saying “roads are good for fascism” (they’re great for troop movement, after all).
You can’t self host anything when the hardware is no longer affordable.
Self-hosting an LLM ain’t the same thing as self-hosting Nextcloud for your docs and calendar. Yes, there are small models, but their output is laughable.
Small models are improving and becoming more capable. The quality of local LLMs is basically unbounded; it’s the context size that’s bounded by your hardware. So local LLMs can be very capable for small, self-contained tasks.
qwen 3.6 35b running locally:

Write a Python script that can pull weather data from public sources and provide the high and low temperature for the current day in Miami, FL.

Single shot. No tool/internet use, so it didn’t pull this script from elsewhere.

import requests

def get_miami_weather():
    # Miami, FL coordinates
    LATITUDE = 25.7617
    LONGITUDE = -80.1918

    # Open-Meteo API URL (free, no API key required)
    url = (
        f"https://api.open-meteo.com/v1/forecast?"
        f"latitude={LATITUDE}&longitude={LONGITUDE}"
        f"&daily=temperature_2m_max,temperature_2m_min"
        f"&timezone=auto"
    )

    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()  # Raises error for 4xx/5xx HTTP status codes
        data = response.json()

        # Index 0 corresponds to the current day
        high_c = data["daily"]["temperature_2m_max"][0]
        low_c = data["daily"]["temperature_2m_min"][0]

        # Convert to Fahrenheit (commonly used in the US)
        high_f = (high_c * 9/5) + 32
        low_f = (low_c * 9/5) + 32

        print("🌤️ Miami, FL Weather for Today:")
        print(f"High: {high_f:.1f}°F ({high_c:.1f}°C)")
        print(f"Low: {low_f:.1f}°F ({low_c:.1f}°C)")
    except requests.exceptions.HTTPError as http_err:
        print(f"❌ HTTP error occurred: {http_err}")
    except requests.exceptions.ConnectionError:
        print("❌ Error: Could not connect to the weather API.")
    except requests.exceptions.Timeout:
        print("❌ Error: Request timed out.")
    except requests.exceptions.RequestException as err:
        print(f"❌ An error occurred: {err}")
    except KeyError as key_err:
        print(f"❌ Error parsing data: Missing expected key {key_err}")
    except Exception as err:
        print(f"❌ Unexpected error: {err}")

if __name__ == "__main__":
    get_miami_weather()

Output:
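For anyone who, like me, can’t read Python and wants to sanity-check the script’s logic, the parsing and unit conversion can be exercised without touching the network. This is my own minimal sketch, not part of the generated script: `extract_today_high_low`, `c_to_f`, and the sample payload are mine, invented to mirror the shape Open-Meteo returns, and the temperature numbers are made up.

```python
# Minimal sketch (not from the generated script): exercise the same
# parsing and Celsius -> Fahrenheit logic against a hand-made payload.

def extract_today_high_low(data):
    # Index 0 of the "daily" arrays corresponds to the current day.
    high_c = data["daily"]["temperature_2m_max"][0]
    low_c = data["daily"]["temperature_2m_min"][0]
    return high_c, low_c

def c_to_f(c):
    # Same conversion formula the script uses.
    return c * 9 / 5 + 32

# Hand-made payload mimicking the shape of an Open-Meteo "daily" response;
# the values are invented for illustration.
sample = {
    "daily": {
        "temperature_2m_max": [30.0, 31.5],
        "temperature_2m_min": [24.0, 23.5],
    }
}

high_c, low_c = extract_today_high_low(sample)
print(f"High: {c_to_f(high_c):.1f}°F ({high_c:.1f}°C)")  # High: 86.0°F (30.0°C)
print(f"Low: {c_to_f(low_c):.1f}°F ({low_c:.1f}°C)")     # Low: 75.2°F (24.0°C)
```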
I tried to keep the size and scope within something that would reasonably fit in a comment. Looks pretty decent to me, but I can’t write Python myself. Never learned. I double-checked the LAT & LON of Miami, and it’s spot on.
It did take 47 seconds, while a cloud LLM would probably take 5 or less.
All I’m saying is local LLMs aren’t garbage, and they’re getting better all the time.
That’s interesting.
How much ram did it use while running?
If you used a GPU, how much does it cost in today’s prices?
It’s a MacBook Pro with 36 GB of RAM. Apple Silicon Macs have an integrated GPU and use unified memory, so the GPU shares the system RAM, but I don’t really know Mac hardware very well.
It’s beefy for a laptop, but the desktop I built for myself several years ago had 32 GB of RAM and a GTX 1660, so I’m guessing they’re similar in capability. I gave that to my daughter, so I can’t run a comparison right now.
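For a rough sense of what hardware a given model needs (my own back-of-the-envelope arithmetic, not anything from this thread): the weights alone take roughly parameter count times bytes per weight, before you add KV cache and runtime overhead.

```python
def approx_weights_gb(params_billions, bits_per_weight):
    # Rough memory for the weights alone: params * (bits / 8) bytes,
    # expressed in decimal GB. Ignores KV cache, activations, and overhead.
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 35B-parameter model quantized to 4 bits per weight:
print(approx_weights_gb(35, 4))   # 17.5 (GB) -> fits in 36 GB of unified memory
# The same model at 16-bit precision:
print(approx_weights_gb(35, 16))  # 70.0 (GB) -> does not
```

This is why quantized models are the usual choice for local inference: the same weights at 4 bits need a quarter of the memory of a 16-bit copy.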
Or available. Companies have pre-sold years’ worth of inventory to AI companies.
You see how that’s tangential to what you’re replying to?
Your point is completely valid, but in another discussion.
Sorry, but I think the point about local AI not necessarily being evil is the tangent here.
The OP is about motherboard shortages, which are being driven by the big AI companies and are making hardware unaffordable for normal users.
The top-level reply to that is about how that’s bad because it removes people’s ability to be in control of their own computing.
Then someone comes in saying, “yeah, but you can host your own AI so that it’s not evil, so not all AI is bad.”
Then someone points out that you can only host your own AI if you can afford the hardware to do so, which, as the OP and the comment you replied to pointed out, is getting really hard to do.
Only when you ignore what was literally the first premise and conclusion.
If you did not understand the comment above, that’s fine, but splitting hairs like you’re doing is silly (everybody knows that not 100% of AI is 100% evil)…
Your comment is exactly the same as when people say “guns don’t kill people, people kill people”… Yes, we all know guns are not autonomously killing people. The point is that guns, as a tool, are remarkably good at doing something we do not want, which is killing people.
Not to go on a separate tangent, but that’s the entire point of guns. They are supposed to kill. That’s not meant to be some crazy conservative defense of them or opposition to regulating them. Just pointing out something that seems to get lost in conversations.
Correct… so when I tell you “guns DON’T kill people, people kill people,” you are right to assume I am just an idiot jangling keys in front of you to distract you from the fact that guns do, in fact, kill people.
Corps want to privatize roads and make them all toll roads too
They are succeeding in my area.
I’ve looked into self-hosted AI and decided it’s not worth the cost - both in terms of hardware and energy - when compared to the relative value to be gotten out of it. YMMV.
Same, pretty much. It is possible though, which makes LLMs a more democratic technology than, say, nuclear reactors.
The models you can run on consumer hardware are still nowhere near the stuff that runs in corporate data centers. To stick with your metaphor, it’s like running a little steam engine at home while the big guys get to operate nuclear reactors…
Roads were also useful for random citizens and people who happened to be in the area.
LLMs are overwhelmingly more useful to bad actors.
The US government is already laying down the legal framework to make self-hostable AI illegal, so good luck with that. Also, self-hostable AI is still being trained on stolen material, so it’s still fascist.
Not the best comparison given the interstate highways systems’ history of disproportionately targeting Black and Latino communities.
That’s kinda my point. Roads are a useful technology, but they can be used by fascists.
This article talks exactly about that: https://tante.cc/2026/04/21/ai-as-a-fascist-artifact/
I don’t know if spending unfathomable amounts of money buying up the entire global supply of computing products can be considered a “tool”.