Small models keep getting more capable. The quality of a local LLM isn't really capped, but its context size is bounded by your hardware, so local LLMs can be very capable for small, self-contained tasks.
Qwen 3.6 35B running locally:
Write a Python script that can pull weather data from public sources and provide the high and low temperature for the current day in Miami, FL.
Single shot. No tool/internet use, so it didn’t pull this script from elsewhere.
import requests

def get_miami_weather():
    # Miami, FL coordinates
    LATITUDE = 25.7617
    LONGITUDE = -80.1918

    # Open-Meteo API URL (free, no API key required)
    url = (
        f"https://api.open-meteo.com/v1/forecast?"
        f"latitude={LATITUDE}&longitude={LONGITUDE}"
        f"&daily=temperature_2m_max,temperature_2m_min"
        f"&timezone=auto"
    )

    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()  # Raises error for 4xx/5xx HTTP status codes
        data = response.json()

        # Index 0 corresponds to the current day
        high_c = data["daily"]["temperature_2m_max"][0]
        low_c = data["daily"]["temperature_2m_min"][0]

        # Convert to Fahrenheit (commonly used in the US)
        high_f = (high_c * 9/5) + 32
        low_f = (low_c * 9/5) + 32

        print("🌤️ Miami, FL Weather for Today:")
        print(f"High: {high_f:.1f}°F ({high_c:.1f}°C)")
        print(f"Low: {low_f:.1f}°F ({low_c:.1f}°C)")

    except requests.exceptions.HTTPError as http_err:
        print(f"❌ HTTP error occurred: {http_err}")
    except requests.exceptions.ConnectionError:
        print("❌ Error: Could not connect to the weather API.")
    except requests.exceptions.Timeout:
        print("❌ Error: Request timed out.")
    except requests.exceptions.RequestException as err:
        print(f"❌ An error occurred: {err}")
    except KeyError as key_err:
        print(f"❌ Error parsing data: Missing expected key {key_err}")
    except Exception as err:
        print(f"❌ Unexpected error: {err}")

if __name__ == "__main__":
    get_miami_weather()
I tried to keep the size and scope to something that would reasonably fit in a comment. It looks pretty decent to me, but I can't write Python myself (never learned). I did double-check the lat/lon of Miami, and it's spot on.
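For anyone else who can't read the Python but wants to verify more than the coordinates, the same endpoint can be queried directly and the daily arrays inspected by hand. A minimal sketch, using the same Open-Meteo URL the generated script builds (index 0 of the "time" array should come back as today's date):

import requests

# Same request as the generated script, hard-coded for brevity.
url = (
    "https://api.open-meteo.com/v1/forecast?"
    "latitude=25.7617&longitude=-80.1918"
    "&daily=temperature_2m_max,temperature_2m_min"
    "&timezone=auto"
)
daily = requests.get(url, timeout=10).json()["daily"]

# Index 0 is the current day; units are °C as requested.
print("date:", daily["time"][0])
print("high °C:", daily["temperature_2m_max"][0])
print("low  °C:", daily["temperature_2m_min"][0])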
It did take 47 seconds, while a cloud LLM would probably take 5 or less.
All I'm saying is that a local LLM isn't garbage, and it's getting better all the time.
That’s interesting.
How much RAM did it use while running?
If you used a GPU, how much would it cost at today's prices?
It's a MacBook Pro with 36 GB of RAM. Macs have an integrated GPU, and as I understand it Apple's unified memory lets the GPU share system RAM, but I don't really know Mac hardware very well.
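A rough way to watch it while the model runs is to poll system memory during generation. A sketch, assuming the third-party psutil package (not something anyone in this thread actually ran); on Apple Silicon the GPU draws from the same unified memory pool, so system-wide usage captures the model's footprint too:

import time
import psutil  # third-party: pip install psutil

# Poll overall memory use once a second; run this while the model generates.
# Unified memory on Apple Silicon means GPU allocations show up here as well.
for _ in range(30):
    mem = psutil.virtual_memory()
    print(f"used {mem.used / 2**30:5.1f} GiB of {mem.total / 2**30:.1f} GiB "
          f"({mem.percent:.0f}%)")
    time.sleep(1)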
It's beefy for a laptop, but the desktop I built for myself several years ago had 32 GB of RAM and a GTX 1660, so I'm guessing they're similar in capability. I gave that machine to my daughter, so I can't run a comparison right now.