

There are way too many ways to use LLMs for programming to make a blanket statement


Not a reaction, it has been in the works for a while.
Or kepano was lying and they built all this in a few days


Most likely they have the tooling to automatically scan repos on GitHub specifically


Now guide your parents through installing Jellyfin on their TV so they can connect to your instance.
That’s why people get Plex.
NordVPN is good for getting around geoblocks, not so much for privacy


Pick a mainstream social media site and find a post/video about the EU. If Ursula von der Leyen is in it, the comments will be full of accounts Super Concerned that she wasn’t “properly elected” — within 15-30 minutes of it being posted.
Perfectly organic and posted by normal concerned EU citizens. Yep yep.


“Unelected Ursula” is directly from the Russian disinformation playbook btw.


Sony and Microsoft both sell “elite” controllers for much more than that


So gifts and second hand items should magically work in a way they weren’t intended to?


We can do many things!
If this gets passed, then maybe you can run on a “stop killing IoT devices” platform and get that done - referring to the game decision that’s already a law.


Tape drives are expensive as fuck though.
Before the storage wars it was cheaper to just build a second shitty NAS and backup there


And even if you decide to go for modal editors, Helix is a lot better out of the box


Copilot is the harness, Claude and GPT are the models
Copilot is by far the worst harness of all the major players


Basically the local models don’t (and can’t) contain the full knowledge of the universe.
BUT they can call tools pretty well, and if you give the harness the capability to search Wikipedia, for example, it becomes a lot smarter


Qwen3.5 and Gemma4 are the best ones for tool calling that don’t need massive amounts of memory


Nothing you can run with affordable hardware. The SOTA stuff requires hundreds of gigabytes of memory - and not RAM, GPU memory.
But you can try with stuff like gpt-oss or qwen coder
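A rough back-of-envelope shows why: weight memory is roughly parameter count times bytes per parameter (activations and KV cache come on top). The model sizes and quantization levels below are ballpark assumptions, not figures for any specific model:

```python
# VRAM needed just for the weights: parameters x bytes per parameter.
# Sizes and bit-widths below are illustrative assumptions.

def weight_gib(params_billion: float, bits_per_param: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

# A ~400B-parameter frontier-scale model at 16-bit: hundreds of GiB.
print(round(weight_gib(400, 16)))  # → 745

# A ~20B model (gpt-oss class) quantized to 4-bit: single consumer GPU.
print(round(weight_gib(20, 4)))    # → 9
```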


You must type really fast then 😅
I personally read code a fuckton faster than I write it. And tests are for determining correctness, reading is just a part of it.


Because the simple tasks are boring as fuck?
If an LLM can generate 90% of an HTTP API correctly, why would you want to do it manually?
Why not just give it shellcheck and have it run that on every script it creates?
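The kind of endpoint this is talking about is mostly boilerplate, like this stdlib-only sketch — the `/items` route and payload are invented for illustration:

```python
# A stdlib-only JSON endpoint: the sort of boring boilerplate an LLM can
# stamp out in one shot. Route and data are made up for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ITEMS = [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/items":
            body = json.dumps(ITEMS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

def serve(port: int = 8080) -> None:
    """Blocking entry point: serve /items on localhost."""
    HTTPServer(("127.0.0.1", port), Handler).serve_forever()
```

None of this is interesting to type by hand, which is the argument: let the model produce it, then review it — the same way you’d have it run shellcheck over its own shell scripts.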