luciole (they/them)

Pronouns: they/them, he/she feels nice too.

Doesn’t know the lyrics. Just goes meow meow meow.

To bi 🐝 or to enby 🐝, that is the inclusive “or”

☐ Solid
☐ Gaseous
☑ Fluid

  • 3 Posts
  • 117 Comments
Joined 3 years ago
Cake day: June 6th, 2023

  • Fortunately, software is much more than app ideas fishing for VC investment. A lot of us are building actual tools for nurses, teachers, technicians, artists, students, etc. We have to analyze these human beings’ role in society, their needs, their situation, which is different from merely preying on their attention span. Programming languages are still the most reliable way to specify how software must behave. And once the software is done, it is merely born. It then lives through a steady flow of continuous adaptation until one day it dies, as all things do. Downplaying the human condition is a mistake.


  • AI tools can generate functional, adequate, perfectly average code at a speed and cost that would have been unimaginable even five years ago. And like the outsourcing wave of the early 2000s, the economics are real and rational. Nobody is wrong for using these tools. The code they produce is often fine. It works. It passes tests. It might ship as-is.

    Not the first time I’ve read this kind of statement, and I always struggle to reconcile it with my personal experience. I seriously doubt it’s just that I’m not a “good enough prompter”. I know how to explain context from domain to tech and vice versa; that’s, like, a good 20% of my job. I’d say that AI tools are good at producing code that already exists.

    The LLMs are an interface to a corpus of written material. They’ve never had a thought, a chat around the coffee machine, or any experience in the largest sense of the word. This is a hard barrier on any induction they may emulate.


  • But will the chatbot understand itself? It’s fun when you start questioning the LLM line by line about its own slop in the same session and it starts flagging all sorts of things it did wrong. Why didn’t it write it correctly in the first place? Or is the fix wrong? Who knows? People, I guess. The model is fed on knowledge, but whether that knowledge will activate in response to your prompt and come back unadulterated is a coin toss.





  • It’s hard having two decades of experience in a domain I suddenly find myself at odds with. Reading about others having the same qualms reassures me that I’m not going crazy. On the other hand, I feel drawn further into an untenable, contradictory position.

    Once in a while I give in. It’s typically when I’m faced with a non-trivial problem I realize will take me days of learning before I have any chance of tackling it. My colleagues start suggesting it, or share some slop to “help out”. So I think, fuck it, I’ll study later; for now AI will solve it, I need this ticket closed ASAP. I fire up a “decent” paid model and I start feeding it context. Every time, it’s a nightmare. Hours of trying stuff that doesn’t stick, of questioning, of arguing with a chatbot, of wading through “here are the facts” and “good catch” and “I owe you an apology”. It’s not a shortcut, it’s a fucking dead end. Then the bitter aftertaste can only be cleansed with cold, hard, time-consuming, actual learning.