• voodooattack@lemmy.world · 5 points · 21 hours ago

    This person is right. But I think the methods we use to train them are what’s fundamentally wrong. Brute-force learning? Randomised datasets past the coherence/comprehension threshold? And the rationale is that this is done for the sake of optimisation and in the name of efficiency? I can see that overfitting is a problem, but did anyone look hard enough at this problem? Or did someone just jump a fence at the time, and then everyone decided to follow along and roll with it because it “worked”, and it somehow became the gold standard that nobody can question at this point?

    • VoterFrog@lemmy.world · 5 points · 20 hours ago

      The generalized learning is usually just the first step. Coding LLMs typically go through more rounds of specialized learning afterwards in order to tune and focus them towards solving those types of problems. Then there’s RAG, MCP, and simulated reasoning, which are technically not training methods but do further improve the relevance of the outputs. There’s a lot of ongoing work in this space still. We haven’t even seen the standard settle yet.
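
      To make that concrete, here’s a minimal sketch of the retrieval step in RAG, with TF-IDF standing in for a real embedding model (the documents and query are made-up examples):

      ```python
      # Minimal sketch of the "R" in RAG: retrieve the most relevant context,
      # then prepend it to the prompt that actually goes to the LLM.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      docs = [
          "Use connection pooling when talking to the billing API.",
          "All cron jobs must log to the central audit service.",
          "Retry idempotent requests with exponential backoff.",
      ]
      query = "How should I handle failed requests?"

      vectorizer = TfidfVectorizer()
      doc_vectors = vectorizer.fit_transform(docs)
      query_vector = vectorizer.transform([query])

      # Pick the document most similar to the query and build the prompt.
      scores = cosine_similarity(query_vector, doc_vectors)[0]
      prompt = f"Context: {docs[scores.argmax()]}\n\nQuestion: {query}"
      print(prompt)
      ```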

      • voodooattack@lemmy.world · 3 points · 12 hours ago

        Yeah, but what I meant was: we took a wrong turn along the way, and now that it’s set in stone, the sunk cost fallacy has taken over. We (as senior developers) are applying knowledge and approaches obtained through a trap we would absolutely caution a junior against, over and over until the lesson sticks, because it IS a big deal.

        Reminds me of this gem:

        https://www.monkeyuser.com/2018/final-patch/

    • bitcrafter@programming.dev · 3 points · 21 hours ago

      The researchers in the academic field of machine learning who came up with LLMs are certainly aware of their limitations and are exploring other possibilities, but unfortunately what happened in industry is that people noticed that one particular approach was good enough to look impressive and then everyone jumped on that bandwagon.

      • voodooattack@lemmy.world · 1 point · edited · 12 hours ago

        That’s not the problem, though. Because if I apply my perspective, I see this:

        Someone took a shortcut because of an external time-crunch, left a comment about how this is a bad idea and how we should reimplement this properly later.

        But the code worked and was deployed in a production environment despite the warning, and at that specific point it transformed from being “abstract procedural logic” to being “business logic”.

  • Technus@lemmy.zip · 125 points · 2 days ago

    I’ve maintained for a while that LLMs don’t make you a more productive programmer, they just let you write bad code faster.

    90% of the job isn’t writing code anyway. Once I know what code I wanna write, banging it out is just pure catharsis.

    Glad to see there’s other programmers out there who actually take pride in their work.

    • Cyberflunk@lemmy.world · 3 points · 1 day ago

      Your experience isn’t other people’s experience. Just because you can’t get results doesn’t mean the technology is invalid, just your use of it.

      “Skill issue”, as the youngsters say.

      • Feyd@programming.dev · 10 points · 1 day ago

        It’s interesting that all the devs I already respected don’t use it, or use it very sparingly, and many of the devs I least respected sing its praises incessantly. Seems to me like “skill issue” is what leads to thinking this garbage is useful.

        • FizzyOrange@programming.dev · 2 points · 19 hours ago

          Everyone is talking past each other because there are so many different ways of using AI and so many things you can use it for. It works ok for some, it fails miserably for others.

          Lots of people only see one half of that and conclude “it’s shit” or “it’s amazing” based on an incomplete picture.

          The devs you respect probably aren’t working on crud apps and landing pages and little hacky Python scripts. They’re probably writing compilers and game engines or whatever. So of course it isn’t as useful for them.

          That doesn’t mean it doesn’t work for people mocking up a website or whatever.

      • AnarchistArtificer@slrpnk.net · 10 points · 1 day ago

        I’d rather hone my skills at writing better, more intelligible code than spend that same time learning how to make LLMs output slightly less shit code.

        Whenever we don’t actively use and train our skills, they will inevitably atrophy. Something I think about quite often on this topic is Plato’s argument against writing. His view is that writing things down is “a recipe not for memory, but for reminder”, leading to a reduction in one’s capacity for recall and thinking. I don’t disagree with this, but where I differ is that I find it a worthwhile tradeoff when accounting for all the ways that writing increases my mental capacities.

        For me, weighing the tradeoff is the most important gauge of whether a given tool is worthwhile or not. And personally, using an LLM for coding is not worth it when considering what I gain vs. lose from prioritising that over growing my existing skills and knowledge.

    • Dr. Wesker@lemmy.sdf.org · 39 points · edited · 2 days ago

      It’s been my experience that the quality of code is greatly influenced by the quality of your project instructions file, and your prompt. And of course what model you’re using.

      I am not necessarily a proponent of AI, I just found myself being reassigned to a team that manages AI for developer use. Part of my responsibilities has been to research how to successfully and productively use the tech.

      • Technus@lemmy.zip · 30 points · 2 days ago

        But at a certain point, it seems like you spend more time babysitting and spoon-feeding the LLM than you do writing productive code.

        There’s a lot of busywork that I could see it being good for, like if you’re asked to generate 100 test cases for an API with a bunch of tiny variations, but that kind of work is inherently low value. And in most cases you’re probably better off using a tool designed for the job, like a fuzzer.
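
        For what it’s worth, the “tool designed for the job” route can be surprisingly little code. Here’s a rough sketch using Hypothesis for property-based testing; `parse_amount` is a made-up stand-in for the API under test:

        ```python
        # One property replaces what would otherwise be 100 near-identical,
        # hand-written (or LLM-generated) test cases.
        from hypothesis import given, strategies as st

        def parse_amount(text: str) -> int:
            """Toy stand-in for the API under test: parse a non-negative amount."""
            value = int(text)
            if value < 0:
                raise ValueError("amount must be non-negative")
            return value

        @given(st.integers(min_value=0, max_value=10**9))
        def test_parse_amount_round_trips(amount):
            # Hypothesis generates the tiny variations automatically.
            assert parse_amount(str(amount)) == amount
        ```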

        • Dr. Wesker@lemmy.sdf.org · 17 points · edited · 2 days ago

          But at a certain point, it seems like you spend more time babysitting and spoon-feeding the LLM than you do writing productive code.

          I’ve found it pretty effective not to babysit, but instead to have the model iterate on its instructions file. If it did something wrong or unexpected, I explain what I wanted it to do, and ask it to update its project instructions to avoid the pitfall in future. It’s more akin to calm and positive reinforcement.

          Obviously YMMV. I’m in charge of a large codebase of Python cron automations that interact with a handful of services and APIs. I’ve rolled a ~600-line instructions file that has allowed me to pretty successfully use Claude to stand up full object-oriented clients from scratch, complete with dependency injection, schema and contract data models, unit tests, etc. (see the sketch below).

          I do end up having to make stylistic tweaks, and sometimes reinforce things like DRY, but I actually enjoy that part.

          EDIT: Whenever I begin to feel like I’m babysitting, it’s usually due to context pollution and the best course is to start a fresh agent session.
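
          A rough sketch of that client shape, for anyone curious (all names are invented; this is illustrative, not the actual codebase):

          ```python
          # Dependency injection keeps the client unit-testable: the HTTP layer
          # is passed in, so tests can swap in a fake transport.
          from dataclasses import dataclass
          from typing import Protocol

          @dataclass
          class Invoice:  # schema/contract data model (hypothetical fields)
              id: str
              total_cents: int

          class Transport(Protocol):
              def get_json(self, path: str) -> dict: ...

          class BillingClient:
              def __init__(self, transport: Transport) -> None:
                  self._transport = transport  # injected dependency

              def fetch_invoice(self, invoice_id: str) -> Invoice:
                  payload = self._transport.get_json(f"/invoices/{invoice_id}")
                  return Invoice(id=payload["id"], total_cents=payload["total_cents"])

          # In a unit test, a fake transport replaces the real HTTP layer:
          class FakeTransport:
              def get_json(self, path: str) -> dict:
                  return {"id": "inv-1", "total_cents": 4200}

          assert BillingClient(FakeTransport()).fetch_invoice("inv-1").total_cents == 4200
          ```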

  • codeinabox@programming.dev (OP) · 66 points · 2 days ago

    I use AI coding tools, and I often find them quite useful, but I completely agree with this statement:

    And if you think of LLMs as an extra teammate, there’s no fun in managing them either. Nurturing the personal growth of an LLM is an obvious waste of time.

    At first I found AI coding tools to be like a junior developer, in that they will keep trying to solve the problem and never give up or grow frustrated. However, I can’t teach an LLM. Yes, I can give it guard rails and detailed prompts, but it can’t learn in the same way a teammate can. It will always require supervision and review of its output. Whereas I can teach a teammate new or different ways to do things, and over time their skills and knowledge will grow, as will my trust in them.

  • Cyberflunk@lemmy.world · 5 points · 1 day ago

    Nurturing the personal growth of an LLM is an obvious waste of time.

    I think this is short-sighted. Engineers will spend years refining nvim, tmux, and zsh to be the tools they want. The same applies here. OP is framing it like it’s a human; it’s a tool. Learn the tool, understand why it works the way it does, just like emacs or ripgrep or something.

    • BatmanAoD@programming.dev · 11 points · 1 day ago

      I think you’re misunderstanding that paragraph. It’s specifically explaining how LLMs are not like humans, and one way is that you can’t “nurture growth” in them the way you can for a human. That’s not analogous to refining your nvim config and habits.

  • brucethemoose@lemmy.world · 34 points · edited · 2 days ago

    If you think of LLMs as an extra teammate, there’s no fun in managing them either. Nurturing the personal growth of an LLM is an obvious waste of time. Micromanaging them, watching to preempt slop and derailment, is frustrating and rage-inducing.

    Finetuning LLMs for niche tasks is fun. It’s explorative, creative, cumulative, and scratches a ‘must optimize’ part of my brain. It feels like you’re actually building and personalizing something, and it teaches you how they work and where they fail, like making any good program or tool. It feels like you’re part of a niche ‘old internet’ hacking community, not in the maw of Big Tech.

    Using proprietary LLMs over APIs is indeed soul-crushing. IMO this is why devs who have to use LLMs should strive to run finetunable, open-weights models where they work, even if they aren’t as good as Claude Code (see the sketch below).

    But I think most don’t know they exist. Or they had a terrible experience with terrible ollama defaults, and hence assume that must be what the open-model ecosystem is like.
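
    For a sense of the barrier to entry, here’s a minimal finetuning sketch using LoRA adapters via Hugging Face’s peft library (the model name and hyperparameters are illustrative, not recommendations):

    ```python
    # Attach small trainable LoRA adapters to a frozen open-weights model.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "Qwen/Qwen2.5-0.5B"  # any small open-weights causal LM works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    lora = LoraConfig(
        r=8,                                  # adapter rank: small = cheap to train
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # attach to the attention projections
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only the adapters train, not the base model
    ```

    From there it’s an ordinary training loop over your niche dataset, and the resulting adapter is a small file you actually own.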

    • ExLisper@lemmy.curiana.net · 5 points · 1 day ago

      What he’s talking about is teaching a person, watching them grow into a better engineer, and seeing them move on to do great things, not tweaking some settings in a tool so it works better. How do people not understand that?

    • BlameThePeacock@lemmy.ca · 12 points · 2 days ago

      Improving your input and the system message can also be part of that. There are multiple optimizations available for these systems that people aren’t really good at yet.

      It’s like watching Grandma google “Hi, I’d like a new shirt” back in the day and then having her complain that she’s getting absolutely terrible search results.
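
      In practice, a lot of that optimization is just a well-specified system message. Here’s a minimal sketch using the OpenAI client as an example (any chat-style API has the same shape; the model name is illustrative):

      ```python
      # The system message carries the standing instructions;
      # the user message carries the actual task.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {
                  "role": "system",
                  "content": "You are a senior Python reviewer. Target Python 3.12, "
                             "prefer stdlib over dependencies, and flag any function "
                             "longer than 40 lines.",
              },
              {"role": "user", "content": "Review this function: ..."},
          ],
      )
      print(response.choices[0].message.content)
      ```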

      • brucethemoose@lemmy.world · 13 points · edited · 2 days ago

        Mmmmm. Pure “prompt engineering” feels soulless to me. And you have zero control over the endpoint, so changes on their end can break your prompt at any time.

        Messing with logprobs and raw completion syntax was fun, but the US proprietary models took that away. Even sampling is kind of restricted now, and primitive compared to what’s been developed in open source.
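
        For comparison, this is the level of control open models still give you: raw completion with explicit sampling knobs and per-step scores. A sketch with transformers (the model name is illustrative):

        ```python
        # Raw completion (no chat template) with explicit sampling parameters,
        # the kind of knobs proprietary APIs increasingly hide or clamp.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_name = "Qwen/Qwen2.5-0.5B"
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name)

        inputs = tokenizer("def quicksort(arr):", return_tensors="pt")
        out = model.generate(
            **inputs,
            max_new_tokens=32,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
            return_dict_in_generate=True,
            output_scores=True,  # raw per-step logits, from which logprobs derive
        )
        print(tokenizer.decode(out.sequences[0]))
        ```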

  • realitista@lemmus.org · 12 points · 2 days ago

    This is exactly how I feel about LLMs. I will use them if I have to, to get something done that would be time-consuming or tedious. But I would never willingly sign up for a job where that’s all it is.

  • itkovian@lemmy.world · 19 points · 2 days ago

    A simple but succinct summary of the real cost of LLMs: literally everything human, traded for something that is just a twisted reflection of the greed of the richest.

  • mindbleach@sh.itjust.works · 8 points · 2 days ago

    Experts who enjoy doing [blank] the hard way don’t enjoy the tool that lets novices do [blank] at a junior level.

    Somehow this means the tool is completely worthless and nobody should ever use it.

      • Dr. Wesker@lemmy.sdf.org · 13 points · edited · 2 days ago

        This is extremely valid.

        The biggest reason I’m able to use LLMs efficiently and safely is all my prior experience. I’m able to write up all the project guard rails, the expected architecture, call out gotchas, etc. These are the things that actually keep the output in spec (usually).

        If a junior hasn’t already manually established this knowledge and experience, much of the code that they’re going to produce with AI is gonna be crap with varying levels of deviation.

            • justOnePersistentKbinPlease@fedia.io · 2 points · 2 days ago

              They use it with heavy oversight from the senior devs. We discourage its use and teach them the very basic errors it always produces as a warning not to trust it.

              E.g., ChatGPT will always dump all of the event handlers for a form in one massive method (sketched below).

              We use it within the scope of things we already know about.
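
              To illustrate that failure mode, here’s a toy tkinter form with one small handler per event, instead of the single catch-all method ChatGPT tends to produce (all names are invented):

              ```python
              import tkinter as tk

              class SignupForm(tk.Frame):
                  def __init__(self, master=None):
                      super().__init__(master)
                      self.name = tk.Entry(self)
                      self.submit = tk.Button(self, text="Submit")
                      # One handler per event, wired separately, rather than one
                      # giant method dispatching on widget name.
                      self.name.bind("<FocusOut>", self.on_name_changed)
                      self.submit.configure(command=self.on_submit)

                  def on_name_changed(self, event):
                      print("validate:", self.name.get())

                  def on_submit(self):
                      print("submitting:", self.name.get())
              ```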

  • myfunnyaccountname@lemmy.zip · 7 points · 2 days ago

    If it allows me to kick out code faster to meet whatever specs/acceptance criteria are laid out before me, fine. The hell do I care if the code is good or bad? If it works, it works. My company doesn’t give af about me. I’m just a number, no matter how many “we are family” speeches they give, or how hard they push the “we are all a team and will win” line. We aren’t all a team. Why should I care about anything more than “does it work”? As long as profits go up, the company is happy. They don’t care how good or pretty my code is.

    • wizardbeard@lemmy.dbzer0.com · 17 points · 2 days ago

      Tell me again how you’ve never become the subject matter expert on something simply because you were around when it was built.

      Or had to overhaul a project due to a “post-live” requirements change a year later.

      I write “good enough” code for me, so I don’t want to take a can opener to my head when I inevitably get asked to change things later.

      It also lets me be lazier, as 9 times out of 10 I can get most of my code from a previous project and I already know it front to back. I get to fuck about and still get complex stuff out fast enough to argue for a raise.

      • myfunnyaccountname@lemmy.zip · 3 points · 2 days ago

        Been the SME, and completely architected and implemented the entire middleware server farm for my last company. First on IBM, after taking it over from someone else who started it: just a “here you go” takeover. Then we moved from an IBM shop to Oracle, because the VP wanted a gold star and wouldn’t listen to anyone. I left when they were moving to Red Hat, after the next VP came in and wanted their gold star. A little over 400 servers. Been there, done that.

      • Evotech@lemmy.world · 2 points · edited · 1 day ago

        Just making a whole stack in an hour is pretty fun. You can just have an idea, and a couple of hours later have a database, backend, and frontend running in containers locally, doing exactly what you wanted.

        That’s pretty fun

        Basically anything you want to make, you can just make now; you don’t need to know anything about it beforehand.

  • Lung@lemmy.world · 2 points · 2 days ago

    Argument doesn’t check out. You can still manage people, and they can use whatever tools make them productive. A good understanding of the code and the ability to pass PR reviews isn’t going anywhere, nor is programmer skill.