• steeznson@lemmy.world · 5 months ago

    Have any other devs tried using LLMs for work? They’ve been borderline useless for me.

    Also, the notion of creating a generation of devs who have no idea what they're writing and no practice resolving problems "manually" seems insanely dumb.

    • RageAgainstTheRich@lemmy.world · 5 months ago

      Honestly, I don't understand how other devs are using LLMs for programming. The fucking thing just gaslights you into random made-up shit.

      As a test, I tried giving it a made-up problem. I mean, it could be a real problem, but I made it up to try. And it went: "Ah yes. This is actually a classic problem in (library name) version 4. What you did wrong is you used (function name) instead of the new (new function name). Here is the fixed code:"

      And all of it was just made up. The original function did still exist in that version, and the "new" function it told me about was completely made up. It has zero idea what the fuck it's doing. And if you tell it it's wrong, it goes: "Oh my bad, you're right hahaha. Function (old function name) still exists in version 4. Here is the fixed code:"

      And again it made shit up. It is absolutely useless, and I don't understand how people use it to make anything beyond the most basic "hello world" type of shit.

      Often it also just gives you the same code over and over, acting like it changed and fixed it, but it's the exact same as the response before.

      I do admit LLMs can be nice to brainstorm ideas with. But write code? It has zero idea what it's doing; it's just copy-pasting shit from its training data and gaslighting you into thinking it made it up itself and that it's correct.

    • Pechente@feddit.org · 5 months ago

      I used them on some projects, but it feels like Copilot is still the best application of the tech, and even that is very, ummm, hit or miss.

      Writing whole parts of the application using AI usually led to errors that I needed to debug, and the coding style and syntax were all over the place. Everything has to be thoroughly reviewed all the time, and sometimes the AI codes itself into a dead end and needs to be stopped.

      Unfortunately, I think this will lead to small businesses vibe-coding some kind of solution with AI and then resorting to real people to debug whatever garbage they "coded", which will create a lot of unpleasant work for devs.

    • Rose@slrpnk.net · 5 months ago

      I've found them useful for very broad-level stuff (e.g. asking "I'm trying to do X in programming language Y; are there any libraries for that, and can you give me an example?"). Copilot has been good at giving me broad guesses about why my stuff isn't working.

      But you have to be very careful with any code they spit out. And they sometimes suggest some really stupid stuff. (Don’t know how to set up a C/C++ build environment for some library on Windows? Don’t worry, the AI is even more confused than you are.)

      • notabot@piefed.social · 11 days ago

        "I'm trying to do X in programming language Y, are there any libraries for that and can you give me an example"

        I found them to be really bad for that in my testing. They'll happily hallucinate the existence of a library with a vaguely plausible name, spit out 'sample' code for it, and then, when I ask for a link to the documentation, say "I'm sorry, that library doesn't exist". It drives me round the bend!

    • LH0ezVT@sh.itjust.works · 5 months ago

      It is nice for when you need a quick and dirty little fix that would otherwise require you to read a lot of documentation and skim through a lot of info you will never need again. Like converting obsolete config file format #1 to obsolete format #2. Or summarizing documentation in general, although one needs to be careful with hallucinations. Basically, you need a solid understanding already and the ability to judge whether something is plausible or not. Also, if you need standard boilerplate, of course.
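      To give a concrete idea (entirely made up for illustration: pretend format #1 is a legacy .properties file, format #2 is JSON, and Jackson is already on the classpath), the whole "fix" is a throwaway script along these lines:

      ```java
      // Hypothetical one-off converter: legacy key=value config in, JSON out.
      import com.fasterxml.jackson.databind.ObjectMapper;

      import java.io.FileInputStream;
      import java.io.IOException;
      import java.nio.file.Path;
      import java.util.Properties;
      import java.util.TreeMap;

      public class ConfigConverter {
          public static void main(String[] args) throws IOException {
              Properties legacy = new Properties();
              try (FileInputStream in = new FileInputStream(args[0])) {
                  legacy.load(in); // read "obsolete format #1"
              }

              // Sort the keys so the output is stable and diffable.
              TreeMap<String, String> sorted = new TreeMap<>();
              legacy.stringPropertyNames().forEach(k -> sorted.put(k, legacy.getProperty(k)));

              new ObjectMapper()
                      .writerWithDefaultPrettyPrinter()
                      .writeValue(Path.of(args[1]).toFile(), sorted); // write "obsolete format #2"
          }
      }
      ```

      Exactly the kind of thing where being able to judge the output matters more than remembering the Properties API by heart.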

      It sucks most when you need any kind of contextual knowledge, obviously. Or need accountability. Or reliable complexity. Or something new and undocumented.

      • steeznson@lemmy.world · 5 months ago

        Last time I used one, I was trying to get help writing a custom naming strategy for a Java ObjectMapper. I've mostly written Python in my career, so I just needed the broad strokes filled in.

        It gave me some example code that looked plausible but was in actuality the exact inverse of how you are supposed to implement it. It took me like a day and a half to debug; I reckon I could have written it in an afternoon by going straight to the documentation.
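
        For the record, the shape it should have had is roughly this (a sketch assuming Jackson 2.12+ and its PropertyNamingStrategies.NamingBase, with an invented naming rule - not my actual code). The direction is the part worth getting right: translate() takes the Java property name and returns the external JSON name, not the reverse.

        ```java
        // Sketch of a custom Jackson naming strategy (hypothetical rule: UPPER_SNAKE_CASE).
        import com.fasterxml.jackson.databind.ObjectMapper;
        import com.fasterxml.jackson.databind.PropertyNamingStrategies;

        public class UpperSnakeCaseStrategy extends PropertyNamingStrategies.NamingBase {

            @Override
            public String translate(String javaPropertyName) {
                // Java property name in, JSON field name out, e.g. "firstName" -> "FIRST_NAME".
                return javaPropertyName
                        .replaceAll("([a-z0-9])([A-Z])", "$1_$2")
                        .toUpperCase();
            }

            public static ObjectMapper configuredMapper() {
                // Register the strategy on the mapper that (de)serializes your DTOs.
                return new ObjectMapper().setPropertyNamingStrategy(new UpperSnakeCaseStrategy());
            }
        }
        ```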

    • Aux@feddit.uk · 5 months ago

      They are extremely useful for software development. My personal choice is a locally running Qwen3, used through the AI Assistant in JetBrains IDEs (in offline mode). Here is what Qwen3 is really good at:

      • Writing unit tests. The result is not necessarily perfect, but it handles test setup and descriptions really well, and those two take the most time. Fixing a few broken asserts takes a minute or two (see the sketch after this list).
      • Writing good commit messages based on actual code changes. It is good practice to make atomic commits while working on a task, and coming up with commit messages every 10-30 minutes gets depressing after a while.
      • Generating boilerplate code. You should definitely use templates and code generators, but it’s not always possible. Well, Qwen is always there to help!
      • Inline documentation. It usually generates decent XDoc comments based on your function/method code. It’s a really helpful starting point for library developers.
      • It provides auto-complete on steroids and can complete not just the next "word" but the whole line, or even multiple lines of code, based on your existing code base. It gets especially helpful when doing data transformations.
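
      To make the unit test point concrete, here is roughly what a generated draft looks like (a made-up JUnit 5 example, not real project code): the setup and display name come out usable as-is, and the assert is where the minute or two of manual fixing tends to go.

      ```java
      // Illustrative LLM-style test draft (JUnit 5); the class under test is a stand-in.
      import org.junit.jupiter.api.BeforeEach;
      import org.junit.jupiter.api.DisplayName;
      import org.junit.jupiter.api.Test;

      import static org.junit.jupiter.api.Assertions.assertEquals;

      class PriceCalculatorTest {

          /** Minimal stand-in for the class under test, invented for this sketch. */
          static final class PriceCalculator {
              private final double taxRate;
              PriceCalculator(double taxRate) { this.taxRate = taxRate; }
              double gross(double net) { return net * (1 + taxRate); }
          }

          private PriceCalculator calculator;

          @BeforeEach
          void setUp() {
              calculator = new PriceCalculator(0.20);
          }

          @Test
          @DisplayName("applies 20% tax to the net price")
          void appliesTaxToNetPrice() {
              // The expected value is the typical thing to double-check and fix by hand.
              assertEquals(120.0, calculator.gross(100.0), 0.001);
          }
      }
      ```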

      What it is not good at:

      • Doing the programming for you. If you ask an LLM to create code from scratch, it's no different from copy-pasting random bullshit from Stack Overflow.
      • Working on slow machines - a good LLM requires at least a high-end desktop GPU like an RTX 5080/5090. If you don't have such a GPU, you'll have to rely on a cloud-based solution, which can cost a lot and raises a lot of questions about privacy, security and compliance.

      An LLM is a tool in your arsenal, just like IDEs, CI/CD, test runners, etc., and you need to learn how to use all of these tools effectively. LLMs are really great at detecting patterns, so if you feed them some code and ask them to do something new with it based on the patterns inside, you'll get great results. But if you ask for random shit, you'll get random shit.

        • skisnow@lemmy.ca · 5 months ago (edited)

          Having spent some small time in the information theory and signal processing world, it infuriates me how often people champion LLMs for writing things like data dictionaries and documentation.

          In information theory, information is measured as "the difference between what you expected and what you got"; ergo, any documentation generated automatically by an LLM is by definition free of information. If you want something explained to you in English, it can be generated just as easily as and when you want it, rather than stored as the authoritative record.
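
          To spell out the step I'm glossing over, in Shannon's terms (self-information, a.k.a. surprisal):

          ```latex
          % Self-information of an output x given the context it was generated from:
          I(x \mid \text{context}) = -\log_2 P(x \mid \text{context})
          % If the text is exactly what a model would predict from the code it describes,
          % then P(x \mid \text{code}) \approx 1, so I(x \mid \text{code}) \approx 0 \text{ bits}.
          ```

          Text that is fully predictable from what you already have carries no new information.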

          • Aux@feddit.uk · 5 months ago

            You're not looking at the bigger picture. That also raises the question of your alleged qualifications.