• Bazoogle@lemmy.world · 24 points · 2 days ago

      There are appropriate and legitimate use cases for AI, especially when locally hosted. Tech/programming is one of the few. The problem is when it's shoved in everyone's face for everything and all the data goes to tech conglomerates.

      • Mwa@thelemmy.club · 2 points · edited · 21 hours ago

        Agreed, or even using something like Adobe Firefly (it only trains on public-domain images).

      • ell1e@leminal.space · 16 points · edited · 2 days ago

        Some of us respectfully disagree that LLMs for programming are “appropriate and legitimate”, at least if that involves generating code and not just locating bugs.

        Local LLMs retain significant issues, like the one shown in this clip: https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567 (unless your model uses 100% properly licensed training data, which no code LLM I have found appears to be doing).

        • msage@programming.dev · 5 points · 1 day ago

          Locating bugs is one of the most important tasks in programming, and if devs can’t do that, nor are willing to learn how, they are fucked.

          There’s no other way of saying it. Can’t wait for the AI bubble to pop.

          • Bazoogle@lemmy.world · 1 point · 15 hours ago

            You are using current AI as your baseline. There will come a point where writing code means zero bugs or vulnerabilities. Humans cannot do that. AI will, whether we want it or not, one day be able to. Idk if we’re talking 10 years or 40, but it will happen.

          • ell1e@leminal.space · 2 points · edited · 22 hours ago

            LLMs can sometimes point out potential trouble spots, which is also one of the uses that may avoid injecting problematic code (if the LLM is prevented from suggesting a fix). But sadly, that doesn’t seem to be the type of use KDE is currently limiting themselves to.