• pivot_root@lemmy.world
    5 hours ago

    Sometimes, I ask OpenClaw to generate some code

    https://github.com/lutris/lutris/discussions/6530#discussioncomment-16088355

    OpenClaw is extremely vulnerable to prompt injection. If the maintainer is using it to author code, you absolutely cannot trust that the code is free of exploits disguised as unintentional logic errors or bugs.
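
    To make the concern concrete, here is a hypothetical sketch (not from Lutris or any real codebase) of the kind of bug that looks like an innocent mistake but is actually exploitable, a path-containment check with a subtly wrong prefix comparison:

    ```python
    import os

    def safe_join(base, user_path):
        # Resolve the requested path relative to the base directory.
        full = os.path.realpath(os.path.join(base, user_path))
        # Looks like a routine sanity check, but startswith() without a
        # trailing separator also accepts sibling directories: with
        # base="/srv/data", the path "/srv/data-secret/x" passes too.
        if full.startswith(base):
            return full
        raise ValueError("path escapes base directory")
    ```

    A reviewer skimming this would likely read it as a correct traversal guard; `safe_join("/srv/data", "../data-secret/x")` nevertheless returns a path outside the intended directory. That is the category of "bug" a poisoned code generator could emit, and why diff review alone is a weak defense.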

    There’s purity testing, and then there’s being cautious about running code made by someone who is doing something incredibly stupid and unsafe. This is the latter.

    • 9WhiteTeeth@lemmy.today
      2 hours ago

      You are assuming the author is being unsafe and not auditing the code for even basic security issues.

      Let me present this angle: small teams of volunteer open source developers finally have a way to ease the burden of the code they produce, but you want them to keep doing all the work manually because AI hurts your feefees.

      Further, you are openly declaring that you don’t trust the devs to audit their own code.

      If you can find a security vulnerability in the code (it is open source, after all), I’ll cede the point. Otherwise, I think it is a good thing that responsible AI use can help shoulder the work these folks do for our benefit.