• Captain Beyond@linkage.ds8.zone · 1 point · 23 minutes ago

    The silver lining of “AI” is that it’s a convenient excuse to be anti-user. It’s okay as long as you are “fighting the AI”

    I expect to be chugging verification cans in 2027

  • Alvaro@lemmy.blahaj.zone · 50 points · edited · 11 hours ago

    Every time this happens, I only hear either:

    • “We don’t know security, so we will hide our shitty code”, or
    • “We want to make more money, but here is an excuse”
  • Lemmchen@feddit.org · 31 points · edited · 11 hours ago

    Never heard of them, but they can fuck right off.

    Today, AI can be pointed at an open source codebase and systematically scan it for vulnerabilities.

    Well, then do that.

    It’s not a perfect solution, but we have to do everything we can to protect our users.

    All you do is ship unaudited software, you cunts.

    • uuj8za@piefed.social · 6 points · 4 hours ago

      Today, AI can be pointed at an open source codebase and systematically scan it for vulnerabilities.

      Well, then do that.

      iknowrite? If these magical scanners can find all the bugs in your code… then why don’t they use these magical scanners to find all the bugs in their own code!!! 😂

  • theherk@lemmy.world · 9 points · edited · 11 hours ago

    They don’t seem to realize that higher-level languages help us understand the code. Language models will be similarly capable of reading the binaries they ship. So what they’re doing is hiding code from users, not from machines.


    To clarify, I don’t mean right now. Models haven’t been sufficiently trained on machine code, which also lacks some of the semantic cues of source code. But the future they fear will have transformers just as capable with lower-level code.
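    The point above — that compiling only hides code from humans, not from tooling — can be sketched with Python’s own `dis` module as a rough analogy (Python bytecode, not native machine code; the function here is a made-up example): the compiled form drops comments, names of no consequence, and formatting, but every operation remains mechanically readable.

    ```python
    import dis

    def add_tax(price, rate=0.2):
        """Return price including tax."""
        # This comment and the docstring layout vanish in the compiled form,
        # but the arithmetic itself does not.
        return price * (1 + rate)

    # Disassemble the compiled function: the opcodes expose exactly what it
    # does, even though the human-friendly surface (comments, layout) is gone.
    ops = [instr.opname for instr in dis.Bytecode(add_tax)]
    print(ops)  # e.g. loads, a multiply, and a RETURN_VALUE at the end
    ```

    Native binaries are harder than bytecode — names and types are largely stripped — but the same principle holds: the semantics a human loses at compile time are still present for an analyzer (or, plausibly, a sufficiently trained model) to recover.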