• BlackLaZoR@lemmy.world · 4 points · 1 hour ago

    Learning from mistakes of people dumber than you isn’t a thing these days. Prepare for one AI disaster after another

  • dbtng@eviltoast.org · 10 points · 2 hours ago

    3-2-1
    It's really common for companies to not have an offsite backup. My own employer only offsites the customer data, not our core biz stuff. And I set up the offsite replication. It did not exist until I built it. (Proxmox Backup Server is the best!)
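
    If anyone wants the gist of what offsite replication means at minimum, here's a rough Python sketch of the "copy in a different location" part of 3-2-1. Every path and file pattern here is made up for illustration; a real setup would push to a remote host or object store (PBS sync jobs do this properly):

```python
# Minimal sketch of replicating local backups to an offsite mount and
# verifying the copies. All paths and patterns are hypothetical examples.
import hashlib
import shutil
from pathlib import Path

LOCAL_BACKUPS = Path("/var/backups/app")   # copy #2: on-site backup volume
OFFSITE_MOUNT = Path("/mnt/offsite/app")   # copy #3: different site

def sha256(path: Path) -> str:
    """Hash a file so we can confirm the offsite copy is byte-identical."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate() -> None:
    OFFSITE_MOUNT.mkdir(parents=True, exist_ok=True)
    for src in LOCAL_BACKUPS.glob("*.tar.zst"):
        dst = OFFSITE_MOUNT / src.name
        if dst.exists() and sha256(dst) == sha256(src):
            continue  # already replicated and verified
        shutil.copy2(src, dst)
        if sha256(dst) != sha256(src):
            raise RuntimeError(f"offsite copy is corrupt: {dst}")

if __name__ == "__main__":
    replicate()
```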

  • WhatsHerBucket@lemmy.world · 39 points · 4 hours ago

    “That’s ok, it will be great in robots with lethal weapons. What could go wrong? It’ll be the greatest killing machine, like you’ve never seen before”. 🫲 🍊 🫱

    • Napster153@lemmy.world · 2 points · 1 hour ago

      Can we make sure Ted Faro suffers worse this time?

      Being reduced to a mutant blob for, say, a few extra thousand years and maybe put in a zoo or something?

      • Pman@lemmy.org · 1 point · 34 minutes ago

        Nah, but that's what he wanted. He's the truest form of tech bro: destroy the world, refuse to accept the consequences of your actions, weasel your way out of the situation, and manage to come out of unimaginable human suffering with even more power over people and a god complex. Tell me that isn't some or all of the characteristics of people like Peter Thiel, Elon Musk, Mark Zuckerberg, Sundar Pichai, Bill Gates, hell, even Tim Cook and Steve Jobs before him.

        Punishment doesn't stop this sort of behavior; removing the possibility of anyone having that level of control over others is the only thing that does. But the richest and most powerful have always sought ways of amassing more power, not realizing that it leads to worse outcomes for everyone, including themselves. Horizon did a great job encapsulating that trait in Faro. But whether it's him, the people behind Skynet, the Matrix, or whatever other tech dystopia tech bros seem pathologically unable to not try to make happen in the worst way possible, that's only the beginning. They seem to forget that even with advanced tech serving their needs and wants (which won't help their mental health), the people lower down on the rungs of society have brains, wants, and needs, and more expertise in all sorts of things than the 1%, except for mass exploitation.

        This inevitably goes wrong in one of a few ways:

        1. Everyone dies from the tech, or so many die that societal collapse is inevitable, and even if society survives it can't functionally reconstitute itself.
        2. They win, and kill off or suppress so much of society that it becomes less productive; instead of fighting the powerful, people flee or stop generating wealth for the rich wherever they don't have to, maybe to rise up again later, or the regional economy just ignores them completely and the government protects itself from its own people more than anything else.
        3. A revolution, with terror campaigns against any and all who can be credibly accused of being part of the former tyrants' regime.

        In all three cases the rich end up poorer overall, because wealth flees or dies in autocracy.

  • percent@infosec.pub · 25 points · 4 hours ago

    Seems like they were operating with a pile of bad practices, then threw AI into the mix.

    Neural networks are approximation algorithms. There's a reason LLMs are generally more productive with statically typed languages, TDD, etc. They need those feedback loops and guard rails, or they'll just carry on as if they never make mistakes (which tends to have a compounding effect).

    If you want to use AI safely, you should be more defensive about it. It will fuck up; plan accordingly.
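
    A concrete example of what I mean by a guard rail, as a rough sketch (the patch callables are placeholders, and it assumes pytest is the project's test runner): the model's change only sticks if the existing test suite still passes.

```python
# Hedged sketch: treat the test suite as the feedback loop an LLM otherwise
# lacks. An AI-suggested patch is only kept if the tests still pass.
import subprocess
import sys
from typing import Callable

def tests_pass() -> bool:
    """Run the project's test suite; any nonzero exit counts as failure."""
    result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
    return result.returncode == 0

def accept_ai_patch(apply_patch: Callable[[], None],
                    revert_patch: Callable[[], None]) -> bool:
    """Apply an AI patch, keep it only if the guard rail (tests) holds."""
    apply_patch()
    if tests_pass():
        return True
    revert_patch()  # the model fumbled; roll back automatically
    return False
```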

    • Kage520@lemmy.world · 10 points · 3 hours ago

      There really should be a certification course for using AI safely. I'm slop coding a hobby app and I'm shocked at how much it FEELS like it can do, because it can do amazing things, yet it fails in the strangest ways. When it feels like it can get away with it, it forgets earlier discussions and moves on without them. So you can spend time hammering out a whole section of code, then move on, and the AI will rip out everything that references that code, think of a different approach in the moment, and code that in instead. It won't be the same. It probably won't work, or at least won't pass all the test cases. But if you aren't paying attention and keep coding, the original part of your project is no longer functioning and you won't understand why. And every step of the way it's confident in its answers, so you won't suspect that it fundamentally no longer understands the project.

      • ExFed@programming.dev · 6 points · 2 hours ago (edited)

        As someone who started writing software over 20 years ago (yikes, I feel old), I feel like a lot of the best practices I've come to appreciate are really just strategies for mitigating future pain or boring/uninspiring work. When a machine that feels nothing eliminates most of the cost of rewriting everything from scratch, "best practices" kinda lose their meaning.

        Edit: confusing sentence order.

        • Rooster326@programming.dev · 2 points · 2 hours ago (edited)

          I feel like a lot of the best practices I’ve come to appreciate are really just strategies for mitigating future pain or boring/uninspiring work.

          And now you know the difference between Intelligence and Wisdom.

          Also everything has a cost. The only time something has no cost is when you decide your life, your time, is meaningless.

      • mark@programming.dev · 5 points · 3 hours ago (edited)

        yup, and when you DO catch it spitting out nonsense, it'll say "oh you're right, let me change that"… 🙄 like, why do I have to tell you that you're wrong about something? You should already know it's wrong and fix it without me ever pointing it out.

        • Rooster326@programming.dev · 9 points · 2 hours ago

          But it didn’t even understand it was wrong

          It can’t understand that. It can’t understand anything

          The human-feedback algorithm dictates that humans prefer to receive an apology, so it apologizes.

        • SparroHawc@lemmy.zip · 7 points · 2 hours ago

          That’s because it doesn’t really ‘know’ things in the same way you and I do. It’s much more like having a gut reaction to something and then spitting it out as truth; LLMs don’t really have the capability to ruminate about something. The one pass through their neural network is all they get unless it’s a ‘reasoning’ model that then has multiple passes as it generates an approximation of train-of-thought - but even then, its output is still a series of approximations.

          When its training data had something resembling corrections in it, the most likely text that came afterwards was ‘oh you’re right, let me fix that’ - so that’s what the LLM outputs. That’s all there is to it.

      • Rooster326@programming.dev · 4 points · 3 hours ago (edited)

        There is a course. It’s called experience. Common sense.

        All that any 4-hour YouTube/LinkedIn Learning course would do would be to perpetuate this idea that developers aren't necessary. Take this course, buy these tokens, and become a Based God.

  • thedeadwalking4242@lemmy.world · 11 points · 4 hours ago

    Gonna be honest: it's not a good backup if this can possibly happen. Like, LLM agents are dangerous, but if one can just delete everything in 9 seconds then you need to rethink your security practices. No one employee should have that much power.

    • corsicanguppy@lemmy.ca · 4 points · 3 hours ago

      There are rules for backups and role separation. Some of that is in ISO 27002, and none of it is even known by these lost boys, bereft of proper mentorship and buoyed by their own accidental success.

  • LordCrom@lemmy.world · 31 points · 5 hours ago

    This was the exact plot of Silicon Valley when Son of Anton deleted the entire codebase as the most efficient way to remove bugs.

  • Fmstrat@lemmy.world · 72 points · 7 hours ago

    This guy.

    The PocketOS boss puts greater blame on Railway’s architecture than on the deranged AI agent for the database’s irretrievable destruction. Briefly, the cloud provider’s API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.

    Oh look, they have project-level tokens: https://docs.railway.com/integrations/api#project-token

    They chose to give it full account access, including to production. But ohhhh nooooo it’s not MYYYY fault!
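
    For what it's worth, the missing safety layer isn't complicated. Here's a rough sketch (none of this is Railway's real API; the operation names and environment variable are made up) of an agent-facing wrapper that only ever holds a project-scoped token and refuses destructive calls without a human sign-off:

```python
# Hypothetical wrapper between an AI agent and a cloud provider's API.
# Not Railway's actual SDK; all names here are illustrative only.
import os

# Operations we consider destructive (hypothetical op names).
DESTRUCTIVE = {"delete_volume", "delete_database", "drop_environment"}

def call_provider(operation: str, confirmed_by: str | None = None) -> None:
    # The agent only ever sees a project-scoped token, never an
    # account-wide one with blanket access to every environment.
    token = os.environ["PROJECT_SCOPED_TOKEN"]
    if operation in DESTRUCTIVE and confirmed_by is None:
        raise PermissionError(
            f"{operation!r} requires explicit human confirmation"
        )
    # ... perform the actual API call with `token` here ...
```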

      • Fmstrat@lemmy.world · 21 points · 6 hours ago

        Oh yes, I skipped that part. Railway specifically explains their solutions are self-managed. If they were doing pgdumps to the same volume, that’s on them.

        If Railway loses business over this, they may have a libel claim. They’d never do it, but it wouldn’t be invalid.

        • el_abuelo@programming.dev · 8 points · 6 hours ago (edited)

          "It wouldn't be invalid" isn't the worst double negative in the world, but it would be valid to say that it was unpleasant to read it when you could have used a less misdirecting choice of prose that wouldn't have had such a negative effect on my reading comprehension. That is to say that I could have enjoyed it less, but I certainly didn't enjoy it as much as I could have if you hadn't used the double negative when a single positive wasn't any further from reach.

      • Bilb!@lemmy.ml · 7 points · 6 hours ago

        That doesn't even really qualify as a backup. A snapshot, maybe.

  • SabinStargem@lemmy.today · 58 points · 7 hours ago

    This isn't an AI problem, this is a "Don't allow anyone access to your backups without following protocol" problem.

    • Encrypt-Keeper@lemmy.world · 23 points · 6 hours ago

      this is a "Don't allow anyone access to your backups without following protocol" problem.

      Congratulations, you just identified the AI problem.

        • Encrypt-Keeper@lemmy.world · 1 point · 30 minutes ago

          Yes, that's right: the protocols we humans have had for giving only trusted, reliable people this level of access to infrastructure predate LLMs, and they were a great way to stop this from happening.

          However, the AI is here now, and when you give an autonomous agent with known hallucination problems the ability to act on your behalf, with your IaC, on your infra provider, this kind of thing is an inevitability.

        • Encrypt-Keeper@lemmy.world · 4 points · 5 hours ago (edited)

          Seems to be, yes. The AI had the access it needed to do the job it was given, and that access allowed it to cause the problem.

          The alternative that would have prevented this issue was to not use AI for this.

          • luciferofastora@feddit.org · 2 points · 4 hours ago

            A human with the same permissions would have been capable of fucking up too. Giving the equivalent of a junior dev with a learning disability the keys to the whole place is just dumb.

            (Relying on AI is dumb anyway, but that’s not the biggest issue in this specific case)

            • Encrypt-Keeper@lemmy.world · 1 point · 35 minutes ago (edited)

              Giving the equivalent of a junior dev with a learning disability the keys to the whole place is just dumb.

              Correct. You too have now identified the AI problem. This was the job of a human senior infrastructure engineer that they delegated to an AI agent. They’ve found out why it’s not an AI’s job.

  • Xerxos@lemmy.ml · 9 points · 6 hours ago (edited)

    Doesn't anyone restrict their AI's rights? An AI should not be allowed to delete the backups. Only someone with admin rights should be able to do that. Normal users, developers, and AIs, of course, should not have the right to touch the backups. Do these people run AI agents as root?
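
    Apparently not. For contrast, here's a rough sketch of what least privilege looks like in a home-grown agent tool layer (every name here is hypothetical): the dispatcher only knows about read/deploy tools, so "delete the backups" isn't even something the agent can express, no matter what the model decides.

```python
# Hypothetical least-privilege tool layer for an AI agent.
# The agent can only call tools explicitly registered for it; backup
# deletion and root-level actions simply aren't in its vocabulary.
from typing import Any, Callable

def read_logs(service: str) -> str:
    return f"(logs for {service})"        # stub for illustration

def run_tests() -> bool:
    return True                           # stub for illustration

AGENT_TOOLS: dict[str, Callable[..., Any]] = {
    "read_logs": read_logs,
    "run_tests": run_tests,
    # note: no delete_backup, no drop_database, no shell-as-root
}

def dispatch(tool: str, **kwargs: Any) -> Any:
    """Route an agent's tool call, rejecting anything not allowlisted."""
    if tool not in AGENT_TOOLS:
        raise PermissionError(f"agent may not call {tool!r}; that needs an admin")
    return AGENT_TOOLS[tool](**kwargs)
```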

  • wonderingwanderer@sopuli.xyz · 42 points · 8 hours ago

    That’s fucking hilarious. How many instances of this have there been now? And companies keep doubling down on AI? Fucking idiots. I’m not even savvy enough to call myself an amateur, and I know better than to make such a series of obvious mistakes that predictably led to this outcome.

    One possible concern, amid the amusement, is whether Anthropic programmed Claude to punish companies it sees as potential competition. Or is this just a completely bonkers, off-the-rails LLM making terrible decisions because it's just a probabilistic model and not actually capable of abstract cognition?

    Either way, these people are idiots for giving a machine program enough permissions to wipe their drives, they’re idiots for storing their backups on the same network as their main drives, and they’re idiots for trusting a commercial LLM API, when it would be cheaper to self-host their own.

    • rumba@lemmy.zip · 8 points · 7 hours ago

      AI writes code

      User vets code

      User runs code

      If you’re not lock-step watching that shit, you need to just be doing it yourself.
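
      Even something as dumb as a hard gate helps. A rough sketch (function names are illustrative, not any real tool's API) of making the "user vets code" step mandatory instead of optional:

```python
# Hedged sketch of a human-in-the-loop gate: nothing the AI generated runs
# until a person has read the diff and typed an explicit yes.
import subprocess
import sys

def human_approved(diff: str) -> bool:
    """Show the proposed change and require an explicit 'y' to proceed."""
    print(diff)
    return input("Run this change? [y/N] ").strip().lower() == "y"

def run_vetted(diff: str, script_path: str) -> None:
    if not human_approved(diff):
        raise SystemExit("rejected by reviewer")
    subprocess.run([sys.executable, script_path], check=True)
```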

      • Landless2029@lemmy.world · 7 points · 6 hours ago

        The problem is the owning class wants to cut out the human element so badly that they keep letting tools run wild.

      • wonderingwanderer@sopuli.xyz · 1 point · 1 hour ago

        The point of what? The push for AI in industry?

        You’d have to ask someone else. I can only make conjectures, but I’d say it has something to do with companies feeling the need to justify to their shareholders that their investments in AI were worth it, so they double down on the sunk cost fallacy. Or maybe those shareholders also own stock in big-name AI companies. It’s hard to say exactly…

  • sp3ctr4l@lemmy.dbzer0.com · 6 points · 5 hours ago (edited)

    It's like you could make a cheesy, 90s-style shock-drama TV show out of these:

    Tales From The Git: When CEOs Think They Can Code

    … and then it's got the UNSOLVED MYSTERIES kind of dramatic music and lighting, with some solemn old dude with a gravelly voice narrating it, giving tallies of the estimated $$$ destroyed by each incident and the job losses within 6 months to a year.