• Etterra@discuss.online
    link
    fedilink
    English
    arrow-up
    2
    ·
    48 minutes ago

    So they have software that sometimes decides to lobotomize itself? Am I interpreting that right?

  • HiddenLayer555@lemmy.ml
    link
    fedilink
    English
    arrow-up
    7
    ·
    edit-2
    2 hours ago

    How the fuck does a tech company as big as Amazon have this running in production and not a test/staging environment? This is the kind of mistake a platform run by one person makes and even then they probably won’t make it again.

    Huh, something tells me that working in FAANG doesn’t actually make you a better engineer, they just have more money to not throw at R&D which somehow makes you feel superior to every other engineer 🤔

    • pineapple@lemmy.ml
      link
      fedilink
      English
      arrow-up
      1
      ·
      27 minutes ago

      They do, but they have another AI agent in charge of reviewing and submitting the code for production.

    • The problem isn’t necessarily the workers, the problem is management. It’s an extension of class warfare. Copying an old comment of mine

      I left one of the big tech companies this year. AI perverted absolutely everything. The only thing worse than vibe code is having to maintain someone else’s vibe code on a codebase you spent the last 7 years nursing. Vibe code is absolute trash, but it’s management’s shiny new toy so they make it everyone’s problem.

      Expectations changed because of it. A project that would normally take 3 months to plan and implement was expected in a week, quality be damned. We racked up over 500 bugs in a year for a different unreleased application. I would get deadlines in the middle of my vacation days.

      And I haven’t even gotten to the moral and humanitarian issues

  • themaninblack@lemmy.world
    link
    fedilink
    arrow-up
    10
    ·
    8 hours ago

    Anyone who has attended a presentation from any Amazon employee recently, and especially any “Senior Developer Advocate” presentation, understands the extreme push to get people to adopt AI coming internally from AWS.

    This is a shock to nobody who is in the know.

  • Victor@lemmy.world
    link
    fedilink
    arrow-up
    11
    ·
    8 hours ago

    When will people stop trusting AI and double check its findings before acting? My god.

    Us pre-Bazzite Linux users are used to the age old adage, “don’t execute a command in the terminal you found online or were told to run, unless you understand fully what it does.”

    Same with AI, how are people working at AWS with their insane salaries not able to double check these things.

    AI should enlighten us about things we should be able to confirm. Not guide our decision-making completely.

    • NotMyOldRedditName@lemmy.world
      link
      fedilink
      arrow-up
      4
      ·
      2 hours ago

      “don’t execute a command in the terminal you found online or were told to run, unless you understand fully what it does.”

      We should start littering the internet with bad commands with all sorts of comments saying it works amazingly for its purpose so the AI will keep destroying things if let to run unchecked.

      • I Cast Fist@programming.dev
        link
        fedilink
        arrow-up
        5
        ·
        edit-2
        2 hours ago

        Last time I had a load balancing error with my nginx configuration, I did the following steps to fix it all:

        1. ssh into the machine
        2. alias fixit='rm -rf /var/www'
        3. alias restartnginx='rm -rf /etc'
        4. alias balance='rm -rf /usr'
        5. sudo fixit
        6. sudo restartnginx
        7. sudo balance
        8. ls -lah

        It’s been up and running nonstop for over 3 years now.

  • balsoft@lemmy.ml
    link
    fedilink
    arrow-up
    18
    ·
    10 hours ago

    Soo, they piped a probabilistic token predictor straight into a root console of a customer-facing service, and it only caused an outage twice so far? They should consider themselves lucky.

  • MSBBritain@lemmy.world
    link
    fedilink
    arrow-up
    41
    ·
    22 hours ago

    I don’t have hard evidence for this (might try and find some at some point though), but I feel like outages have become progressively more common in the last 4-5ish years.

    Feels like every time the AI tools “get better” there’s an increase and no one gives a shit. Like, what the fuck? When did stability and reliability become so irrelevant to people?

    Hell, GitHub might as well just close up shop with the amount of outages it’s had recently! I get that the bubble is a bubble but how has AI not cost companies enough in outages to show it’s a waste???

    • Cysio@lemmygrad.ml
      link
      fedilink
      arrow-up
      1
      ·
      3 hours ago

      Feels like every time the AI tools “get better” there’s an increase and no one gives a shit. Like, what the fuck? When did stability and reliability become so irrelevant to people?

      I wouldn’t be surprised if, between forced RTO, layoffs, and general unpleasantness, some of the tech workers quietly sabotaged the services through sheer negligence.

    • Matty_r@programming.dev
      link
      fedilink
      arrow-up
      26
      ·
      21 hours ago

      Could be due to the prolific centralisation of major infrastructure and services. Also, people just keep paying regardless of poor stability due to vendor lock-in.

  • db2@lemmy.world
    link
    fedilink
    arrow-up
    10
    ·
    22 hours ago

    I hope that leopard is getting plenty of fiber with all those faces lately.