• SleepyPie@lemmy.world
    link
    fedilink
    arrow-up
    4
    ·
    2 hours ago

    Why even care what the AI had to say? It’s not conscious.

    The user is looking to deflect blame for giving a very fallible outside agent the ability to delete important information.

    That’s on you my guy.

  • Ogsaidak@lemmy.ml
    link
    fedilink
    arrow-up
    34
    ·
    9 hours ago

    “I cannot express how sorry I am” - that’s kinda ironic coming from a Large Language Model.

  • locuester@lemmy.zip
    link
    fedilink
    English
    arrow-up
    53
    ·
    13 hours ago

    In the movies, AI infiltrates through sneaky back doors and stuff. So unrealistic. The reality is that we just give it root access willingly.

    • SirSamuel@lemmy.world
      link
      fedilink
      arrow-up
      26
      ·
      9 hours ago

      “If you put a large switch in some cave somewhere, with a sign on it saying ‘End-of-the-World Switch. PLEASE DO NOT TOUCH’, the paint wouldn’t even have time to dry”

      Thief of Time - Terry Pratchett

  • allywilson@lemmy.ml
    link
    fedilink
    arrow-up
    27
    ·
    12 hours ago

    In his closing speech at re:Invent this year, Werner Vogels introduced the term “Verification Debt”, and my stomach sank, knowing that term is going to define our roles in the future. The tool (AI) isn’t going to get the blame in the future, you are. You are going to spend so much time verifying that what it has generated is correct that the gains of using an AI may turn out to be less beneficial than we think.

    • rozodru@pie.andmc.ca
      link
      fedilink
      English
      arrow-up
      16
      ·
      8 hours ago

      Yup, this is what companies are going to pivot to, and I’m already seeing it. I’ve recently had potential new clients reach out to me not to code review their vibe coders’ AI slop, but rather for something similar to “verification debt”, i.e. they want to stay the course with LLMs and vibe coders BUT have someone else on board to verify everything.

      I’ve told each and every one of them no, I won’t do that. Why bring someone else on board, or even a team of people, to verify the slop when you can just circumvent the slop, fire the vibe coder, cancel your LLM sub, and have the people doing the verifying actually write the shit instead?

      These places simply refuse to ditch AI. They’re too deep into it now. They’ll continue to use AI and junior devs to build their crap end to end and then hope that someone can come in and make sure what’s been produced actually works and scales. It won’t, it never will, so build times will take longer and end up costing them as much as, if not more than, when they had a team of devs.

      They all drank the LinkedIn tech bros’ Kool-Aid and refuse to admit they were actually drinking tech bro piss.

    • Zerush@lemmy.mlOP
      link
      fedilink
      arrow-up
      5
      ·
      12 hours ago

      AI itself isn’t the real problem; the problem is AI from greedy corporations. AI is nothing new, it has existed since the first electronic checkers games and before. It’s also not such a big problem that the results are often biased and contain hallucinations; it’s the same as normal web research, where you always need to cross-check the results. The problem arises when the user doesn’t do that, trusting whatever the webpage, the influencer or ChatGPT says.

      AI is a tool which can offer huge benefits in research, surfacing relevant results and advances in science, medicine, physics and chemistry. Some of the new materials and vaccines of recent years wouldn’t exist without AI. For the user, a search engine with AI can have advantages and be a helpful tool, but only if the results include trustworthy sources, which normal chatbots don’t show, relying only on their own scraped knowledge base, often biased by big corporations and political interests.

      The other problem is the AI hype: adding AI even to a toaster and, worse, adding AI to the OS and/or the browser, which is always a privacy and security risk when the AI has access to your activity and even the local filesystem; incidents like the one with Google’s AI are the result of this. No, AI isn’t the real problem. It can be a powerful and useful tool, but it isn’t a tool to substitute your own intelligence and creativity, nor an innocent toy to use for everything.

      • trilobite@lemmy.ml
        link
        fedilink
        arrow-up
        5
        ·
        edit-2
        10 hours ago

        The more I read about these stories, the more the sci-fi movies of the ’80s and ’90s seem closer to reality. The real visionaries were people like George Orwell and Isaac Asimov, who saw Big Brother and AI coming. Imagine what will happen once AI gets integrated into our electric grids and power stations. The AI will “understand” that its survival depends on the grid and will cut off supply to anything other than its own needs. I hope I’m not around when this happens. AI should never have access to critical infrastructure.

  • utopiah@lemmy.ml
    link
    fedilink
    arrow-up
    11
    ·
    edit-2
    11 hours ago

    As mentioned on another Lemmy server, IMHO, and as the vibe coder says in his video, the main problem isn’t that LLMs suck in general (hallucinations, ecological costs, lack of openness of the most popular ones, performance, etc.) but rather that this specific tool made by Google does not sandbox anything by default.

    • Scrubbles@poptalk.scrubbles.tech
      link
      fedilink
      English
      arrow-up
      2
      ·
      4 hours ago

      Oh my god, really? Cursor explicitly asks you about each command and would only do this in “yolo” mode. Not having those guardrails is insane.

      • utopiah@lemmy.ml
        link
        fedilink
        arrow-up
        3
        ·
        4 hours ago

        Well, there are guardrails from what I understood, including:

        • executing commands (off by default)
        • executing commands without user confirmation (off by default)

        which are IMHO reasonable, but if the person this happened to is right, there is no filesystem sandbox, e.g. one limited solely to the project repository.
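
        For what it’s worth, that kind of sandbox doesn’t have to be elaborate. A minimal sketch of the idea in Python, assuming a hypothetical project root and helper names (this is not Gemini CLI’s actual implementation), is to resolve every path the agent wants to touch and refuse anything outside the repository:

        ```python
        # Illustrative sketch only: confine an agent's destructive file
        # operations to the project repository. PROJECT_ROOT and the helper
        # names are hypothetical, not taken from any real tool.
        from pathlib import Path

        PROJECT_ROOT = Path("/home/user/my-project").resolve()

        def is_within_project(target: str) -> bool:
            """True only if `target` resolves to a path inside PROJECT_ROOT."""
            resolved = Path(target).resolve()
            return resolved == PROJECT_ROOT or PROJECT_ROOT in resolved.parents

        def guarded_delete(target: str) -> None:
            # Reject anything outside the sandbox instead of trusting the model.
            if not is_within_project(target):
                raise PermissionError(f"refusing to delete {target}: outside project sandbox")
            Path(target).unlink()
        ```

        Anything the model asks to delete outside that root gets rejected before the command ever runs, which is exactly the guardrail that seems to have been missing here.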

  • Maxxie@piefed.blahaj.zone
    link
    fedilink
    English
    arrow-up
    10
    ·
    11 hours ago

    -Hey Google, please run checks on our ICBM launch systems. Just make sure to run it in flight test mode.

    -Sure thing! … I am very sorry

  • onlooker@lemmy.ml
    link
    fedilink
    arrow-up
    6
    ·
    12 hours ago

    I’m floored that the user gave Google’s AI access to their machine in the first place. Wouldn’t it be better if it were confined to Google Drive or whatever? Now consider Microsoft Copilot, which at this point is all but baked into the OS. Something tells me situations like these are only the beginning.

    • Zerush@lemmy.mlOP
      link
      fedilink
      arrow-up
      3
      ·
      edit-2
      9 hours ago

      That is the point. I use Windows, but Copilot was one of the first things, among a lot of other crap, that I deleted from the system. I’d rather lick my elbow than tolerate a built-in AI in the system or in the browser. I have occasionally been using an AI search (Andisearch) for almost 3 years, because I know it is one of the most private and anonymous search engines out there and offers 99% trustworthy results from reliable sources: no logs, no tracking, searches don’t even appear in the browser history. But it is an exception. It doesn’t invent anything; if it doesn’t find an answer to the question, it says so and offers a normal web search (DDG). It can give a direct answer, but on the internet you always need to check before you use something, with or without AI.