• Gsus4@mander.xyz · 5 points · 2 hours ago

    Turns out when you’re told to increase your output to replace 5 colleagues with LLMs…there is no time to find and fix all the bugs.

  • ctry21@sh.itjust.works · 12 points · 4 hours ago

Can’t sabotage what’s already broken. The rare times I’ve been asked to use it for a piece of work, the output has been so shit and full of errors that it would have been easier to do it by hand as a human.

  • RunawayFixer@lemmy.world · 36 points · 6 hours ago

    “intentionally using low-quality AI output in their work without fixing it”

This reads like victim blaming or scapegoating. The AI company makes shoddy software that outputs faulty results, users output faulty results when using that software, and now the AI company blames the users for outputting faulty results. That some (but likely not all) users know the results are faulty doesn’t change the fact that the software itself is faulty.

    • krispyavuz@lemmy.world · 7 points · 5 hours ago

Of course it’s victim blaming! The title is enough of a hint. We are faulty because we don’t use an artificial mind as well as we can use our own! /s

  • it_depends_man@lemmy.world · 68 points · 8 hours ago

    A new report

    BY THE AI COMPANY “WRITER”

and research firm Workplace Intelligence found a massive portion of workers across the US, UK, and Europe are intentionally trying to sabotage their bosses’ AI initiatives.

    Please don’t spread obviously doctored “reports”.

  • greyscale@lemmy.grey.ooo · 52 points · 9 hours ago

    Good.

    It is morally and ethically the right thing to do.

    Also, did you know it is ethically and morally correct to firebomb datacenters? They’re being used for structural violence, and are basically piñatas.

    • Bluegrass_Addict@lemmy.ca · 30 points · 8 hours ago

      …workers admitted to sabotaging their company’s AI by entering proprietary info into public AI chatbots, using unapproved AI tools, or intentionally using low-quality AI output in their work without fixing it.

tbh it just reads like people are just using the AI, not actually sabotaging it. lol it’s such trash

      • Monument@piefed.world · 6 points · 5 hours ago

“We have poor customer data safeguards, confidently present subpar work as acceptable, and have failed to adequately train our intended users, but would like you to believe it’s all the users’ fault.”

      • cabbage@piefed.social · 22 points · 8 hours ago

        “workers admitted to sabotaging their company’s AI by […] intentionally using low-quality AI output in their work without fixing it”

        Lol. Sounds an awful lot like the company is sabotaging itself in this case.

      • greyscale@lemmy.grey.ooo · 12 points · 8 hours ago

The sabotage narrative did feel weak when I was listening to Natasha Bernal talking. It’s probably not sabotage; it’s just that their data is wank and the employees aren’t paid enough to care to fix it.

        • WanderingThoughts@europe.pub · 6 points · 5 hours ago

          Just like just doing your job is quiet quitting, AI sabotage means not spending unpaid overtime to completely redo the slop.

  • T156@lemmy.world · 12 points · 7 hours ago

The categories they used for “sabotage” (entering proprietary information into a different AI, using unapproved chatbots, and using low-quality AI responses as-is) seem like they were put together so the failure of the AI rollout can be blamed on employee sabotage, rather than on employers wedging it onto a bad use case or not rolling it out properly.

The first two just seem like the company having issues with people going straight to ChatGPT and using that as-is, and the third seems to be more a case of people not really caring and using the AI output as instructed.

None of that comes across as the outright sabotage the organisation or article tries to imply. All three seem like reasonable end-points of telling people to use AI and giving them metrics they need to meet, or a not-great interface, so they just go off and use a different AI thing, because it’s all AI and basically the same thing, right?

  • brsrklf@jlai.lu · 5 points · 8 hours ago

    sabotaging their company’s AI by entering proprietary info into public AI chatbots, using unapproved AI tools,

This is counter-productive and can get you in big trouble IMO. I don’t even get what these people are protesting.

    or intentionally using low-quality AI output in their work without fixing it.

    This is better and I think I would totally do this if management forced me to use AI. If they want to pretend using this thing is a better use of my time, I’ll give them what they want.

    Fortunately I am working for an administration that has had rather tame expectations for gen AI use till now. They’re basically just like “experiment if you want, be careful and use what works for you”. So I just keep doing what I always did.

    • theunknownmuncher@lemmy.world · 8 points · edited · 5 hours ago

      I don’t even get what these are protesting.

      It doesn’t make sense because the protest is an invention.

      or intentionally using low-quality AI output in their work without fixing it.

Translated: “our software tool works poorly and produces bad output. If workers do not manually fix the output, then they are InTeNtIoNaLlY sAbOtAgInG our business. Responsibility should be on the workers to fix our product’s flaws.”

      • brsrklf@jlai.lu · 4 points · edited · 4 hours ago

        That would certainly explain it.

I guess the story they’re trying to push is “People intentionally use bad AI just to give officially supported, good AI a bad name!” And that’s quite a ridiculous claim.