• T156@lemmy.world
    8 hours ago

    The categories they used for “sabotage” (entering proprietary information into a different AI, using unapproved chatbots, and using low-quality AI responses as-is) seem like they were put together so the failure of the AI rollout can be blamed on employee sabotage, rather than on employers wedging AI into a bad use case or not rolling it out properly.

    The first two just seem like the company having issues with people going straight to ChatGPT and using its output as-is, and the third seems more like people not really caring and using the AI output because they were required to.

    None of that comes across as outright sabotage, as the organisation or the article seem to imply. All three seem like reasonable end-points of telling people to use AI and giving them metrics to meet, or handing them a not-great interface, so they just go off and use a different AI tool, because it’s all AI and basically the same thing, right?