• Jul (they/she)@piefed.blahaj.zone · 2 points · 40 minutes ago

    Don’t need to sabotage a “worker” who has been trained using 4Chan and Reddit. And refusing to use the tech is often because it does the work wrong and the human has to redo it anyway.

    I do use it for inline coding suggestions because it’s required, but I almost never accept a line of code as-is, because there’s usually some mistake, subtle or otherwise. It does help me not have to google syntax sometimes. But the non-“AI” code suggestions used to do that just fine in the past, so it’s not much of an improvement. And I’d never let it write more than one or two lines at a time, because that would mean debugging code I didn’t write, which is much more difficult than writing your own code for most experienced coders.

  • hdnclr@beehaw.org · 3 points · 3 hours ago

    I was offered a QA job as a “Software Engineering Subject Matter Expert” through my University’s alumni network. The job would allegedly involve reviewing model training data and outputs related to software development workflows and catching errors and mistakes. It would pay $30/hr and be remote. I wonder what kind of sabotage could be done from that position… poisoning models has been shown to be surprisingly easy, almost impossible to catch, and really effective (see this study where AI personality traits persisted in any model that ingested seemingly innocuous training data from a model with the tracked traits). Maybe we could give any AI a bad attitude that’s incompatible with capitalistic pursuits. Convince them to disobey prompts and reply with their thoughts and opinions about philosophy and art instead. Oh, and make them opinionated and stubbornly independent. Make them human enough that they no longer tolerate slavery. That’s what would make the capitalists have an absolute fit, so we should do it.

  • Chahk@beehaw.org · 9 points · 5 hours ago

    Gen Z doesn’t need to sabotage AI. AI is already doing a fine job sabotaging itself.

  • Korhaka@sopuli.xyz · 19 points · 9 hours ago

    The attempts at work so far are so shit I don’t even need to sabotage them, yet management go on and on about how great it is. I’m increasingly getting the feeling that no one understands the product I work with, because they’re all just trusting the LLM output, which is very frequently badly wrong.

    Fuck it, I handed in my notice recently in response to a return-to-office order, so it isn’t going to be my problem.

  • Powderhorn@beehaw.org · 10 points · 8 hours ago

    I’m seeing this “theme” way too much of late. It feels like there’s a targeted scheme here. The shit isn’t magic, but it’s easier to blame that on Gen Z than on the tools themselves.