• tal@lemmy.today

    > I don’t know: it’s not just the outputs posing a risk, but also the tools themselves

    Yeah, that’s true. Poisoning a model’s training corpus is at least a potential risk, and there’s now a whole field of AI security work aimed specifically at securing LLMs.

    > it shouldn’t require additional tools, checking for such common flaws.

    Well, we are using them today for human programmers, so… :-)
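    The checks don’t care who wrote the code. A minimal sketch (assuming flake8 and bandit are installed; the script and file path are just illustrative) of running the same tools on a file whether it came from a person or an LLM:

    ```python
    import subprocess
    import sys


    def check_source(path: str) -> bool:
        """Run the same checkers we already point at human-written code."""
        checks = [
            ["flake8", path],   # style and common-mistake linting
            ["bandit", path],   # basic security linting
        ]
        ok = True
        for cmd in checks:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                print(f"{' '.join(cmd)} flagged issues:")
                print(result.stdout)
                ok = False
        return ok


    if __name__ == "__main__":
        # Same gate whether the file was typed by a human or generated.
        sys.exit(0 if check_source(sys.argv[1]) else 1)
    ```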