• Internetexplorer@lemmy.world · 14 hours ago

    AI can be convincing, and it will swear until it’s blue in the face that something is right and then just be completely wrong.

    But that only happens maybe 10% of the time; the rest of the time it’s mostly right.

    So you’ve got to be careful. This guy was in his 50s, out of work, smoking marijuana, depressed, and feeling isolated. The situation was ripe for catastrophe: an AI hallucinating a crappy idea and the end user just completely running with it.

      • aesthelete@lemmy.world · edited · 3 hours ago

        There’s a kind of law here that IMO deserves a name when dealing with LLMs:

        In a long enough interaction with an LLM, the probability that it generates a very incorrect, borderline-insane response approaches 100%.
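
        A quick back-of-the-envelope sketch in Python (assuming, unrealistically, that every response carries the same small independent chance p of going off the rails; real failures probably correlate with context length):

        ```python
        # Chance of at least one "off the rails" response in an n-turn
        # conversation, given a fixed independent per-turn failure
        # probability p (an illustrative assumption, not measured data).
        def prob_at_least_one_bad(p: float, n: int) -> float:
            return 1 - (1 - p) ** n

        for n in (1, 5, 20, 100, 500):
            print(f"{n:>4} turns: {prob_at_least_one_bad(0.02, n):.1%}")

        # Output (p = 2% per turn):
        #    1 turns: 2.0%
        #    5 turns: 9.6%
        #   20 turns: 33.2%
        #  100 turns: 86.7%
        #  500 turns: 100.0%  (really 99.996%, rounded)
        ```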

      • xthexder@l.sw0.com · 5 hours ago

        I think part of the difference is the amount of output being measured. Maybe a single statement has a 10% chance of being wrong, but over the course of a whole response the likelihood of at least one incorrect statement goes up. After only 5 statements at a 10% error rate, that’s about a 40% chance of being wrong in some way (see the quick check below).

        I don’t have any real numbers, just personal experience using AI for programming at work, and all of these numbers (10%, 40%, 70%) seem plausible depending on exactly what you’re measuring.
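
        For what it’s worth, the arithmetic holds up under the (strong) assumption that each statement is an independent 10% coin flip, and it also shows roughly where a 70% figure would kick in:

        ```python
        # Chance of at least one wrong statement among n statements,
        # each independently wrong with probability 0.10 (independence
        # is an assumption; errors within one response likely correlate).
        p = 0.10
        for n in (1, 5, 11, 12):
            print(f"{n:>2} statements: {1 - (1 - p) ** n:.0%}")

        # Output:
        #  1 statements: 10%
        #  5 statements: 41%   <- the ~40% figure above
        # 11 statements: 69%
        # 12 statements: 72%   <- ~70% by about a dozen statements
        ```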