• Buddahriffic@lemmy.world
    1 day ago

    It’s going to get even more dangerous over time because the LLMs are coming out of the uncanny valley but still have subtle problems — they’re just getting harder to spot. I just did a bunch of AI training at my company, and on the one hand some of it was already out of date, while on the other hand the language used to describe the tools gave them a lot more credit than they deserved.

    People are already thinking it can do things it really can’t. Like think or analyze.

    I’ve been in this cycle since the first time I interacted with an LLM or AI coding system. At first it looks impressive and I’m not sure what its limits are. Then I slam into a wall that makes me realize, in horror, that its capabilities are far less than they seemed at first. Then improvements come out and I repeat the whole process, because the previous wall seems to have been dealt with and it becomes hard to argue with the people who are gung ho for AI.