Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms, consuming 172 billion tokens over more than 4,000 runs, we find that the answer is "substantially, and unavoidably." Even under optimal conditions (the best model, with the temperature chosen specifically to minimize fabrication), the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.
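Not from the article itself, but as a rough illustration of what a per-model fabrication rate like the 1.19% figure reduces to, here is a minimal sketch; the `runs` structure, the `fabricated` flag, and the grouping keys are all assumptions, not the authors' actual pipeline or data format.

```python
from collections import defaultdict

def fabrication_rates(runs):
    """Tally fabricated answers per (model, context_length) and return percentages.

    `runs` is assumed to be an iterable of dicts like
    {"model": "GLM 4.5", "context": 32_000, "fabricated": True} -- a guess at
    the shape of the benchmark output, not the paper's real format.
    """
    totals = defaultdict(int)
    fabricated = defaultdict(int)
    for run in runs:
        key = (run["model"], run["context"])
        totals[key] += 1
        if run["fabricated"]:
            fabricated[key] += 1
    return {key: 100.0 * fabricated[key] / totals[key] for key in totals}

if __name__ == "__main__":
    # Hypothetical sample: 1 fabricated answer out of 4 runs -> 25.0%
    sample = [
        {"model": "GLM 4.5", "context": 32_000, "fabricated": False},
        {"model": "GLM 4.5", "context": 32_000, "fabricated": True},
        {"model": "GLM 4.5", "context": 32_000, "fabricated": False},
        {"model": "GLM 4.5", "context": 32_000, "fabricated": False},
    ]
    print(fabrication_rates(sample))  # {('GLM 4.5', 32000): 25.0}
```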

  • HubertManne@piefed.social · 3 days ago

    This is why I would encourage people to use LLMs for things that aren't important, like video games or hobbies. You will likely have enough knowledge about those topics to catch the "hallucinations," and hopefully that will give you perspective on their use for more important things.