• isleepinahammock@lemmy.blahaj.zone · 2 hours ago

    The real issue is that any fingerprint mandated for AI content must be algorithmically implemented, which means it can also be algorithmically removed.

    For example, say companies voluntarily adopt, or are forced to integrate, text fingerprinting into LLM output. Automated AI-writing detection tools already exist, but they’re unreliable. In principle, though, LLM output could be made easy to identify: force the models to adopt subtle but highly distinctive patterns of word choice, punctuation, sentence structure, and so on. Then if a student uploaded an LLM-generated essay to their course website, the system could flag it as AI-generated with high accuracy.
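    To make the idea concrete, here’s a deliberately toy sketch (not any real watermarking scheme, and far cruder than the statistical word-choice patterns described above): the “fingerprint” is an invisible zero-width space inserted after each sentence, and the detector simply checks for that marker.

```python
# Toy fingerprinting sketch. ZWSP and both function names are
# illustrative inventions, not part of any real LLM or detector.
ZWSP = "\u200b"  # zero-width space, invisible in most renderers

def fingerprint(text: str) -> str:
    """Mark text by following every sentence-ending period with a ZWSP."""
    return text.replace(". ", ". " + ZWSP)

def looks_fingerprinted(text: str) -> bool:
    """Flag text that contains the invisible marker."""
    return ZWSP in text

essay = "This is my essay. It has three sentences. Done."
marked = fingerprint(essay)
print(marked == essay)              # False: markers were added
print(looks_fingerprinted(marked))  # True
print(looks_fingerprinted(essay))   # False
```

    A real scheme would hide the signal in statistical patterns rather than literal characters, but the detection logic is the same kind of mechanical check.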

    But…if those patterns are so clear and unambiguous, they can also be easily detected by third-party tools. If one person can program ChatGPT to add a fingerprint to the text it generates, another person can write a program that strips that fingerprint from any text pasted into it.
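    And stripping the toy fingerprint above is a one-liner, which is the whole point of the argument: anything a detector can check for mechanically, a scrubber can remove mechanically. (ZWSP and the function name are the same illustrative inventions as before.)

```python
# Toy scrubber: removes the invisible marker, defeating detection.
ZWSP = "\u200b"  # zero-width space used as the toy fingerprint

def strip_fingerprint(text: str) -> str:
    """Delete every marker character from the text."""
    return text.replace(ZWSP, "")

marked = "Paragraph one. " + ZWSP + "Paragraph two."
clean = strip_fingerprint(marked)
print(ZWSP in clean)  # False: the fingerprint is gone
print(clean)          # Paragraph one. Paragraph two.
```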