• pelespirit@sh.itjust.works
    17 hours ago

    So correct me if I’m wrong, but the following happens for AI:

    • Company sets guidelines and parameters for the project
    • Company trains the AI on whatever data it has
    • No matter the data, the AI still gives a general answer or summary
    • The answers are sometimes confidently incorrect
    • The AI is hard to control because it treats the data as general, loosely applied guidelines
    • Past a certain tipping point there’s no way to rein it in, because its “learning” is probabilistic rather than rule-based
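    A toy sketch of what that last bullet means in practice (all numbers here are invented for illustration): the model doesn’t store facts, it stores probabilities over the next word, and generation is weighted random sampling from that table. So a wrong answer is never impossible, just unlikely, and sometimes it gets picked and stated with full confidence.

    ```python
    import random

    # Hypothetical next-token probabilities for the prompt
    # "The capital of France is" -- invented numbers, not from any real model.
    next_token_probs = {
        "Paris": 0.90,   # correct continuation
        "Lyon": 0.07,    # plausible but wrong
        "Berlin": 0.03,  # confidently wrong
    }

    def sample_token(probs, rng):
        """Pick a token at random, weighted by its probability."""
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return rng.choices(tokens, weights=weights, k=1)[0]

    rng = random.Random(0)
    samples = [sample_token(next_token_probs, rng) for _ in range(1000)]
    print(samples.count("Paris"), samples.count("Lyon"), samples.count("Berlin"))
    ```

    Run it enough times and “Berlin” shows up, delivered exactly as fluently as “Paris” — there’s no separate switch the company can flip to forbid the wrong draws without reshaping the whole distribution.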

    What I don’t get is: even if the data weren’t shitty like Reddit’s, would it still go off the rails? It sure seems like it.