• Sunsofold@lemmings.world · 10 hours ago

    There’s a lot of ink spilled on ‘AI safety’ but I think the most basic regulation that could be implemented is that no model is allowed to output the word “I” and if it does, the model designer owes their local government the equivalent of the median annual income for each violation. There is no ‘I’ for an LLM.

    • Credibly_Human@lemmy.world · 6 hours ago

      It's this type of knee-jerk, reactionary opinion that I think will ultimately let the worst of the worst AI companies win.

      Whether an LLM says 'I' or not literally does not matter at all. It's not relevant to any of the problems with LLMs/generative AI.

      It doesn’t even approach discussing/satirizing a relevant issue with them.

      It's basically satire of a strawman who thinks LLMs are closer to being people than anyone, even the most AI bro of AI bros, thinks they are.

      • Sunsofold@lemmings.world · 5 hours ago

        No, it’s pretty much the opposite. As it stands, one of the biggest problems with ‘AI’ is when people perceive it as an entity saying something that has meaning. Phrasing LLM output as ‘I think…’ or ‘I am…’ makes it easier for people to assign meaning to the semi-random outputs, because it suggests there is an individual whose thoughts are being verbalized. Having that framing is part of the trick the AI bros are pulling. Making it harder for the outputs to sustain the pretense of sentience would, I suspect, make them less harmful to people who engage with them in a naive manner.

        • Credibly_Human@lemmy.world · 4 hours ago

          No, it’s pretty much the opposite. As it stands, one of the biggest problems with ‘AI’ is when people perceive it as an entity saying something that has meaning.

          This has to be the least-informed take I have seen on anything, ever. It dismisses all the most important issues with AI and pretends that the “real” problem (as if there were only one that matters) is people misunderstanding it in a way I see no one actually doing.

          It’s clear to me you must be so deep into an anti-AI bubble that you have no idea how people who use AI think about it, how it’s used, why it’s used, or what the problems with it are.