From the maybe-we-should-have-done-that-to-start dept:

The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny.

The change comes after the company, which lets users create characters and hold open-ended conversations with them, faced tough questions over how these AI companions affect teen and general mental health, including a lawsuit over a child’s suicide and a proposed bill that would ban minors from conversing with AI companions.

“We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens,” the company wrote in its announcement. “We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly.”

  • Gamma@beehaw.org · 24 hours ago

    Drop the wild speculation; there is zero reason to play devil’s advocate. If you cared to do any reading, there are myriad examples of this company’s LLMs pushing harmful behavior.

    Yes, there are probably other factors. There always are. It might not be what you meant, but you are saying that the companies selling these products should get off for free because they “would’ve done it anyway.”

    • thingsiplay@beehaw.org · 24 hours ago

      but you are saying that the companies selling these products should get off for free because they “would’ve done it anyway”

      I am not saying that. Did you not read the last part of my reply?

      • Gamma@beehaw.org · 23 hours ago

        I did; it was full of speculation about something you admitted you had no idea about.

        • thingsiplay@beehaw.org · 23 hours ago

          It is not just speculation; it is a warning not to simply believe alleged accusations. We have seen this many times with politicians too, pointing at AI to hide their real problems. So I ask you: do you have proof that all of the accusations are true, that the kid died because of the AI and had no suicidal problems before?

          But yes, it’s easy to say “you have no clue” instead of coming up with facts. It’s easier that way to point the finger and believe what you want to believe. Plus, I said that if it’s true at all, then I am for regulation. You instead ignore all of my points and say “you have no clue.” I wonder if you have any clue what you are talking about.

          Edit: And then you put words in my mouth that I did not say at all. Just delusional. Believe what you want then and ignore the real problems. Not worth my time here.