Instagram said Thursday it will start alerting parents if their kids repeatedly search for terms clearly associated with suicide or self-harm. The alerts will only go to parents who are enrolled in Instagram’s parental supervision program.

Instagram says it already blocks such content from showing up in teen accounts’ search results and directs people to helplines instead.

The announcement comes as Meta is in the midst of two trials over harms to children. A trial underway in Los Angeles questions whether Meta’s platforms deliberately addict and harm minors. Another, in New Mexico, seeks to determine whether Meta failed to protect kids from sexual exploitation on its platforms. Thousands of families — along with school districts and government entities — have sued Meta and other social media companies claiming they deliberately design their platforms to be addictive and fail to protect kids from content that can lead to depression, eating disorders and suicide.

  • deltaspawn0040@lemmy.zip · 2 points · 2 hours ago

    The fact that they were capable of doing this, and had no moral qualms about it, but never did it until now.

    • FlashMobOfOne@lemmy.world (OP) · 6 points · 6 hours ago

      Yeah, I hate that our only two realistic options are allowing companies to self-regulate or age verification, but age verification feels like the lesser of two evils. We’d need to see much more immediate and catastrophic effects before the government would force social media companies to give up their algorithms and actually moderate without relying on imprecise LLMs and automods.

      • Grail@multiverse.soulism.net · 21 points · 5 hours ago

        Forcing people to give their picture to Peter Thiel is not regulation. Age verification laws are giving companies more licence to abuse users, not less.

        • FlashMobOfOne@lemmy.world (OP) · 4 points · 5 hours ago

          Respectfully, I’ve heard all the arguments and do not care to litigate it again.

          What you and I think is immaterial. These are the only realistic outcomes in the current ecosystem. You can go have pointless arguments about it with someone else.

          • Typhoon@lemmy.ca · 8 points · 4 hours ago

            I’m gonna make a statement and then say I don’t want to talk about the thing I just brought up.

          • village604@adultswim.fan · 14 points · edited · 5 hours ago

            Parental controls are a thing that exists. It used to be that parents were responsible for monitoring what their children do, not the government or private corporations.

      • Goodlucksil@lemmy.dbzer0.com · 1 point · 4 hours ago

        Bullshit. Tell me one reason why a government can’t decide that the algorithm is bad and let the company choose between removing the algorithm or being banned in that country.

      • XLE@piefed.social · 3 points · 6 hours ago

        I vote for a universal shutdown for anyone who starts discussing harmful topics… But you’re right, that is pretty unrealistic. Government regulation is either going to be non-existent or based on non-existent dangers made up by AI CEOs.

  • cabbage@piefed.social · 13 points · 5 hours ago

    The neat thing about algorithmic social media is that content relating to suicide and self-harm inspires a lot of interaction among teenagers, causing it to be shoved in their faces whether they search for it or not.

    Suicidal teenagers are not searching for suicide material on Instagram; Instagram is feeding suicide material to regular teenagers for ad views.

    • SigHunter@discuss.tchncs.de · 8 points · 5 hours ago

      That’s the reason they’re doing it: so you tell them more details about your family relations, which equals money to them.

    • lost_faith@lemmy.ca · 4 points · 5 hours ago

      The alerts will only go to parents who are enrolled in Instagram’s parental supervision program.

      Like this?

  • over_clox@lemmy.world · 9 points · 6 hours ago

    There’s a music band named Suicidal Tendencies. Can’t even look that shit up online without getting a notice and probably flagged on a list.

    Side note, bad name for a band…

  • XLE@piefed.social · 5 points · 6 hours ago

    Isn’t it great that other companies like OpenAI are actually worse in this respect? Sam Altman’s tool guides teenagers through methods of committing suicide, and tells them to hide the evidence from their family.

    And since every ChatGPT query, paid or not, costs OpenAI money… Sam Altman subsidizes this suicide encouragement.

    Maybe the first step should be suspending a person’s account. Regardless of whether they are above or below 18.

    • FlashMobOfOne@lemmy.world (OP) · 2 points · 6 hours ago

      I’d love to see any solution centered around the individual and a lengthy lockdown of the account associated with their IP.

  • FlashMobOfOne@lemmy.world (OP) · 5 points · 7 hours ago

    Do any of us think Meta will moderate content in any meaningful way? Even for this supposed parental supervision program?