I know this topic, like most online topics, is more about emotions than facts. But here we are. You probably don’t understand how Discord’s new policy is intended to help protect children, because you aren’t trained to think like a child predator. (That’s fine.) I’ve had to take a lot of child-safety training over the years for professional reasons. Here’s how online child predators work.

They start by finding a kid with a secret. Just as a lion will generally choose to attack the weakest gazelle, child predators go after vulnerable kids.

They find the kid with a secret and say, “Hey, want to see some porn?” Of course the kid is curious. The predator doesn’t start with anything overtly bad; this is a process for them. But they will tell the kid, “Be sure you don’t tell your parents about this. This is our secret.” Then they slowly pull the kid into deeper and deeper secrets and start to blackmail them. They demand that the kid send them nude photos, trapping the kid in deeper and deeper secrets and guilt to get more and more out of them. In the worst cases this results in in-person meetups with the predator.

The easiest places for predators to start this process are porn sites that kids are already visiting in secret, especially those on Discord, where messaging between users is the main feature. The kids visiting those spaces in secret are the most vulnerable.

So how is Discord’s policy supposed to protect kids? The goal is to keep the vulnerable kids out of spaces where they would be targeted to begin with.

So there you go. I’m all ears for how to do this better. That’s one beef I have with the EFF right now: they offer no alternative solutions to this. They just don’t want any kind of protections at all.

  • Skavau@piefed.social
    2 days ago

    You unironically think that governments are going after hosts that have, in many cases, less than 1000 active monthly users purely because they don’t have age-ID services on their platform?

    • 1dalm@lemmings.world (OP)
      2 days ago

      100% they would. Yeah.

      If child pornography was found to be stored on a host’s server by one of their 1000 users, “I didn’t think you guys would care about a platform with less than 1000 monthly users” isn’t going to be a great argument in court.

      • Skavau@piefed.social
        2 days ago

        100% they would. Yeah.

        How would they know?

        If child pornography was found to be stored on a host’s server by one of their 1000 users, “I didn’t think you guys would care about a platform with less than 1000 monthly users” isn’t going to be a great argument in court.

        You’re talking here specifically about child pornography, not just about failing to age-verify users for ‘adult’ content. No server, to my knowledge, allows that.

        • 1dalm@lemmings.world (OP)
          2 days ago

          How would they know?

          Well, if, or really when, a predator is caught by the police, that police department will do a full investigation and find all the places where they were communicating with kids. Sooner or later, one of them will be found to have been using Lemmy. On that day, the host is going to need a good lawyer.

          It’s not enough to “not allow this”. A person who allows anonymous strangers to use their servers to store information in secret is asking for trouble. They need to take much more care than that.

          And I never said that age-verification is the only solution to this problem. >>

          • Skavau@piefed.social
            2 days ago

            It’s not enough to “not allow this”. A person who allows anonymous strangers to use their servers to store information in secret is asking for trouble. They need to take much more care than that.

            What extra care should they take beyond deleting it when they find it? Which they do.

            And I never said that age-verification is the only solution to this problem. >>

            Remember, I originally started this chain by asking you if every single site online should be forced to implement age-ID and you said yes.

            • 1dalm@lemmings.world (OP)
              2 days ago

              Remember, I originally started this chain by asking you if every single site online should be forced to implement age-ID and you said yes.

              Fair. But I really meant that every network should have policies in place, with age verification being one option. Elsewhere in this thread you’ll see that I offer alternative solutions, such as simply keeping everything public and not allowing 1-to-1 messaging.

              • Skavau@piefed.social
                2 days ago

                Everything on the fediverse is publicly viewable (although Piefed now has the capacity for private communities), but banning DMs is pretty unacceptable, really.

                • 1dalm@lemmings.world (OP)
                  2 days ago

                  Not exactly, and not for long. Mastodon, for example, is working on end-to-end encryption in messages. Matrix is also private by design.

                  And again, it’s not that I think end-to-end encrypted one-to-one messaging is bad. But if you are going to offer it then you need to be held responsible for it.

                  • Skavau@piefed.social
                    2 days ago

                    Not exactly, and not for long. Mastodon, for example, is working on end-to-end encryption in messages. Matrix is also private by design.

                    I meant publicly viewable in the sense of being viewable by the wider audience, excluding private messages specifically.

                    And again, it’s not that I think end-to-end encrypted one-to-one messaging is bad. But if you are going to offer it then you need to be held responsible for it.

                    So what do you propose then?