• Wildmimic@anarchist.nexus · 63 points · 3 days ago

    Yeah, but at least this post is interesting; it shows how godawful humanity as a whole is at detecting bots in the wild.

    2 out of 400 bad.

    • socsa@piefed.social · 1 point · 13 hours ago

      Not getting bots banned is honestly not hard. If you just post uncontroversial vanilla shit they don’t care.

    • porous_grey_matter@lemmy.ml · 68 points · 3 days ago

      That assumes that Reddit actually wants to ban bots. But as long as they’re not too obvious, the bots are valuable to them, since they inflate the user count.

      • GreatAlbatross@feddit.uk · 11 points · 2 days ago

        “Bots? No, no, those are active users. They also don’t use adblockers, so they’re better than regular users!”

    • hector@lemmy.today · 5 points · 2 days ago

      Reddit isn’t trying, though. Social media is hooked into big business interests and governments, and there is overlap there. If I can spot influence agents and mechanized trolls, supported by bots, you can bet they could do better with their tools and analytics.

      As we’ve seen for the last ten years, social media companies only take down bots and influence agencies that researchers or others make impossible to ignore, and they’ve since cut those researchers off from the data they were using to that effect. Now only the operations that US-government-aligned groups highlight get removed: alleged Iranian networks and the like, bit players.

      These inauthentic accounts vastly inflate their numbers and make advertising more valuable. Even as the bots make the sites less useful and drive away real users, it’s assumed that users have nowhere else to go, so why push back against governments and big business ratfucking the sites, when those actors can hurt the companies in myriad ways?

      Not until a fediverse reaches critical mass will we see them fight for real people’s use of their sites.

      • Goodman@discuss.tchncs.de · 4 points · 23 hours ago

        How would we defend ourselves from such a bot flood though?

        Let’s say that we start to become competitive user-wise with one of these big tech companies. What is stopping them from destroying the fediverse with bots, by sowing dissent, hate, and slop?

        Perhaps the answer to such a scenario would just be to splinter, defederate and sort out the bot issue with better user registration.

        Happy to hear your thoughts.

        • hector@lemmy.today · 2 points · 23 hours ago

          There are a few options. The best of them is part of a larger reform of how instances, and the general forums they interact on, could be run. Rather than moderators who just decide on violations, bans, etc., we’d have a clear set of rules, with a clear appeal process for bans and the like, culminating in a jury trial of that instance’s members. Maybe a higher court to decide strictly on liability grounds for users who legally endanger the instance even after acquittals.

          Beyond that, instances could have some sort of process, maybe even an election of qualified users, to appoint censors, who would have tools to hunt bots and influence operations; user flags would be forwarded to them and to the moderators. Any enforcement action would go through that appeal process to prevent abuses of power or misapplication of the rules.

          I’d say do it like Rome did: for every elected position (and I’ll get to some others), elect not one person but two. The two highest vote-getters each get the same powers. It worked for them for 500 years.

          There are some other positions we could fill through elections too. Now, who qualifies to vote? We could have threads where reasoned arguments determine it, with votes from qualified people who pass captchas, perhaps. It’s a chicken-or-egg problem: voting online breaks down if influence agents, chatbots, and bots are doing the voting. Agents could cycle through accounts and do the captchas, and LLM chatbots might already be able to complete them, so that might not work.

          How else could we limit voting? Maybe by requiring reasoned arguments for why someone should be on the voting rolls, with users who have their own positive voting record allowed to vote, since bots and chatbots won’t accumulate much karma before being spotted by the censors, moderators, and the like.

          So I got bogged down here, but to summarize: appoint two censors, selected by the community for one-year terms or whatever, who can hunt accounts and charge them for removal or banning, under clear sets of rules, with appeals heard by jury trials of the instance’s users. Secure online trials. Maybe tests for suspected accounts.

          The trickier part is building a roster of real, good-faith users allowed to vote, so that influence agents, bots, and LLMs don’t ratfuck the votes, jury trials, etc. There would be ways; we could even use end-to-end encryption to verify real users person to person, if the person agrees. Just spitballing here.

          And maybe think about other elected positions each term to fulfill other functions of the community, all under the same clear rules, with enforcement appealable to a jury trial of real users.

          Because a censor who is vetted and equipped with some analytics tools, alongside moderators and administrators as they’re able, would be able to hunt down suspected bots and influence agents and have them removed. Not all of them, but a lot. Industry operations work off keywords, for instance: say “glyphosate bad” and an influence agent with bots pops up within half an hour and argues endlessly if you argue back. It’s not subtle. The ones pushing for an Iran forever war as we speak, many of them are not subtle either.

          One more thing to add: a separate form of karma that comes from doing favors for the community and for others, which can be traded like favors and used to qualify people. We could have real-world versions for some social media applications, traded like credits or money, and instance-level versions based not necessarily on votes but on doing jobs for the community: serving successfully as censor, moderator, administrator, or in whatever other function.

          • Goodman@discuss.tchncs.de · 2 points · edited · 21 hours ago

            Thanks for the elaborate write-up, it’s good to engage in discussion about these things. An electoral system seems quite elaborate, but I would love to see it work. It would require quite active participation by the users, and I am not convinced that smaller instances would have the active user base for this. Even so, the idea is quite appealing.

            Vetting good-faith users is indeed one of the difficult problems. We could make better captchas, maybe: for example, taking pictures of household objects in certain orientations. You could also check whether the EXIF data lines up with known cameras or something. Just an idea.

            I like your idea of social credit/karma, but I have some questions. So imagine that an instance can hand out these social points and so can another instance. How would we equate the value of one system to another? What is stopping my instance from minting a bunch of social points and giving them to me to elevate my “trustworthiness” in your instance?

            I also had an idea for a trust system based on belief systems. As per Descartes, I know that I am (real), and I have met some people IRL who are also real, so whenever they message me, I have a high degree of trust that they are not bots. I would also feel relatively inclined to trust the friends of my friends, with that trust decaying as we move down the friend chain. You could make a sort of “trust tree” showing how many trust steps you are removed from IRL validation. You could even weight the scores and downgrade someone’s trust in the tree if one of their contacts turned out to be a bot or something like that.
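            The decaying-trust idea could be sketched as a simple graph traversal. A minimal sketch in Python (the contact graph, the names, and the 0.5 decay factor are all made up for illustration):

```python
from collections import deque

def trust_scores(contacts, roots, decay=0.5):
    """Propagate trust outward from people verified in person.

    `roots` are IRL-verified users (trust 1.0); every hop down the
    friend chain multiplies trust by `decay`. Each user keeps the
    highest trust any path assigns them.
    """
    scores = {person: 1.0 for person in roots}
    queue = deque(roots)
    while queue:
        person = queue.popleft()
        for friend in contacts.get(person, []):
            candidate = scores[person] * decay
            if candidate > scores.get(friend, 0.0):
                scores[friend] = candidate
                queue.append(friend)  # re-expand: a better path was found
    return scores

# Hypothetical contact graph: alice was met IRL; bob and carol are
# only known indirectly, so their trust decays with each step.
contacts = {"alice": ["bob"], "bob": ["carol"]}
print(trust_scores(contacts, ["alice"]))
# {'alice': 1.0, 'bob': 0.5, 'carol': 0.25}
```

            Downgrading someone whose contact turns out to be a bot would then just mean removing the bot from the graph (or attaching a penalty to it) and re-running the propagation.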

            Thanks for sharing your thoughts. Genuine discussion and meaningful interactions make this a better place.

            • hector@lemmy.today · 2 points · 20 hours ago

              I think your ideas are good. Vetting definitely needs some workshopping, as you say. I like the trust-based karma; I think several kinds of karma could be useful, including operational karma as below, but trust karma for sure as well.

              As to the karma, jury trials, censors, and elected officials: unions of instances could pool together for jury pools and share censors, which I think is a real key evolution for federated social media, given the changes coming from LLMs and chatbots. The US government alone is building armies of them, worked by contractors no doubt.

              Having an internet that isn’t dead, a place to talk, a place with a fair set of rules enforced in a way you can trust to get a fair hearing if challenged, would at some point help federated social media capture the exodus from Silicon Valley social media as the administration and other countries take illiberal turns and subjugate it.

              What is needed more than anything is a new social media, federated and interoperable with existing fediverses like Lemmy, where innumerable instances can organize on a general forum (or several) around issues they agree on, and work publicly or privately as they see fit toward common goals. Be it groups agreeing to do or not do something if a politician does or doesn’t do something; finding and grooming political candidates; crowdsourcing information on companies that cheat people; or setting up stings, say against companies that wage-theft their employees: send workers in to record them doing it, crowdsource the investigation, then use the group to push for consequences, all with the organization kept secret. That would be one type of karma: karma from actually doing something, for contributing to a mission.

              That karma could then be used like currency, to get people to help with your pet project, for instance. The groups could also do investigations like Bellingcat. Take the Epstein victims who want justice and are denied it: we could organize non-public groups to coordinate with them, with proper operational security, and we’ve plenty of people on here to help do that properly, earning, say, Operational Karma for it. The group could run down leads, investigate, commission hackers (who earn operational karma) to find documents or what have you, find witnesses and take their statements, follow up their leads, organize a timeline, etc. Then we could try to use that information, and the people who put it to use and get it traction earn their own operational karma.

              Those are just two small examples, but this could be a big force if set up decentralized, so that innumerable instances can pop up whack-a-mole on a new general forum endlessly if shut down, with opsec for when it’s needed, for when it threatens monied interests and attracts the ire of authorities. It would come in handy for the election too: groups to crowdsource what we know and what is going on, a place to securely share private or public information we’ve come into possession of with trusted communities, and to decide what and when to share with journalists. To be able to get articles into publication when it’s needed, social media posts, etc.

              For instance, private groups set up to catch Republicans cheating at the voting-machine level. They got the inner workings, the hard drives of voting machines, after 2020 from the website and operation Lindell set up. They illegally got their hands on just about all the swing-state voting machine software, and probably all the red states, maybe all states by now: AZ, MI, GA, WI, PA, I think. We could set up ways to catch them at it. There are a lot of possibilities.

              But things removed from politics too, all-encompassing: crowdsourcing new and better ways of doing things, new inventions. People could get the help and resources to run down their ideas if they can entice others to join in; they could allocate credit to each other and get a cut of any future proceeds, with the site taking a cut. A venture could be built and sold as a benefit corporation: not maximizing profit, but providing a needed product or service at a good cost in an area where the private sector hasn’t met society’s needs, while still giving a reasonable rate of return and equity to those who invested in it. It could work better than pure profit-seeking, and fulfill the role government refuses to regulate business into providing, or to provide itself.

              Then consumers’ unions: to pressure companies into making better products, to make demands on manufacturers, and to boycott products if companies don’t change their behavior.

              Anyway, so there is the political angle of public and private groups federated to cooperate on what they agree on; an investigative angle with operational karma; an investors’/inventors’ union to build businesses that fulfill roles the private sector and government refuse to, by operating on more than just the profit motive; and consumers’ unions.

              If done with the right system, one that has people’s trust, is resistant to government and big-business fuckery, nimble in reincarnation should any part get shut down, and built with operational security sooner rather than later, it could remake society. The best time to do this was 20 years ago; the next best time is now.

              • Goodman@discuss.tchncs.de · 1 point · 13 hours ago

                Haha, you did not answer my questions, but you are clearly passionate about this vision, and I like that. As I understand it, you describe a sort of moral credit that has value within the community that hands it out. So I imagine a board would mint these tokens? What would these tokens buy you?

                We’ll grant that most in the community will be committed to the cause, so they will want to participate. But other than respect, why and what should I grant or sell you (you having some credit) for helping the cause? Couldn’t I just grant my effort to the cause directly? I get the renown aspect, but we also have commemorative mission patches, pins, and stickers for that.

                So in short, I am not questioning the renown/trust mechanism of a moral credit system, but I am questioning the monetary function.

                Don’t take this as a rejection of the idea; the rebirth of the internet has to start somewhere, and that might be here, with visionaries.

    • Trainguyrom@reddthat.com · 10 points · 3 days ago

      Could simply be that only 2 have been fully banned by Reddit, but most have tons of subreddit bans and/or shadowbans. On the other hand, Reddit is such a cesspit these days I wouldn’t be too shocked if they just exist on Reddit, shitposting slop.