• arcine@jlai.lu
    link
    fedilink
    arrow-up
    31
    ·
    21 hours ago

    AI Bros were really like “Reddit is one of our very few sources of usable data. What if we poisoned it too? 🤪”

    Way to go, guys! Have fun with your degenerate data sets, and the resulting consanguine models that are 100% unusable as a result 😘

  • luciferofastora@feddit.org
    link
    fedilink
    arrow-up
    14
    ·
    1 day ago

    I sometimes wonder how prevalent bots are on Lemmy. On one hand, the barrier to entry might be lower and the effectiveness of bans harder to gauge. On the other, we’re a smaller, less attractive target.

    Either way, the readiness to accuse dissenters of being bots or paid actors is a symptom of the general toxicity and slop spilling all over the internet these days. A (comparatively) small number of people can erode fundamental assumptions and trust. Ten years ago, I would’ve been repulsed by the idea of dehumanising conversational opponents that way (which may have been just me being more naive), but today I can’t really fault anyone.

    In terms of risk assessment (value÷effort), I’m inclined to think something with the reach of Ex-Twitter or reddit would be a more lucrative target, and most people here actually are people—people I disagree with, maybe, but still a human on the other side of the screen. Given the niche appeal, the audience here may overall be more eccentric and argumentative, so it’s easy to mistake genuine users for propaganda bots instead of just people with strong convictions.

    But I hate that the question is a relevant one in the first place.

    • Goodman@discuss.tchncs.de
      link
      fedilink
      arrow-up
      8
      ·
      22 hours ago

      We are the web. There is no web without the we.

      It is ultimately humans who add value to the internet. We can make decisions, take action, and have bank accounts; bots, for the most part, still can’t. If we keep growing, there will come a time when swaying opinions, pushing advertisements or driving dissent will reach that value/effort threshold, especially with the effort term shrinking every day.

      I think that we are genuinely witnessing the end of the internet as we know it, and if we want meaningful online contact to persist after this death, we should come up with ways for communities to weather the storm.

      I don’t know what the solution is, but I want to talk and think about it with others that care.

      On the individual level we can maybe fortify against the reasons that might make someone want to extract that value.

      • Being a principled conscious consumer makes you a less likely target for advertisement
      • Avoid ragebait and clickbait, and develop a good epistemic bullshit filter along with media literacy; this makes it harder to lie to you or to provoke outrage.
      • Unfortunately, be selective with your trust. How old is the user account? Are the posting hours normal? Does the user come across as a genuine human being who values discussion and meaningful online contact?
      • Be authentic and genuine. I don’t know how else to signify that I am real (shoutout to the þorn users)

      I would love to hear what others think.

      • luciferofastora@feddit.org
        link
        fedilink
        arrow-up
        3
        ·
        17 hours ago

        Are the posting hours normal?

        Hey, no judging my ~~sleep schedule~~ arbitrary times when biological necessity triumphs over all the fun things I could do while awake!


        Serious reply:

        On the individual level we can maybe fortify against the reasons that might make someone want to extract that value.

        On the collective level, we should do something about the mechanisms that incentivise that malicious extraction of value in the first place, but that’s a whole different beast…

        Being a principled conscious consumer makes you a less likely target for advertisement

        Agreed, though we should also stress that “less likely” or “unlikely” doesn’t mean “never”, and that we’re not immune to being influenced by ads. That’s a point I’ve seen people in my social circles overlook, or blatantly ignore when pointed out, hence my emphasising it.

        media literacy

        This is probably one of the most critical deficits in general. Even with the best intentions, people make mistakes, and it’s critical to be aware of that and able to compensate for it.

        Unfortunately, be selective with your trust.

        Same as media literacy, I feel like this is a point that would apply even in a world where we’re all humans arguing in good faith: others may have a different, perhaps limited or flawed perspective, or just make mistakes, just as you yourself may overlook things or genuinely have blind spots, so we should consider whose voice we give weight to in any given matter.

        On the flipside, we may need to accept that our own voice might not be the ideal one to comment on something. And finally, we need to separate those issues of perspective and error from our worth as persons, so that admitting error isn’t a shame, but a mark of wisdom.

        Be authentic and genuine

        That’s the arms race we’re currently running, isn’t it? Developers of bots put effort into making them appear authentic—I overheard someone mention that their newest model included an extra filter to “screw up” some things people have come to consider indicators of machine-generated texts, such as these dashes that are mostly used in particular kinds of formal writing and look out of place elsewhere.

        If at all, people tend to just use a hyphen instead - it’s usually more convenient to type (unless you’ve got a typographic compulsion to go that extra step because the hyphen just looks wrong). And so the dev in question made their model use fewer dashes and replace the rest with hyphens to make the text look more authentic.

        I wanted to spew when I heard that, but that’s beside the point.

        So basically, we’d have to constantly be running away from the bots’ writing style to set ourselves apart, even as they constantly chase our style to blend in. Our best weapon would be the creative intuition to find a way of phrasing things other humans will understand but bots won’t (immediately) be able to imitate.

        Being creative on demand isn’t exactly a viable solution, at least not individually, and coordinating on the internet is like herding lolcats, but maybe we can work together to carve out some space for humanity.

        • Goodman@discuss.tchncs.de
          link
          fedilink
          arrow-up
          2
          ·
          11 hours ago

          Thanks for your comments. I agree with everything you said, especially that these traits are desirable for broader life IRL. In a way, web culture is a reflection of our own cultures, just more mixed, extreme, amplified, and with a good dose of parasociality. I desperately want people to break free of their cycles. Think, talk, discuss, empathize and form communities; use your free will for good, dammit. These are the real antidotes that will enable the cultural shift that lets us reject the smothering of the human spirit in the current way of life.

          Anyway, it is a terrible thing that there is an arms race to be authentic. This really ought to be solved on the user-registration side. And yes, saying something profound with hidden meaning through creative intuition is great; I write poems sometimes. But it’s not the solution to authenticity online.

    • alzjim@lemmy.world
      link
      fedilink
      English
      arrow-up
      17
      ·
      edit-2
      24 hours ago

      Clawdbot is an AI that takes full control of the PC: it can open browsers, read pages, send emails, delete files, operate the CLI, and install programs; anything you can do on a PC, it can do. A farm is a group of PCs/servers.

      So this is a group of AI-run computers being used for content manipulation on Reddit.

        • Crashumbc@lemmy.world
          link
          fedilink
          English
          arrow-up
          17
          ·
          21 hours ago

          Money, Reddit makes money by selling advertising.

          They don’t care if it’s quality or not, more comments/posts equal more money. Capitalism 101.

          • [object Object]@lemmy.world
            link
            fedilink
            arrow-up
            3
            ·
            20 hours ago

            Reddit’s advertising clients would see that the clickthrough rate is shit. Which dictates the price that they are willing to pay for the advertisement.

            • Mubelotix@jlai.lu
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              17 hours ago

              Have you ever clicked on a Reddit ad? Many real users have shit click stats too, and if those were used to detect bots, bots would simply start clicking ads, Reddit would lose advertisers’ trust, and that would damage them even more.

              • [object Object]@lemmy.world
                link
                fedilink
                arrow-up
                1
                ·
                16 hours ago

                This has nothing to do with users’ stats. This has to do with how advertisers judge whether it’s worth placing ads on a particular platform.

          • Bluewing@lemmy.world
            link
            fedilink
            arrow-up
            3
            ·
            21 hours ago

            They don’t care to a certain extent. But there is a threshold that if you get too many bots, companies using your platform to advertise will notice a fall in sales. And if sales drop on your platform, the money stops. Because bots don’t buy things, real humans do.

            What is the threshold? I don’t know, I didn’t stay at a Holiday Inn last night.

  • Bytemeister@lemmy.world
    link
    fedilink
    English
    arrow-up
    20
    ·
    edit-2
    2 days ago

    This guy just openly admitted to shitting in the global punch bowl.

    It would really be a shame if ~~someone~~ everyone sent an army of bots to antagonize him at every waking moment of the day.

  • LiveLM@lemmy.zip
    link
    fedilink
    English
    arrow-up
    43
    ·
    edit-2
    2 days ago

    Everyone is cooked, you are all cooked

    Thanks for making the problem worse, fuck you too man.

    • Wildmimic@anarchist.nexus
      link
      fedilink
      English
      arrow-up
      63
      ·
      3 days ago

      Yeah, but at least this post is interesting; it shows how godawful humanity as a whole is at detecting bots in the wild.

      2 out of 400 is bad.

      • socsa@piefed.social
        link
        fedilink
        English
        arrow-up
        1
        ·
        11 hours ago

        Not getting bots banned is honestly not hard. If you just post uncontroversial vanilla shit they don’t care.

      • porous_grey_matter@lemmy.ml
        link
        fedilink
        arrow-up
        68
        ·
        3 days ago

        That assumes that Reddit actually wants to ban bots. But as long as they’re not too obvious, the bots are valuable to them, since they inflate the user count.

        • GreatAlbatross@feddit.uk
          link
          fedilink
          English
          arrow-up
          11
          ·
          2 days ago

          “Bots? No, no, those are active users. They also don’t use adblockers, so they’re better than regular users!”

      • hector@lemmy.today
        link
        fedilink
        arrow-up
        5
        ·
        2 days ago

        Reddit isn’t trying, though. Social media is in the pocket of big business interests and governments, and there is overlap there. If I can spot influence agents and mechanized trolls supported by bots, you can bet they could do better with their tools and analytics.

        As we’ve seen for the last ten years, social media only takes down the bots and influence agencies that researchers or others make impossible to ignore, and they’ve since cut those researchers off from the information they were using to that effect. Now only the agencies that US-government-aligned groups highlight get removed: the alleged Iranian ones and the like, bit players.

        These inauthentic accounts vastly inflate their numbers and make advertising more valuable. Even as the bots make the sites less useful and drive away real users, it’s assumed that users have nowhere else to go, so why push back on the governments and big businesses ratfucking the sites, when those actors can hurt them in myriad ways?

        Not until we build a fediverse that reaches critical mass will we see them fight for real people’s use of their sites.

        • Goodman@discuss.tchncs.de
          link
          fedilink
          arrow-up
          4
          ·
          22 hours ago

          How would we defend ourselves from such a bot flood though?

          Let’s say that we start to become competitive with one of these big tech companies user-wise. What is stopping them from destroying the fediverse with bots, sowing dissent, hate and slop?

          Perhaps the answer to such a scenario would just be to splinter, defederate and sort out the bot issue with better user registration.

          Happy to hear your thoughts.

          • hector@lemmy.today
            link
            fedilink
            arrow-up
            2
            ·
            21 hours ago

            There are a few options. The best of them is part of a larger reform of how instances, and the general forums they interact on, could be run. Rather than moderators who just decide on violations, bans, etc., we could have a clear set of rules, with a clear set of appeal processes for bans and the like, culminating in a jury trial of members of that instance. Maybe also a higher court to decide strictly on liability grounds for users who endanger legality even when acquitted.

            Beyond that, instances could have some sort of process, maybe even election of qualified users, to appoint censors who would have tools to hunt bots and influence operations; flags raised against users would be forwarded to them and to the moderators. Any enforcement action would go through that appeal process to prevent abuses of power or misapplication of rules.

            I’d say, do it like Rome did: for every elected position (and I’ll get to some others), elect not one but two. The two highest vote-getters each get elected with the same powers. It worked for them for 500 years.

            There are some other positions we could fill through elections too. Now, who qualifies to vote? We could have threads where votes on reasoned arguments determine it: votes from qualified people who pass captchas, perhaps. It’s a chicken-and-egg problem, though: if the problem is that influence agents, chatbots and bots are voting, online voting inherits it. Agents could cycle through accounts and do captchas, and LLM chatbots might already be able to complete them, so that might not work.

            How else could we limit voting? Maybe just by requiring reasoned arguments for why someone should be on the voting lists, and letting users with their own positive voting record vote, since bots and chatbots won’t accumulate much karma without being spotted by the censors, moderators and the like.

            So I got bogged down here, but to summarize: appoint two censors, selected by the community for one-year terms or whatever, who can hunt accounts and charge them for removal/banning, under clear sets of rules whose enforcement can be appealed to jury trials of users of the instance. Secure online trials. Maybe tests for suspected accounts.

            The trickier part is making a system where real good-faith users can vote, so that influence agents, bots and LLMs don’t ratfuck the votes, jury trials, etc. There would be ways; we could even use end-to-end encryption to verify real users person to person, if the person agrees. Just spitballing here.

            But we could also think about other elected positions, chosen every term, to fulfill other functions of the community, with the same clear rules and enforcement appealable to a jury trial of real users.

            Because a censor who is vetted and possesses some analytics tools, alongside moderators and administrators as they are able, would be able to hunt down suspected bots and influence agents and have them removed. Not all, but a lot of them. Industry operations that work off keywords, for instance: say “glyphosate bad” and an influence agent with bots pops up within half an hour and argues endlessly if you argue back; it’s not subtle. The ones pushing for an Iran forever war as we speak are also, many of them, not subtle.

            One more thing to add: a separate form of karma that results from doing favors for the community and for others, that can be traded like favors and used to qualify people. We could have real-world versions for some social media applications, traded like credits or money, and instance-level versions, not necessarily based on votes but earned by doing jobs for the community: acting successfully as censor, moderator, administrator, or whatever other function.

            • Goodman@discuss.tchncs.de
              link
              fedilink
              arrow-up
              2
              ·
              edit-2
              19 hours ago

              Thanks for the elaborate write-up; it’s good to engage in discussion about these things. An electoral system seems quite involved, but I would love to see it work. It would require quite active participation by the users, and I am not convinced that smaller instances would have the active user base for this. Even so, the idea is quite appealing.

              Vetting good-faith users is indeed one of the difficult problems. We could make better captchas, maybe; for example, taking pictures of household objects in certain orientations. You could also check the EXIF tags to see if they line up with known cameras or something. Just an idea.

              I like your idea of social credit/karma, but I have some questions. So imagine that an instance can hand out these social points and so can another instance. How would we equate the value of one system to another? What is stopping my instance from minting a bunch of social points and giving them to me to elevate my “trustworthiness” in your instance?

              I also had an idea for a trust system based on belief systems. As per Descartes, I know that I am (real), and I have met some people IRL who are also real, so whenever they message me, I have a high degree of trust that they are not bots. I would also feel relatively inclined to trust the friends of my friends, with that trust decaying as we move down the friend chain. You could make a sort of “trust tree” where you would be able to see how many trust steps you are removed from IRL validation. You could even weight the scores and downgrade someone’s trust score in the tree if one of their contacts turned out to be a bot, or something like that.
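
              That decaying-trust walk can be made concrete with a small sketch (the names, the single IRL-verified root, and the decay factor of 0.5 are all invented for illustration):

```python
from collections import deque

def trust_scores(contacts, roots, decay=0.5):
    """Breadth-first walk of the "trust tree": IRL-verified users (roots)
    get trust 1.0, and each step down the friend chain multiplies trust
    by a decay factor, so distant strangers score near zero."""
    scores = {root: 1.0 for root in roots}
    queue = deque(roots)
    while queue:
        user = queue.popleft()
        for friend in contacts.get(user, []):
            score = scores[user] * decay
            # Keep the best (shortest-chain) score seen for each user.
            if score > scores.get(friend, 0.0):
                scores[friend] = score
                queue.append(friend)
    return scores

# Hypothetical network: I verified alice in person; bob is her friend, etc.
contacts = {"me": ["alice"], "alice": ["bob"], "bob": ["mallory"]}
print(trust_scores(contacts, roots=["me"]))
# → {'me': 1.0, 'alice': 0.5, 'bob': 0.25, 'mallory': 0.125}
```

              The downgrade-on-bad-contact rule you describe would amount to pruning the offending edges and re-running the walk.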

              Thanks for sharing your thoughts. Genuine discussion and meaningful interactions make this a better place.

              • hector@lemmy.today
                link
                fedilink
                arrow-up
                2
                ·
                18 hours ago

                I think your ideas are good. Vetting definitely needs some workshopping, as you say. I like the trust-based karma; I think several kinds of karma could be useful, including operational karma as below, but trust karma for sure as well.

                As to the karma, jury trials, censors and elected officials: it could be unions of instances pooling together for jury pools and sharing censors, which I think is a real key evolution of federated social media, given the changes coming from LLMs and chatbots; the US government alone is making armies of them, worked by contractors no doubt.

                Having an internet that isn’t dead, a place to talk, a place with a fair set of rules enforced in a way that guarantees a fair hearing if challenged, would at some point help federated social media capture the exodus from Silicon Valley social media as the administration, and other countries, take illiberal turns and subjugate it.

                What is needed more than anything is a new social media, federated and interoperable with existing fediverses like Lemmy, whereby innumerable instances can organize on a general forum (or forums) around issues they agree on, and work publicly and privately as they see fit towards common goals. Be it groups agreeing to do or not do something if a politician does or doesn’t do something; finding and grooming political candidates; crowdsourcing information on companies that cheat people; or setting up stings, for instance against companies that wage-theft their employees: send workers in to record them doing it, crowdsource the investigation, and then use the group to push for consequences, with the organizing kept secret. That would be one type of karma: karma from actually doing something, for contributing to a mission.

                That karma could then be used like currency, to get people to help you with your pet project, for instance. The groups could also do investigations like Bellingcat. Take the Epstein victims who want justice and are denied it: we could organize non-public groups to coordinate with them, with proper operational security, and we have plenty of people on here to help do that properly, earning, say, Operational Karma for it. The group could run down leads, investigate, commission hackers (who would earn operational karma finding documents and whatnot), find witnesses and get their statements, organize a timeline, etc. Then we could try to use that information, and the people who put it to effective use would earn their own operational karma for it.

                Those are just two small examples, but this could be a big force if set up decentralized, where innumerable instances can pop back up whack-a-mole style on a new general forum endlessly if shut down, with opsec for when it’s needed: for when it does threaten monied interests and attract the ire of authorities. It would come in handy for elections too: groups to crowdsource what we know and what is going on, a place to securely share private or public information we’ve come into possession of with trusted communities, to decide what and when to share with journalists, and to work to get articles, social media posts, etc. into publication when it’s needed.

                For instance, private groups set up to catch Republicans cheating at the voting-machine level. They got the inner workings, the hard drives of all the voting machines, after 2020 from Lindell’s website and the operation he set up. They illegally got their hands on just about all the swing-state voting machine software, and probably all the red states, maybe all states by now: AZ, MI, GA, WI, PA, I think. We could set up ways to catch them at it. There are a lot of possibilities.

                But things removed from politics too, all-encompassing: crowdsourcing new and better ways of doing things, new inventions. People could get the help and resources to run down their ideas if they can entice others to join in; they could allocate credit to each other and get a cut of any future proceeds, with the site taking a cut too. It could be made and sold as a benefit corporation: not maximizing profit, but providing a needed product or service at a good cost in an area where the private sector hasn’t met society’s needs, while providing a reasonable rate of return and equity for those who invested in it. It could work better than pure profit-seeking, and fulfill the role government refuses to regulate business into providing, or to provide itself.

                Then consumers unions: to pressure companies to make better products, make demands on manufacturers, and withhold purchases if they don’t change their behavior.

                Anyway: there is the political angle, of public and private groups federated to cooperate on what they agree on; an investigative angle with operational karma; an investors’/inventors’ type of union providing businesses that fulfill roles the private sector and government refuse to, by operating on more than the profit motive; and consumers unions.

                If done with the right system, one that has the trust of people and is resistant to government and big-business fuckery, nimble in reincarnation should any part get shut down, with operational security sooner rather than later, it could remake society. The best time to do this was 20 years ago, but the next best time is now.

                • Goodman@discuss.tchncs.de
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  11 hours ago

                  Haha, you did not answer my questions, but you are clearly passionate about this vision and I like that. As I understand it, you describe a sort of moral credit that has value within the community that hands it out. So I imagine that a board would mint these tokens? What would these tokens buy you?

                  We will grant that most in the community will be committed to the cause, so they will want to participate. But other than respect, why and what should I grant/sell you (you having some credit) for helping the cause? Couldn’t I just grant my effort to the cause directly? I get the renown aspect, but we also have commemorative mission patches, pins and stickers for that.

                  So in short, I am not questioning the renown/trust mechanism of a moral credit system, but I am questioning the monetary function.

                  Don’t take this as a rejection of the idea; the rebirth of the internet has to start somewhere, and that might be here, by visionaries.

      • Trainguyrom@reddthat.com
        link
        fedilink
        English
        arrow-up
        10
        ·
        3 days ago

        Could simply be that only 2 have been fully banned by Reddit but most have tons of subreddit bans and/or shadowbans. On the other hand, Reddit is such a cesspit these days I wouldn’t be too shocked if they just exist on Reddit shitposting slop

  • chicken@lemmy.dbzer0.com
    link
    fedilink
    arrow-up
    61
    ·
    edit-2
    2 days ago

    Reddit has shown through its actions that it’s more interested in banning real users than bots, and wants to protect bots from being identified and called out by users, so it’s not that surprising they’ve been able to do this.

  • NostraDavid@programming.dev
    link
    fedilink
    arrow-up
    10
    ·
    2 days ago

    “Reddit is just you, me, and /u/Karmanaut”

    I never thought I’d see the day when this adage would become true again, let alone in this way 😂

  • SGforce@lemmy.ca
    link
    fedilink
    arrow-up
    92
    ·
    3 days ago

    Such an inefficient way to astroturf. Just copy old comments and markov-chain basic shit. Reddit has been mostly bots for years and years.
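
    For a sense of how cheap that approach is: a word-level Markov chain that stitches together scraped comments is only a few lines (toy sketch; the three-comment corpus is invented):

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for comment in corpus:
        words = comment.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate(chain, start, length=10):
    """Emit up to `length` extra words by repeatedly sampling a follower."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Toy corpus standing in for scraped old comments.
corpus = ["this is fine", "this is the way", "the way is shut"]
chain = build_chain(corpus)
print(generate(chain, "this"))
```

    The output reads as plausible word salad precisely because every adjacent word pair occurred in some real comment.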

      • Tavi@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        11
        ·
        2 days ago

        Too hard. Have an LLM summarize each comment in an old comment chain so that it obliterates any meaning and buries any real engagement. (I have no evidence, but I think Reddit is scraping external sites and turning posts into comment chains.)

  • OwOarchist@pawb.social
    link
    fedilink
    English
    arrow-up
    47
    ·
    2 days ago

    And yet I get constantly shadowbanned there just for using a VPN…

    I think reddit likes bots more than it likes real users.

    • x00z@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      2 days ago

      Bots nowadays use residential proxy networks. When people use a free VPN or other shady software, they might become part of such a network, and bad actors can route traffic through their devices.

      • Mubelotix@jlai.lu
        link
        fedilink
        arrow-up
        1
        ·
        edit-2
        17 hours ago

        It’s quite hard to buy residential proxies, though. Almost every company selling them cheats and lies about the product, and the IPs they have are absolute garbage, since many people have already ruined their reputation.

    • dorkynsnacks@piefed.social
      link
      fedilink
      English
      arrow-up
      17
      ·
      3 days ago

      So far there’s no money to be made here. Influence and reach is also limited.

      If that changes at some point, it might be the end of the Fediverse. It’s far too open to bots. A spammer can not only easily create new accounts on instances, they can run their own instances.

      • pkjqpg1h@lemmy.zip
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 day ago

        And if that instance starts spamming, I’ll ban it in 30 seconds. You should learn more about the Fediverse.

      • zabadoh@ani.social
        link
        fedilink
        arrow-up
        9
        ·
        3 days ago

        Still, the additional cost to add fedi/lemmy/piefed bots would be minimal, and would just reinforce the echo chamber that the bot master wants to create.

    • Bristlecone@lemmy.world
      link
      fedilink
      arrow-up
      10
      ·
      2 days ago

      That’s true to a certain extent, but I think the fediverse isn’t all that attractive to these types of people. Additionally I think we are way better prepared to handle mass bot bans and detection since we aren’t as whorish here in the fediverse

      • rbos@lemmy.ca
        link
        fedilink
        English
        arrow-up
        7
        ·
        2 days ago

        The ratio of human admins to users is better too, I think that will work in our favour.

  • aesthelete@lemmy.world
    link
    fedilink
    arrow-up
    47
    ·
    edit-2
    2 days ago

    The days of having arguments with Internet strangers and knowing they aren’t a bot are officially over. It’s hard to tell exactly when the period ended, but it’s definitely done now.

    • pkjqpg1h@lemmy.zip
      link
      fedilink
      English
      arrow-up
      3
      ·
      1 day ago

      No bro, LLMs are not that smart. Maybe some low-IQ Reddit/TikTok people can’t see the difference, but we do.

    • demizerone@lemmy.world
      link
      fedilink
      arrow-up
      15
      ·
      2 days ago

      I did that only twice and it never did it again. Arguing with people on the internet is pointless to begin with.

    • sheogorath@lemmy.world
      link
      fedilink
      arrow-up
      6
      ·
      2 days ago

      Yeah, what I do right now is just join Discord servers and argue with people on voice chat. YMMV though; I accidentally made some lifelong friends this way.

    • hector@lemmy.today
      link
      fedilink
      arrow-up
      2
      ·
      2 days ago

      Plus, chatbots are getting more sophisticated now, and government-sponsored influence operations have the newest-generation chatbots too. Israel is of course the main one, but other subjects come up as well, like hyping the US military after the Venezuela operation, which was in fact a military coup: the US made a deal with their generals to stand down and give up a few loyalist units and the presidente, in exchange for the military becoming the de facto ruler, with the civilian government as puppets. Helicopters are vulnerable to weapons systems the army couldn’t have taken out reliably without an agreement to stand down, least of all in one of the most protected places in the country when mobilized.

      But after that operation, when they were pretending the US did it under the nose of the military rather than in cooperation with it, while threatening Cuba and Colombia, Canada and Denmark, Iran, Yemen, et al., their chatbots and influence agents were running wild: entire divisions of mechanized trolls and chatbots. It was not subtle. Argue with them too much and they will mass-flag you, and Reddit will side with them, not you.

      This despite us being the ones who drive usage, while the bots and agents make the site less enjoyable, less useful, and less used. They don’t care; they can’t see past the next set of financial statements. Bots inflate their numbers, and the government and powerful interests can jam them up if they don’t lick their boots and pretend to believe them, ban the users they’re told to, etc.

      They figure we have nowhere else to go, so they don’t have to cater to us anyway. Let’s make them wrong about that.