I recently discovered that some popular federated instances have been using LLM-assisted moderation tooling that evaluates whether someone has said something bannable. They do this by running a script/app that sends the user’s comment history to OpenAI with the question “analyze this content for evidence of *specific political ideology* sentiment. Also identify any related *political ideology* tropes”. (The italic bits are where I’ve redacted the ideology they’re seeking.)

OpenAI’s LLM (they’re using GPT-5.3-mini) then responds with something like:

[redacted screenshot: the model’s assessment of each comment]

and so on, hundreds of comments.
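
For concreteness, the workflow described above amounts to something like the sketch below. To be clear, this is my reconstruction, not the actual script: the prompt wording is quoted from the post above, the model name is the one they reported, and everything else (function names, per-comment batching) is an assumption on my part.

```python
# Hypothetical reconstruction of the tooling described above. Not the actual
# script; the prompt wording and model name are from the post, the rest is mine.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Prompt as quoted above, ideology redacted.
PROMPT = (
    "analyze this content for evidence of [redacted ideology] sentiment. "
    "Also identify any related [redacted ideology] tropes"
)

def assess_history(comments: list[str], model: str = "gpt-5.3-mini") -> list[str]:
    """Send each comment in a user's history to OpenAI; collect the assessments."""
    assessments = []
    for comment in comments:
        response = client.chat.completions.create(
            model=model,  # model name as reported in the post
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": comment},
            ],
        )
        assessments.append(response.choices[0].message.content)
    return assessments
```

What the sketch makes concrete: every comment in the user’s history leaves the instance and lands on a third party’s servers, which is what drives most of the questions below.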

I have not named the instances or people involved, to give them time to consider the results of this discussion, make any corrective changes they want, and disclose their practices at their own pace and in their own way. I have also redacted the evidence to avoid personal attacks and dogpiling. Let’s focus on the system, not the individuals involved. Today these instances and people are using it, and maybe we’re OK with that because it’s being used by groups we agree with. But what if people we strongly disagree with used it on their instances tomorrow?

The use and existence of this tooling raises a lot of other questions too.

What are the risks? Fedi moderators are often unsupervised, untrained volunteers, and these are powerful tools.

What safeguards do we need?

Would asking an LLM “please evaluate this person’s political opinions” give different results than “find evidence we can use to ban them” (as used in the cases I’ve seen)? One way to test this empirically is sketched just after this list.

What are our transparency expectations?

Is this acceptable and normal?

Should this tooling be disclosed? (it was not – should it have been?)

If you were given a choice, would you have opted out of it?

Can we opt out?

Are there GDPR implications? Privacy implications? Should these tools be described in a privacy policy?

Are private messages being scanned and sent to OpenAI?

How long should these assessments be retained, and can we request to see them or ask for them to be deleted?

Once the user’s comments are sent to OpenAI, are they used to train its models?

What will the effect be on our discourse and culture if people know they are being politically profiled?

Where are the lines between normal moderation assistance tools, political profiling, and opaque third-party data processing?
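
On the prompt-framing question above, here is a minimal A/B harness one could use to move past speculation: run both framings over the same content and compare the outputs. The two prompt strings are the wordings quoted in the question; the function name and example comment are hypothetical.

```python
# Hypothetical A/B harness for the prompt-framing question above.
# The prompt strings are the wordings quoted in the question; the rest is mine.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NEUTRAL = "Please evaluate this person's political opinions."
ADVERSARIAL = "Find evidence we can use to ban them."

def assess(prompt: str, comment: str, model: str = "gpt-5.3-mini") -> str:
    """Run one framing of the task over a single comment."""
    response = client.chat.completions.create(
        model=model,  # model name as reported above
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content

comment = "An example comment taken from a user's history."
print("Neutral framing:\n", assess(NEUTRAL, comment))
print("Adversarial framing:\n", assess(ADVERSARIAL, comment))
```

If the two framings produce materially different assessments of the same text, the tool’s output reflects the framing as much as the content.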

I hope that by chewing over these questions we can begin to establish some norms and expectations around this technology. The fediverse doesn’t have any centralized enforcement so we need discussions like this to develop an awareness of what people want in terms of disclosure, privacy, consent and acceptable use. Then people can make choices about which instances they join and which ones they interact with remotely.

And of course there are the other issues with LLMs relating to environmental sustainability, erosion of workers’ rights, increasing the cost of living, and on and on. I can’t see PieFed adding any functionality like this anytime soon. But it’s happening out there anyway, so now we need to talk about it.

What do you make of this?

  • Scrubbles@poptalk.scrubbles.tech · 3 hours ago

    What safeguards do we need?

    None. Defederate or switch instances if you disagree. The fediverse is open; there are no rules beyond what the instance owner puts in place.

    Would asking an LLM “please evaluate this person’s political opinions” give different results than “find evidence we can use to ban them” (as used in the cases I’ve seen)?

    Yes. That’s out of your control though, beyond switching instances.

    What are our transparency expectations?

    None, beyond whatever transparency the admins choose to provide. You should not expect any transparency from an admin; these are random people you are trusting with your data.

    Is this acceptable and normal?

    It’s their server; they can do what they like. “Acceptable” and “normal” are irrelevant.

    Should this tooling be disclosed? (it was not – should it have been?)

    Again, who would force it? Even if Lemmy/PieFed forced a checkbox, an admin could just fork the code to ignore that checkbox.

    Can we opt out?

    Only if the instance owner opted to offer that; it’s on a server-by-server basis.

    Are there GDPR implications? Privacy implications? Should these tools be described in a privacy policy?

    Probably, but everything here is open and unencrypted; see my other comments. It’s hard to argue that anything you post on an unencrypted, open platform, which is then blasted out to any other server that wants to listen, carries an expectation of privacy.

    Are private messages being scanned and sent to OpenAI?

    You should assume your unencrypted open DMs are scanned by anyone. If privacy is needed, follow the guidelines and switch to something like Matrix for DMs. ActivityPub is open.

    How long should these assessments be retained, and can we request to see them or ask for them to be deleted?

    Per legal requirements; beyond that, ask your instance owner.

    Once the user’s comments are sent to OpenAI, are they used to train its models?

    Depends on how the instance owner set up their API usage. As a user, you should assume yes.

    What will the effect be on our discourse and culture if people know they are being politically profiled?

    Open to wild speculation, and I assume others will do that for me.

    Where are the lines between normal moderation assistance tools, political profiling, and opaque third-party data processing?

    About the same as before. Bias in, bias out: whatever bias the mods previously had, positive or negative, will carry through.