

To expand on standards of transparency in moderation decisions:
Lemmy was built with a public moderation log by design. The ethos of the platform includes accountability through transparency. Every action is recorded and preserved (short of defederation or instance shutdown).
This makes moderation auditable. Mods literally cannot do (much) shady stuff in secret. In essence, the moderation policy actually in force is discernible from the logs. That's part of why well-run communities define their rules clearly and why their mods follow the written policy.
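In fact, the log is exposed over Lemmy's public HTTP API, so an audit can be scripted rather than clicked through. Here is a minimal sketch in TypeScript, assuming a placeholder instance URL and community id; the response field names follow the lemmy-js-client types as I understand them, so treat the exact shape as an assumption:

```typescript
// Fetch one page of a community's public mod log and print post removals.
// INSTANCE and COMMUNITY_ID are placeholders, not real values.
const INSTANCE = "https://lemmy.example";
const COMMUNITY_ID = 123;

// Assumed shape of one "removed post" entry, per lemmy-js-client's types.
interface ModRemovePostView {
  mod_remove_post: { reason?: string; removed: boolean; when_: string };
}

async function fetchModlog(page = 1): Promise<void> {
  const url = new URL("/api/v3/modlog", INSTANCE);
  url.searchParams.set("community_id", String(COMMUNITY_ID));
  url.searchParams.set("page", String(page));

  const res = await fetch(url);
  if (!res.ok) throw new Error(`modlog request failed: ${res.status}`);
  const log = await res.json();

  // Print each removal with its stated reason, so the policy actually
  // being applied can be compared against the community's written rules.
  for (const entry of (log.removed_posts ?? []) as ModRemovePostView[]) {
    const action = entry.mod_remove_post;
    console.log(`${action.when_} removed=${action.removed} reason=${action.reason ?? "(none)"}`);
  }
}

fetchModlog().catch(console.error);
```

The same response carries locked posts, bans, community transfers, and so on; a fuller audit would walk every page and every action type.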
If a community/instance wants to make political alignment a moderation offense, they’re free to do so. Many communities/instances are quite explicit about this. If a community wants to make moderation completely arbitrary, they are free to do so. That is somewhat less common, but also not unheard of.
In truth, any community can be designed and moderated however its mods choose.
However, the success of a community depends on the quality of the content and the quality of the moderation. Good content brings people in, but bad moderation drives people out. When the moderation is unfair, it is bad for the health of the community, and ultimately bad for the health of the platform.
It is my experience that transparent moderation, such as announcing changes in policy, techniques, and so on, is less work in the long run. It takes a bit of time and attention to roll out changes when they are open for community feedback, but that feedback will come one way or another. If mods don't provide a formal outlet, then users will make one. Mods who operate opaquely give up the right to have the conversation on their own time and terms, and they miss out on the wisdom of the crowd. I've been in many situations where open discussion about policy surfaced a valuable insight or tool for facing an obstacle.
All that being said, one of the major obstacles to growth of the Threadiverse is the woeful dearth of moderation tools. It's extremely time-intensive to do basic things like identifying alt accounts, vote manipulation, or bot behavior, and the manual process is prone to human error. This makes moderating discouraging. I have heard about tools that use AI to detect CSAM and remove it quickly, which I think we can all agree is a good use of the tech. Tools like this are not built into the platform; they're cobbled together by volunteer mods and admins to keep the platform safe, legal, and sustainable. If they were built in, then moderation would be far easier (and therefore likely better).
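To make the point concrete, here is a hedged sketch of what one such built-in tool could look like: a heuristic that flags pairs of accounts whose voting overlaps suspiciously, a crude alt/vote-manipulation signal. The Vote shape is hypothetical (only admins can see raw votes, and how you export them varies by instance), and the thresholds are illustrative, not tuned:

```typescript
// Hypothetical exported vote record; not a real Lemmy API type.
interface Vote {
  actor: string;   // voting account
  postId: number;  // target post
  score: 1 | -1;   // upvote or downvote
}

// Flag account pairs that voted on many of the same posts and almost
// always agreed. Thresholds are illustrative defaults.
function suspiciousPairs(
  votes: Vote[],
  minShared = 20,
  minAgreement = 0.95,
): [string, string][] {
  // Index each account's votes by post id.
  const byActor = new Map<string, Map<number, number>>();
  for (const v of votes) {
    if (!byActor.has(v.actor)) byActor.set(v.actor, new Map());
    byActor.get(v.actor)!.set(v.postId, v.score);
  }

  const actors = [...byActor.keys()];
  const flagged: [string, string][] = [];
  for (let i = 0; i < actors.length; i++) {
    for (let j = i + 1; j < actors.length; j++) {
      const a = byActor.get(actors[i])!;
      const b = byActor.get(actors[j])!;
      let shared = 0;
      let agree = 0;
      for (const [postId, score] of a) {
        const other = b.get(postId);
        if (other !== undefined) {
          shared++;
          if (other === score) agree++;
        }
      }
      if (shared >= minShared && agree / shared >= minAgreement) {
        flagged.push([actors[i], actors[j]]);
      }
    }
  }
  return flagged;
}
```

A flagged pair is only a prompt for a human to take a closer look; real tooling would fold in more signals (timing, account age, content overlap). But even a heuristic this crude beats eyeballing vote lists by hand, which is exactly the kind of drudgery that drives volunteer mods away.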