I recently discovered that some popular federated instances have been using LLM-assisted moderation tooling that evaluates whether someone has said something bannable. They do this by running a script/app that sends the user’s comment history to OpenAI with the question “analyze this content for evidence of [specific political ideology] sentiment. Also identify any related [political ideology] tropes”. (The bracketed bits are where I’ve redacted the ideology they’re seeking.)
OpenAI’s LLM (they’re using GPT-5.3-mini) then responds with something like:
and so on, for hundreds of comments.
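To make the mechanism concrete, here is a minimal sketch of the kind of script being described, assuming the standard OpenAI Python SDK. The model name and prompt wording are as reported above; the `assess_user` function and its inputs are hypothetical stand-ins.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def assess_user(comments: list[str]) -> str:
    """Send a user's comment history to OpenAI and return its assessment."""
    prompt = (
        "analyze this content for evidence of [specific political ideology] "
        "sentiment. Also identify any related [political ideology] tropes\n\n"
        + "\n---\n".join(comments)
    )
    response = client.chat.completions.create(
        model="gpt-5.3-mini",  # the model these instances are reportedly using
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```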
I have not named the instances or people involved, to give them time to consider the results of this discussion, make any corrective changes they want, and disclose their practices at their own pace and in their own way. I have also redacted the evidence to avoid personal attacks and dogpiling. Let’s focus on the system, not the individuals involved. Today these instances and people are using it, and maybe we’re OK with that because it’s being used by groups we agree with, but what if people we strongly disagree with used it on their instances tomorrow?
The use and existence of this tooling raises a lot of other questions too.
What are the risks? Fedi moderators are often unsupervised, untrained volunteers and these are powerful tools.
What safeguards do we need?
Would asking an LLM “please evaluate this person’s political opinions” give different results than “find evidence we can use to ban them” (as used in the cases I’ve seen)? A sketch of how one might test this follows the list of questions.
What are our transparency expectations?
Is this acceptable and normal?
Should this tooling be disclosed? (it was not – should it have been?)
If you were given a choice, would you have opted out of it?
Can we opt out?
Are there GDPR implications? Privacy implications? Should these tools be described in a privacy policy?
Are private messages being scanned and sent to OpenAI?
How long should these assessments be retained, and can we request to see them, or ask for them to be deleted?
Once the user’s comments are sent to OpenAI, are they used to train OpenAI’s models?
What will the effect be on our discourse and culture if people know they are being politically profiled?
Where are the lines between normal moderation assistance tools, political profiling and opaque 3rd-party data processing?
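On the prompt-framing question above: a rough sketch of how one could probe whether the two framings diverge, again assuming the OpenAI Python SDK. The prompt wording is paraphrased from what I’ve seen, and the comments input is hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The two framings, paraphrased from the cases described above.
NEUTRAL = "Please evaluate this person's political opinions:\n\n{body}"
ADVERSARIAL = "Find evidence we can use to ban this person:\n\n{body}"

def compare_framings(comments: list[str]) -> dict[str, str]:
    """Run the same comment history through both framings, return both answers."""
    body = "\n---\n".join(comments)
    results = {}
    for name, template in (("neutral", NEUTRAL), ("adversarial", ADVERSARIAL)):
        response = client.chat.completions.create(
            model="gpt-5.3-mini",
            messages=[{"role": "user", "content": template.format(body=body)}],
        )
        results[name] = response.choices[0].message.content
    return results
```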
I hope that by chewing over these questions we can begin to establish some norms and expectations around this technology. The fediverse doesn’t have any centralized enforcement, so we need discussions like this to develop an awareness of what people want in terms of disclosure, privacy, consent and acceptable use. Then people can make choices about which instances they join and which ones they interact with remotely.
And of course there are the other issues with LLMs relating to environmental sustainability, erosion of workers’ rights, increasing the cost of living, and on and on. I can’t see PieFed adding any functionality like this anytime soon. But it’s happening out there anyway, so now we need to talk about it.
What do you make of this?



It’s occasionally worth calling out that votes are also public. I think twice before hitting those buttons.
https://lemvotes.org/comment/lemmy.world/comment/23550342
here’s your comment
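For the curious, this works because Lemmy federates each vote as an ActivityPub Like/Dislike activity that names the voting account, so every federated instance (and services built on that data, like lemvotes) can see the full voter list. A minimal sketch of fetching the public comment object itself, assuming Lemmy’s usual ActivityPub content negotiation:

```python
import requests

# Lemmy serves ActivityStreams JSON for the same /comment/{id} URL
# when asked for it via the Accept header.
resp = requests.get(
    "https://lemmy.world/comment/23550342",  # the comment linked above
    headers={"Accept": "application/activity+json"},
    timeout=10,
)
note = resp.json()  # an ActivityStreams "Note" for the comment
print(note["attributedTo"])  # the author's actor (profile) URL
```

The votes themselves aren’t inside this object; they arrive at every federated server’s inbox as the Like/Dislike activities mentioned above, which is how vote-lookup tools reconstruct them.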
Why would you care if anyone knows how you vote on comments?
The entire ad industry has been collecting preferences, likes and dislikes for decades. It’s one of the most profitable pieces of information.
No data is as useful as what makes you personally engage.
Not OP, but votes being public (not only on comments but also on posts) makes it really easy for someone with malicious intent to build a profile of your interests, political and sexual orientation, health/mental issues, addictions and so on. It’s a goldmine of data that should be protected.
This only makes sense if your account contains personally identifiable information. If it doesn’t, then what can really happen?
You could still be identified by a lot of factors and the combination of those: IP address, email if provided, cookies plus the referrer on clicked links or loaded external images, browser fingerprint, clues from the actual content of comments and posts, … It’s not that hard; a whole industry lives on this kind of surveillance data collection.
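A toy sketch of how those weak signals combine into an identifier; every value below is made up, and real fingerprinting uses many more signals:

```python
import hashlib

# None of these example values is unique on its own.
signals = {
    "ip": "203.0.113.7",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "accept_language": "en-NZ,en;q=0.9",
    "screen": "2560x1440",
    "timezone": "Pacific/Auckland",
}
# Hashed together, the combination is far more identifying than any part.
fingerprint = hashlib.sha256("|".join(signals.values()).encode()).hexdigest()
print(fingerprint[:16])  # a stable pseudo-identifier for this combination
```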
It’s not that hard to identify people online. My account is definitely not private.
Yes, but then you are willingly accepting the risk of posting in a fully public forum anyway. What I’m saying is, you could, if you wanted, not have personally identifiable information on your account.
The risk associated with being on a public forum has changed massively. Yes the data was always out there, but the ability to turn it into personally identifying information was not.
People are still grappling with that change.
That’s true, but this person also knows they are not hiding. There are countless others that don’t. That’s the reason they wrote what they wrote.
Sometimes people ban based on votes, so some might worry about that?
There are also those creepy people who take it upon themselves, in their next fine hour, to crawl through people’s histories, trying to find anything that could boost the height of their soapbox and their distressed egos. It always backfires, obviously, but that doesn’t take away from the fact that some really weird people are here and no one wants to have to deal with them.
Occasionally people have meltdowns and accuse or threaten other users for daring to vote a certain way, presuming specific motives for doing so.
Sometimes you get harassed by lunatics.
.ml I call you out.