How would it prove that the account is real? I suspect that “real account” here doesn’t simply mean the opposite of a bot or sockpuppet.
A discoverable, non-banned account, as opposed to a “ghost account”. If a server creates a massive number of accounts just to vote with them, you can see that a small server has a disproportionately large number of registered accounts, most of which will probably be otherwise inactive. Then you can reject votes from that server.
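A rough version of that check might look like the sketch below. The per-server counts, field names, and thresholds are assumptions for illustration, not anything an actual Fediverse server exposes in this form.

```python
# Sketch: flag servers whose vote volume looks disproportionate to their
# visible activity, so their votes could be discounted or rejected.
# The input shape and the two thresholds are assumptions for illustration.

def suspicious_servers(stats, vote_ratio=5.0, inactive_share=0.9):
    """stats maps server domain -> counts gathered however the admin can."""
    flagged = []
    for domain, s in stats.items():
        accounts = s["registered_accounts"]
        active = s["active_accounts"]   # accounts that actually post or comment
        votes = s["votes_cast"]
        if accounts == 0:
            continue
        # Mostly dormant accounts, yet far more votes than active users:
        # the "ghost account" pattern described above.
        mostly_dormant = (accounts - active) / accounts >= inactive_share
        votes_outsized = votes > vote_ratio * max(active, 1)
        if mostly_dormant and votes_outsized:
            flagged.append(domain)
    return flagged

example = {
    "big.example": {"registered_accounts": 50_000, "active_accounts": 8_000, "votes_cast": 30_000},
    "tiny.example": {"registered_accounts": 4_000, "active_accounts": 20, "votes_cast": 3_500},
}
print(suspicious_servers(example))  # ['tiny.example']
```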
I assume it proves that there is a public key associated with each vote.
It doesn’t sound like cryptography is able to add anything worthwhile. You have to trust the instance to police itself. And users of self-hosted, single-user instances still don’t vote anonymously, since every vote from such an instance can only be theirs.
A group of users has to cooperate to hide their votes from others and each other. Only the tally is known, but you have to trust the group. On the Fediverse, such a group will be the users of an instance. The more users the instance has, the more anonymous the individual becomes.
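As a toy illustration of what “only the tally is known” could mean in practice, here is an additive secret-sharing tally. It’s a generic textbook construction, not necessarily the scheme being proposed: each voter splits their vote into random shares so that no single tallier learns it, while the sum of all shares is exactly the tally.

```python
# Toy additive secret sharing mod a prime: each voter splits their vote
# (+1, -1, or 0) into random shares, one per tallier, so no single tallier
# learns any individual vote; summing every share reveals only the tally.
# A generic textbook construction, not the scheme discussed here.
import secrets

P = 2_147_483_647  # prime modulus, plenty for a toy example

def share_vote(vote, n_talliers):
    shares = [secrets.randbelow(P) for _ in range(n_talliers - 1)]
    shares.append((vote - sum(shares)) % P)  # shares sum to the vote mod P
    return shares

def tally(all_shares, n_voters):
    total = sum(sum(per_tallier) for per_tallier in zip(*all_shares)) % P
    # Map the mod-P result back to a small signed number of net upvotes.
    return total if total <= n_voters else total - P

votes = [1, 1, -1, 1, 0]                     # the hidden individual votes
shared = [share_vote(v, 3) for v in votes]   # 3 talliers, none sees a vote
print(tally(shared, len(votes)))             # 2
```

The catch is exactly the trust problem above: everyone has to cooperate honestly for the shares to add up, and nothing in the arithmetic tells you whether the voters themselves are real.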
You have to trust the instance admins to weed out bots and sock puppets, which is extra hard when they can’t see the votes either. Presumably, compensating by collecting and retaining other data, such as IP addresses, for longer is undesirable. You have to believe that the admins, volunteers all, are willing to do the extra work and that they don’t actually favor manipulation for ideological reasons.
The only way to uncover untrustworthy instances is to look at aggregated data. I guess you’d have to get or scrape the data for some community and then check, per instance, whether the number of posters is out of whack with the number of voters. I wonder if anyone’s ever done such a thing. It’s certainly more challenging than spotting oddities among voters who brigade some topic.
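A sketch of that per-instance analysis, assuming you already scraped activity records that carry an actor URL and an activity type; that record shape and the thresholds are made up for the example.

```python
# Sketch of the per-instance sanity check: for one scraped community,
# compare how many distinct accounts from each instance post or comment
# versus how many only vote. The record format ("actor" URL plus "type")
# and the thresholds are assumptions for illustration.
from collections import defaultdict
from urllib.parse import urlparse

def outliers(records, ratio=20.0, min_voters=50):
    posters = defaultdict(set)   # instance -> accounts that post/comment
    voters = defaultdict(set)    # instance -> accounts that vote
    for r in records:
        host = urlparse(r["actor"]).netloc
        bucket = voters if r["type"] == "vote" else posters
        bucket[host].add(r["actor"])
    flagged = []
    for host, v in voters.items():
        p = len(posters.get(host, set()))
        # An instance whose voting accounts vastly outnumber its accounts
        # that ever post or comment is "out of whack" in the sense above.
        if len(v) >= min_voters and len(v) > ratio * max(p, 1):
            flagged.append((host, len(v), p))
    return sorted(flagged, key=lambda t: t[1], reverse=True)
```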
Admins of large instances could get away with hiding many sock-puppet voters among the real users, if they wanted to manipulate discussions for, say, ideological reasons.