Sometimes people have something they want to say without it being traceable back to their main identity, and on internet forums it's common for people to use multiple accounts ("alts") for this. As machine learning software gets better, however, it becomes increasingly practical to link a person's accounts.
A few months ago someone ran a simple stylometry tool across Hacker News comments and identified many alts, including one belonging to the site's founder. To demonstrate that this isn't just an academic concern, I recently did the same for LessWrong and the EA Forum. I'm not going to share the code or the probabilities it generated, and I've only looked at the output enough to be reasonably confident it's working. Trained on half of the data and tested on the other half, however, it consistently linked accounts, and it also identified at least one non-public alt I already knew about.
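To give a sense of how little machinery this kind of linking requires, here's a minimal sketch of the general idea (an illustration, not my actual tool): represent each piece of writing as character trigram frequencies and compare authors by cosine similarity. Real systems use richer features and a trained classifier, but even this toy version captures the intuition that writing style leaks identity.

```python
# Toy stylometry sketch: character trigram profiles + cosine similarity.
# This is an illustrative assumption about the approach, not the tool
# described above.
from collections import Counter
from math import sqrt

def trigram_profile(text):
    """Count overlapping character trigrams in lowercased text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(a, b):
    """Cosine similarity between two trigram Counters."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical samples: two by the same writer, one in a different style.
same_author_1 = "I reckon the model basically works, though I'd want more data."
same_author_2 = "I reckon this basically holds up, though I'd want more tests."
different = "GREETINGS!!! CLICK HERE NOW FOR AMAZING DEALS!!!"

p1, p2, p3 = map(trigram_profile, (same_author_1, same_author_2, different))
print(cosine_similarity(p1, p2) > cosine_similarity(p1, p3))  # → True
```

A real linker would train on many comments per account, hold out half the comments for testing (as described above), and flag account pairs whose held-out similarity is anomalously high.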
This is an example of a general problem with privacy: even if something seems careful enough now, you can't trust the future to keep things private.
(If you do want somewhat more protection now, however, I think best practice is running your alt comments through an LLM to change the style.)
Yep, that's right! Please don't abuse the voting system, but overall we are happy for people to make multiple accounts, keep separate brands and identities for different topics they want to discuss, etc. (e.g. I think it would be pretty reasonable for someone to have an account where they discuss community governance and get involved in prosecuting a bunch of bad behavior, and another account where they make AI Alignment contributions, without people knowing that they are the same person).