For the last few months I've taken up the habit of explicitly predicting how much karma I'll get for each of my contributions on LW. I picked up the habit of doing so for Main posts back in the Visiting Fellows program, but I've found that doing it for comments is way more informative.
It forces you to build decent models of your audience and their social psychology, the game-theoretic details of each particular situation, how information cascades should be expected to work, your overall memetic environment, etc. It also forces you to be reflective and to expand on your gut feeling of "people will upvote this a lot" or "people will downvote this a little bit": to think through more specifically why you expect that, and how your contributions should be expected to shape the minds of your audience on average.
It also makes it easier to notice confusion. When one of my comments gets downvoted to -6 when I expected -3, I know some part of my model is wrong; or, as is often the case, it will get voted back up to -3 within a few hours.
Having powerful intuitive models of social psychology is important for navigating disagreement. It helps you realize when people are agreeing or disagreeing for reasons they don't want to state explicitly, why they would find certain lines of argument more or less compelling, why they would feel justified in supporting or criticizing certain social norms, what underlying tensions cause them to respond in a certain way, etc., which is important for getting the maximum amount of evidence from your interactions. All the information in the world won't help you if you can't interpret it correctly.
Doing it well also makes you look cool. When I write from a social-psychological perspective I get significantly more karma. And I can help people express things that they don't find easy to express explicitly, which is infinitely more important than karma. When you take into account not only people's words but also the generators of those words, you get an automatic reflectivity bonus. Obviously, looking at their actual words is a prerequisite, and is also an extremely important habit of sane communication.
Most importantly, gaining explicit knowledge of everyday social psychology is like explicitly understanding a huge portion of the world that you already knew. This is often a really fun experience.
There are a lot of subskills necessary to do this right, but maybe doing it wrong is also informative, if you keep trying.
A question of moral philosophy?
Because Less Wrong's extrapolated volition would have upvoted it, and if you didn't post it anyway then Less Wrong's extrapolated volition would be justified in getting mad at you for having not even tried to help Less Wrong's extrapolated volition to obtain faster than it otherwise would have (by instantiating its decision policy earlier in time, because there's no other way for the future to change the present than by the future-minded present thinkers' conscious invocation).
Because definitions of reasonableness get made up after the fact, as if teleologically. It doesn't matter whether your straightforward causal reasons seemed good enough at the time; it matters whether you correctly predict the retrospective judgment of future powers who make things up after the fact, apportioning blame or credit according to higher-level principles than the ones that appeared salient to you, or than the ones you guessed would seem salient to them.
This is how morality has always worked, this is how we ourselves look back on history, judging the decisions of the past by our own ideals, whether those decisions were made by past civilizations or past lovers. This pattern of unreasonable judging itself seems like an institution that shouldn't be propped up, so there's no safety in self-consistency either. And if you get complacent about the foundational tensions here, or oppositely if you act rashly as a result of feeling those tensions, then that itself is asking to be seen as unjustified in retrospect.
And if no future agent manages to become omniscient and omnibenevolent, then any information you managed to propagate about what morality truly is just gets swallowed by the noise. And if an omniscient and omnibenevolent agent does pop up, then it might be that the best you can hope for is to be a martyr or a scapegoat, and all that you value becomes a sacrifice made by the ideal future so that it can enter into time. Assuming, for some crazy reason, that you were able to correctly intuit in the first place what concessions the future will demand that you had already made.
You constantly make the same choices as Sophie and Abraham; it's just less obvious to you that you're making them, less salient because it's not your child's life on the line. Not obviously at this very moment, anyway.
Go meta, be clever.
I still don't see the point of writing obfuscated comments, though. If serving a possible future god is your cup of tea, it seems to me that making your LW comments more readable should help you in reaching that goal. If that demands sacrifice, Will, could you please make that sacrifice?