DH1: Ad Hominem
It wasn't intended as a rebuttal; I have already provided that in another lengthy comment.
I was merely identifying TwistingFingers as a blatant troll. Just for fun:
Date: 26 September 2011
Many of us enjoy expressing ourselves through electronic games. As such, I feel that this aspect of our lives should be shared among our fellow gamers in the LessWrong community.
Juxtapose that with "Just do something: every moment you sit hundreds of thousands are dying and billions are suffering" written less than one month later.
Video games are a great way to reduce compartmentalization and learn real-world rationality skills.
Applause light / more claims without evidence.
Indeed, what brings us together at LessWrong can often be our love of games; someone in the LessWrong community without this advantage might find learning rationality difficult [my italics].
An utterly ludicrous implication.
In this light, outreach into the transhumanist/rationalist community to promote gaming is low-hanging fruit for serving the future of humanity.
This sounds like Chomskybot applied to LessWrong jargon.
Please consider this post a unique opportunity to begin discussion of this important issue and facilitate further debate in the near future.
Can you really not see that this guy is taking the Mickey?
Another plausible interpretation of TF's flip-flopping is that a month ago, xe was here because xe thought it was a fun community, and then xe got "converted" into an earnestly zealous and quite naive Singularitarian. Much of TF's vitriol, then, would implicitly target xer lackadaisical past self in order to (consciously or unconsciously) distance xer current self from the pre-conversion self.
Mind you, I'm not checking TF's history myself, so this might be a bad guess. I'm just pointing out a pretty plausible alternate hypothesis.
LessWrongers as a group are often accused of talking about rationality without putting it into practice (for an elaborated discussion of this see Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality). This behavior is particularly insidious because it is self-reinforcing: it will attract more armchair rationalists to LessWrong who will in turn reinforce the trend in an affective death spiral until LessWrong is a community of utilitarian apologists akin to the internet communities of anorexics who congratulate each other on their weight loss. It will be a community where instead of discussing practical ways to "overcome bias" (the original intent of the sequences) we discuss arcane decision theories, who gets to be in our CEV, and the most rational birthday presents (sound familiar?).
A recent attempt to counter this trend, or at least make us feel better about it, was a series of discussions on "leveling up": accomplishing a set of practical, well-defined goals to increment your rationalist "level". It's hard to see how these goals fit into a long-term plan to achieve anything besides self-improvement for its own sake. Indeed, the article begins by priming us with a Renaissance-man-inspired quote, and it stands in stark contrast to articles emphasizing practical altruism such as "efficient charity".
So what's the solution? I don't know. However, I can tell you a few things about the solution, whatever it may be:
Whatever you may decide to do, be sure it follows these principles. If none of your plans align with these guidelines, then construct a new one, on the spot, immediately. Just do something: every moment you sit, hundreds of thousands are dying and billions are suffering. Under your judgement, your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.
I declare Crocker's rules on the writing style of this post.