LessWrongers as a group are often accused of talking about rationality without putting it into practice (for an elaborated discussion of this see Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality). This behavior is particularly insidious because it is self-reinforcing: it will attract more armchair rationalists to LessWrong who will in turn reinforce the trend in an affective death spiral until LessWrong is a community of utilitarian apologists akin to the internet communities of anorexics who congratulate each other on their weight loss. It will be a community where instead of discussing practical ways to "overcome bias" (the original intent of the sequences) we discuss arcane decision theories, who gets to be in our CEV, and the most rational birthday presents (sound familiar?).

A recent attempt to counter this trend or at least make us feel better about it was a series of discussions on "leveling up": accomplishing a set of practical well-defined goals to increment your rationalist "level". It's hard to see how these goals fit into a long-term plan to achieve anything besides self-improvement for its own sake. Indeed, the article begins by priming us with a renaissance-man inspired quote and stands in stark contrast to articles emphasizing practical altruism such as "efficient charity"

So what's the solution? I don't know. However I can tell you a few things about the solution, whatever it may be:

  • It wont feel like the right thing to do; your moral intuitions (being designed to operate in a small community of hunter gatherers) are unlikely to suggest to you anything near the optimal task.
  • It will be something you can start working on right now, immediately.
  • It will disregard arbitrary self-limitations like abstaining from politics or keeping yourself aligned with a community of family and friends.
  • Speaking about it would undermine your reputation through signaling. A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.

Whatever you may decide to do, be sure it follows these principles. If none of your plans align with these guidelines then construct a new one, on the spot, immediately. Just do something: every moment you sit hundreds of thousands are dying and billions are suffering. Under your judgement your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.

I declare Crocker's rules on the writing style of this post.

Practicing what you preach
298 comments

It is probably simply structural that the LessWrong community tends to be about armchair philosophy, science, and math. If there are people who have read through Less Wrong, absorbed its worldview, and gone out to "just do something", then they probably aren't spending their time bragging about it here. If it looks like no one here is doing any useful work, that could really just be sampling bias.

Even still, I expect that most posters here are more interested to read, learn, and chat than to thoroughly change who they are and what they do. Reading, learning, and chatting is fun! Thorough self-modification is scary.

Thorough and rapid self-modification, on the basis of things you've read on a website rather than things you've seen tested and proven in combination, is downright dangerous. Try things, but try them gradually.

And now, refutation!

So what's the solution?

To, um, what, exactly? I think the question whose solution you're describing is "What ought one do?" Of these, you say:

It wont feel like the right thing to do; your moral intuitions (being designed to operate in a small community of hunter gatherers) are unlikely to suggest to you anything near t

... (read more)

Speaking about it would undermine your reputation through signaling. A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.

Depending on your goal (rationality is always dependent on a goal, after all), I might disagree. Rational behaviour is whatever makes you win. If you view your endeavour as a purely theoretical undertaking, I agree, but if you consider reality as a whole you have to take into account how your behaviour comes across. There are many forms of behaviour that would be rational but would make you look like an ass if you don't at least take the time to explain the reasons for your behaviour to those who can affect your everyday life.

Logos01 (0)
Rational behavior is whatever conforms to the principles of reason. Instrumentally rational behavior is whatever is the most rational behavior that achieves the expected agenda. You could call that latter form "winning" but that's an error, in my opinion. It seems related to the notion that since "winning" makes you "feel good", ultimately all agendas are hedonistic. It screams "fake utility function" to me. Sometimes there isn't a path to optimization; only to mitigation of anti-utility.
pedanterrific (4)
If some particular ritual of cognition—even one that you have long cherished as "rational"—systematically gives poorer results relative to some alternative, it is not rational to cling to it. The rational algorithm is to do what works, to get the actual answer—in short, to win, whatever the method, whatever the means.
Logos01 (-6)

If nothing else, the assertion that the right and rational thing will not feel like the right thing to do really needs support. Our moral intuitions may not be perfect, but there are definite parallels between small communities of hunter-gatherers and modern society that make a fair portion of those intuitions applicable presently.

That it was just laid out without even a reference to back it up… come on, here.

[anonymous] (70)

I checked TwistingFingers's post history, and I noticed that he is also the author of a post entitled LessWrong gaming community.

Choice quote: "Many of us enjoy expressing ourselves through electronic games."

Quite how this squares with his aspiration to become an optimization process is beyond me. Optimizing for lulz, maybe.

This is DH1.

(I also see the OP as more signal than noise. But the norm for rebuttal here should usually be DH4 or higher.)

wedrifid (9)
Not everything need be a rebuttal. Incidentally, people constrained to DH4 or higher are gameable by common social practice.
fiddlemath (0)
Certainly, not every reply needs to be a rebuttal. But it usually is, here. On the other hand, if you're going to rebut, and you think the other party is trying to argue honestly, your lower bound really should be around DH4 (counterargument) in a setting with many speakers. In a private setting, simply disagreeing (DH3) can be useful to just explain internal state. "I disagree with X, but I'm not sure why. Hm..." But it's logically rude to state simple disagreement as if it were an actual argument. :)
FiftyTwo (2)
(I rather like this system of using DH shorthands for diagnosing the problems with peoples comments. Possibly we can develop similar systems for other logical issues.)
[anonymous] (2)
It wasn't intended as a rebuttal; I have already provided that in another lengthy comment. I was merely identifying TwistingFingers as a blatant troll. Just for fun: juxtapose that with "Just do something: every moment you sit hundreds of thousands are dying and billions are suffering," written less than one month later. Applause light / more claims without evidence. An utterly ludicrous implication. This sounds like Chomskybot applied to Lesswrong jargon. Can you really not see that this guy is taking the Mickey?
orthonormal (6)
Another plausible interpretation of TF's flip-flopping is that a month ago, xe was here because xe thought it was a fun community, and then xe got "converted" into an earnestly zealous and quite naive Singularitarian. Much of TF's vitriol, then, would implicitly target xer lackadaisical past self in order to (consciously or unconsciously) distance xer current self from the pre-conversion self. Mind you, I'm not checking TF's history myself, so this might be a bad guess. I'm just pointing out a pretty plausible alternate hypothesis.
Prismattic (6)
I realize that this a trivial issue, but if you care about inferential distance, I thought you should know that I had to look this expression up, and I suspect a lot of other non-UK readers would as well.
ahartell (3)
For those who don't know, Urban Dictionary says that "taking the Mickey" means "joking, or doing something without intent".
[anonymous] (0)
Even Yudkowsky says he disagrees with much of his earlier writing. I have been so transformed by reading the sequences that I have made that much progress in so little time.
[anonymous] (50)

LessWrongers as a group are often accused of talking about rationality without putting it into practice

Evidence? Who accuses them of this? One post (on Less Wrong itself!) is not evidence enough for this claim.

who gets to be in our CEV

Since this barb is directed at me, I should respond. When I come across a superb intellect like Yudkowsky, I first shut up and read the bulk of what he has to say (in Yudkowsky's case, this is helpfully packaged in the sequences). Then I apply my modest intellect to exploring the areas of his thinking that I do not fin... (read more)

TheOtherDave (3)
For both subjects, if discussing them doesn't make someone better able to do something worth doing, then discussing it is not worthwhile. If it does make someone better able to do something worth doing, discussing it might be worthwhile. It seems plausible to me that my reading, writing, and thinking about cognitive biases can noticeably help improve my understanding of, and ability to recognize, such biases. It seems plausible to me that such improvement can help me better achieve my goals. Ditto for other people. So I conclude that such discussion might be worthwhile. It doesn't seem plausible to me that my reading, writing and thinking about CEV can noticeably help improve anyone's ability to do anything.
[anonymous] (-6)

I declare Crocker's rules on the writing style of this post.

Okay then:

in an affective death spiral

I think a more appropriate buzzword might be evaporative cooling of group beliefs. It's not immediately clear how "armchair rationalists" would be more predisposed to affective death spirals than instrumental rationalists.

altruism such as "efficient charity"

altruism such as Efficient Charity. (Note the period.)

It wont feel

It won't feel

being designed to

having evolved to

Logos01 (0)
If you take it as axiomatic that instrumental rationalists are putting labor and effort into the material manifestations of instrumental rationality whereas 'armchair' rationalists merely discuss these ideas, then it becomes a necessity that the former be 'more rational' than the latter. And moreover, relegating the topic to a point of discourse without instantiation can be a form of affective death spiral. Not that I necessarily agree with anything else in this post or thread -- just commenting on that point.

This behavior is particularly insidious because it is self-reinforcing

As I understand your post, the behavior you mean is talking about rationality without putting it into practice. But the way it is written sounds to me like you mean accusing LW of talking about rationality without putting it into practice.

A recent attempt to counter this trend or at least make us feel better about it was a series of discussions on "leveling up": [...] stands in stark contrast to articles emphasizing practical altruism such as "efficient charity"

... (read more)
gjm (100)

I think you are doing it wrong.

My reading of TwistingFingers's words was that s/he did mean "please feel free to be harsh about me", not "I wish to be free to be harsh about others". I don't see what other interpretation is possible, given "on the writing style of this post".

ahartell (4)
I think your interpretation is correct, and that's how I interpreted it, but I can understand Bobertron's interpretation as well. He thought TwistingFingers was declaring Crocker's rules as a sort of apology for the accusatory "writing style of [the] post", which would as Bobertron suggests be using the declaration in the wrong direction. I only say this because you wrote:
[anonymous] (20)

Speaking about it would undermine your reputation through signaling. A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.

Under your judgement your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.

Considering the problems you bring up, I think Less Wrong may benefit from increased categorization of thought by adding new levels other than Main and Discussion. And considering your advice, I'll try not to be overly humble/nice about it.

How wide of a net d... (read more)

Meta-comment: most replies at time of posting seem to be questioning whether a problem exists and quibbling with the style of the post, rather than proposing solutions. This doesn't seem like a good sign.

Proposed solution: If we consider rationality as using the best methods to achieve your goals (whatever they may be) then there are direct ways the Less Wrong material can help.

Firstly, define your goals and be sure that they are truly your goals.

Secondly, when pursuing your goals, retrieve information as needed that helps you make better decisions and hence... (read more)

[anonymous] (2)
Objection: it is highly irrational to propose solutions to non-existent problems. Insofar as someone considers the OP to have failed to raise a genuine problem, there is every reason for them not to start proposing solutions. Furthermore, as another commenter has pointed out, it is an act of generosity to interpret him as having coherently stated any particular problem at all.
FiftyTwo (2)
My interpretation of the original post was that they were identifying the problem that LW posters are 'talking about rationality without putting it into practice.' I then attempted to give an example of how one could instrumentally use the rationality techniques discussed on the site to achieve one's goals. Whether or not LW is failing to apply rationality techniques enough is an empirical question that I agree the OP hasn't proven. Either way, demonstrations of how instrumental rationality might work still seem a useful exercise. My top comment was semi-flippantly pointing out that commenters are doing what the OP accused them of by discussing the post rather than taking on the more useful task of proposing solutions. Possibly I am interpreting the OP generously in the problem they are presenting, but I don't understand why this is a bad thing. When meaning is uncertain, surely it is best to assume the most creditable interpretation in order to move discussion forward? (It also contributes to general norms of politeness.)

I don't really understand what the problem you're diagnosing is supposed to be or what it is you're asking for.

First silly thing coming to mind: "Use rationality to determine an end goal, and a rational authority to trust. Then condition yourself to follow both blindly and without exception. Then stop caring about whether you're being rational or not."

Yeah, it's silly. No, I'm not endorsing it or even saying it's any less silly than it sounds. But it DOES fulfil your criteria.

A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.

Surely if those things go against the Grand Maximally Efficient Thing To Do, they should be shed. But in general, if they are not an obstacle, they make our lives a little more pleasant. Ah, but can a true human rationalist really do without humility, sentimental empathy, or the absurdity heuristic? Are those things something humans can do without, if they want to?

And more: how do you know that the Solution is the correct Solution?

[anonymous] (0)

I'm aware of the typos. I am not allowed to edit for 6 more minutes. Please don't respond to this thread as I will delete it once I have edited the OP.

[This comment is no longer endorsed by its author]