
AnnaSalamon comments on Humans are not automatically strategic - Less Wrong

153 Post author: AnnaSalamon 08 September 2010 07:02AM


Comment author: AnnaSalamon 12 September 2010 06:24:18PM 25 points

The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.

Hmm. The self-help / life hacking / personal development community may well be better than LW at focusing on practice, on concrete life-improvements, and on eliciting deep-seated motivation. But AFAICT these communities are not aiming at epistemic rationality in our sense, and are consequently not hitting it even as well as we are. LW, for all its faults, has had fair success at teaching folks how to think usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). It has done so by teaching such subskills as:

  • Never attempting to prove empirical facts from definitions;
  • Never saying or implying “but decent people shouldn’t believe X, so X is false”;
  • Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
  • Asking what potential evidence would move you, or would move the other person;
  • Not expecting all sides of a policy discussion to line up;
  • Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.

By all means, let's copy the more effective, doing-oriented aspects of life hacking communities. But let’s do so while continuing to distinguish epistemic rationality as one of our key goals, since, as Steven notes, this goal seems almost unique to LW, is achieved here more than elsewhere, and is necessary for tackling e.g. existential risk reduction.