
Jiro comments on We Should Introduce Ourselves Differently - Less Wrong Discussion

54 Post author: NancyLebovitz 18 May 2015 08:48PM



Comment author: Jiro 21 May 2015 02:50:13PM 1 point [-]

"What they emphasize about themselves doesn't match their priorities" also sends a bad signal leading to loss of credibility.

This may fall in the "you can't polish a turd" category. Talking about the end of the world is inherently Bayesian evidence for crackpottery. Thinking of the problem as "we need to change how we present talking about the end of the world" can't help. Assuming you present it at all, anything you can do to change how you present it can also be done by genuine crackpots, so changing how you present it should not affect what a rational listener thinks at all.

Comment author: John_Maxwell_IV 21 May 2015 10:04:01PM 0 points [-]

Disagree. If we give lots of evidence of non-crackpottishness before discussing the end of the world (having lots of intelligent discussion of biases etc.), then by the time someone sees discussion of the end of the world, their prior on LW being an intelligent community may be strong enough that they're not driven away.

Comment author: Jiro 21 May 2015 10:24:46PM 0 points [-]

Well, there's a whole range of crackpots, from flat-earthers, who are obviously not using good reasoning to anyone who reads a few paragraphs, to groups who sound logical and erudite as long as you don't have expertise in the subject they're talking about. Insofar as LW is confused with (or actually is) some kind of crackpot group, it's crackpots toward the latter end of the scale.

Comment author: John_Maxwell_IV 21 May 2015 11:21:34PM *  0 points [-]

Sure. And insofar as it's easy for us, we should do our best to avoid being classified as crackpots of the first type :)

Avoiding classification as crackpots of the second type seems harder. The main thing seems to be having lots of high status, respectable people agree with the things you say. Nick Bostrom (Oxford professor) and Elon Musk (billionaire tech entrepreneur) seem to have done more for the credibility of AI risk than any object-level argument could, for instance.