
ChristianKl comments on Open Thread, January 4-10, 2016 - Less Wrong Discussion

Post author: polymathwannabe | 04 January 2016 01:06PM




Comment author: ChristianKl | 06 January 2016 05:52:03PM | 2 points

> MIRI survives in part via donations from people who bought the party line on stuff like MWI.

Are you saying that based on having looked at the data? I think we should run a census that includes numbers on both MIRI donations and belief in MWI.

Comment author: Vaniver | 06 January 2016 09:24:25PM | 1 point

Really, you would want the delta in MWI belief (relative to before they found LW) to measure "bought the party line."

Comment author: IlyaShpitser | 07 January 2016 12:05:40AM | 1 point

I am not trying to emphasize MWI specifically; it's the whole set of tribal markers taken together.

Comment author: bogus | 07 January 2016 04:07:54AM (edited) | 2 points

If there is a tribal marker, it's not MWI per se; it's choosing an interpretation of QM on grounds of explanatory parsimony. Eliezer clearly believed that MWI is the only interpretation of QM that qualifies on such grounds. That belief is quite simply misguided: it ignores several other formulations, e.g. relational quantum mechanics, the ensemble interpretation, and the transactional interpretation, that are also remarkable for their overall parsimony. Someone who advocated for one of these other approaches would be just as recognizable as a member of the rationalist 'tribe'.

Comment author: Clarity | 07 January 2016 05:48:56AM (edited) | 0 points

> choosing an interpretation of QM on grounds of explanatory parsimony.

  • The OP contested the strength of the MWI claim; explanatory parsimony doesn't differentiate a strong claim from a weak one.

OP's original claim:

> Why does E. Yudkowsky voice such strong priors, e.g. with respect to the laws of physics (many-worlds interpretation), when much weaker priors seem sufficient for most of his beliefs (e.g. weak computationalism/computational monism) and wouldn't make him so vulnerable? (By "vulnerable" I mean that his work often gets ripped apart as cultish pseudoscience.)