IlyaShpitser comments on Open Thread, January 4-10, 2016 - Less Wrong

Post author: polymathwannabe 04 January 2016 01:06PM


Comment author: IlyaShpitser 06 January 2016 05:18:18PM 3 points

Consider that if stuff someone says resonates with you, that someone is optimizing for that.

Comment author: Lumifer 06 January 2016 05:27:37PM *  2 points

There are two quite different scenarios here.

In scenario 1 that someone knows me beforehand and optimizes what he says to influence me.

In scenario 2 that someone doesn't know who will respond, but is optimizing his message to attract specific kinds of people.

The former scenario is a bit worrisome -- it's manipulation. But the latter one looks fairly benign to me -- how else would you attract people with a particular set of features? Of course the message is, in some sense, bait, but unless it's poisoned that shouldn't be a big problem.

Comment author: IlyaShpitser 06 January 2016 05:30:21PM 0 points

MIRI survives in part via donations from people who bought the party line on stuff like MWI.

Comment author: Lumifer 06 January 2016 05:44:40PM *  2 points

A fair point. Maybe I'm committing the typical-mind fallacy and underestimating people's general gullibility. If someone offers you something, it seems obvious to me that you should look for strings attached, consider the giver's incentives, and ponder the consequences (including those for your mind). If you don't understand why something is being given to you, it's probably wise to delay grabbing the cheese (or avoid it altogether) until you do.

And still, this all looks to me like a plain-vanilla example of bootstrapping an organization and building a base of support for it, financial and otherwise. Unless you think there were lies, misdirections, or particularly egregious sins of omission, that's just how the world operates.

Comment author: RichardKennaway 07 January 2016 08:30:10AM 1 point

Also, anyone who succeeds in attracting people to an enterprise, be it by the most impeccable of means, will find the people they have assembled creating tribal markers anyway. The leader doesn't have to give out funny hats. People will invent their own.

Comment author: IlyaShpitser 07 January 2016 04:07:40PM *  1 point

People do a lot of things. Have biases, for example.

There is quite a bit of our evolutionary legacy it would be wise to deemphasize. It's not as if there aren't successful examples of people doing good work in common without being a tribe.

edit: I think what's going on is that a lot of the rationalist-tribe folks are on the spectrum and/or "nerdy", and thus have a more difficult time forming communities, and LW etc. was a great way for them to get something important into their lives. They find it valuable, and rightly so. They don't want to give it up.

I am sympathetic to this, but I think it would be wise to separate the community aspects from rationality itself as "serious business." For example, I am friends with lots of academics, but the academic part of our relationship has to be kept separate (I would still rip into their papers in peer review, etc.). I think the guru/disciple dynamic is deeply unhealthy.

Comment author: ChristianKl 06 January 2016 05:52:03PM 2 points

MIRI survives in part via donations from people who bought the party line on stuff like MWI.

Are you saying that based on having looked at the data? I think we should have a census with numbers on MIRI donations and belief in MWI.

Comment author: Vaniver 06 January 2016 09:24:25PM 1 point

Really, you would want the delta in MWI belief (relative to before they found LW) to measure "bought the party line."
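To make the suggestion concrete, here's a minimal sketch of how such a census analysis might look. All data and field meanings below are hypothetical, invented purely for illustration; a real census would supply the actual numbers.

```python
# Hypothetical sketch: correlate the change in MWI credence (after vs.
# before finding LW) with MIRI donation amounts. Made-up data throughout.

def pearson(xs, ys):
    """Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Each respondent: (MWI credence before LW, credence after LW, $ donated)
respondents = [
    (0.2, 0.8, 500),
    (0.5, 0.6, 50),
    (0.1, 0.7, 300),
    (0.6, 0.6, 0),
    (0.3, 0.4, 20),
]

deltas = [after - before for before, after, _ in respondents]
donations = [d for _, _, d in respondents]

r = pearson(deltas, donations)
print(f"correlation(belief delta, donations) = {r:.2f}")
```

A large positive correlation in real data would support the "bought the party line" reading; the delta matters because someone who already believed MWI before finding LW wasn't converted by it.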

Comment author: IlyaShpitser 07 January 2016 12:05:40AM 1 point

I am not trying to emphasize MWI specifically, it's the whole set of tribal markers together.

Comment author: bogus 07 January 2016 04:07:54AM *  2 points

If there is a tribal marker, it's not MWI per se; it's choosing an interpretation of QM on grounds of explanatory parsimony. Eliezer clearly believed that MWI is the only interpretation of QM that qualifies on such grounds. However, that belief is quite simply misguided; it ignores several other formulations, e.g. relational quantum mechanics, the ensemble interpretation, and the transactional interpretation, that are also remarkable for their overall parsimony. Someone who advocated for one of these other approaches would be just as recognizable as a member of the rationalist 'tribe'.

Comment author: Clarity 07 January 2016 05:48:56AM *  0 points

choosing an interpretation of QM on grounds of explanatory parsimony.

  • I contested the strength of the MWI claim: explanatory parsimony doesn't differentiate a strong claim from a weak one.

OP's original claim:

Why does E. Yudkowsky voice such strong priors e.g. wrt. the laws of physics (many worlds interpretation), when much weaker priors seem sufficient for most of his beliefs (e.g. weak computationalism/computational monism) and wouldn't make him so vulnerable? (With vulnerable I mean that his work often gets ripped apart as cultish pseudoscience.)

Comment author: Dagon 03 February 2016 05:59:10PM 0 points

I don't know why scenario 2 should be any less worrisome. The distinction between "optimized for some perception/subset of you" and "optimized for someone like you" is completely meaningless.

Comment author: Lumifer 03 February 2016 06:08:50PM 0 points

Because of the degree of focus. It's like the distinction between a black-hat scanning the entire 'net for vulnerabilities and a black-hat scanning specifically your system for them. Are the two equally worrisome?

Comment author: Dagon 04 February 2016 01:33:56AM 0 points

Equally worrisome, conditional on my having the vulnerability the black-hat is trying to exploit. This is equivalent to the original warning being conditional on something resonating with you.