Bongo comments on An Outside View on Less Wrong's Advice - Less Wrong

Post author: Mass_Driver 07 July 2011 04:46AM


Comment author: XiXiDu 03 July 2011 02:17:47PM, 15 points

I hold this suspicion with about 30% confidence, which is enough to worry me, since I mostly identify as a rationalist. What do you think about all this? How confident are you?

I think the recent surge in meetups shows that people are mainly interested in grouping with other people who think like them, rather than in rationality in and of itself. There is too much unjustified agreement here to convince me that people really care mostly about superior beliefs. Sure, the available methods might not allow much disagreement about their conclusions, but what about doubt in the very methods used to evaluate what to do?

Most of the posts on LW are not wrong, but many exhibit some sort of extraordinary idea. Those ideas seem mostly sound on their own, but if you take all of them together and arrive at something really weird, I think some skepticism is appropriate (at least more than can currently be found).


Here is an example:

1.) MWI

The many-worlds interpretation seems mostly justified, probably the rational choice among all available interpretations (except maybe Relational Quantum Mechanics). How to arrive at this conclusion is also a good exercise in refining the art of rationality.

2.) Belief in the Implied Invisible

If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X).

In other words, logical implications do not have to pay rent in future anticipations.
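The identity follows directly from the product rule. A minimal numeric sketch (the 0.8 and 0.999 credences are illustrative placeholders, not numbers from the comment):

```python
# Product rule: P(X and Y) = P(X) * P(Y|X).
p_x = 0.8            # credence in some claim X (illustrative)
p_y_given_x = 0.999  # X implies Y with near-certainty

p_x_and_y = p_x * p_y_given_x
# When P(Y|X) is near 1, also believing the implied Y costs almost nothing:
print(round(p_x_and_y, 4))  # 0.7992, barely below P(X) = 0.8
```

This is why a near-certain logical implication need not pay rent separately: conjoining it leaves the probability essentially unchanged.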

3.) Decision theory

Decision theory is an important field of research. We can learn a lot by studying it.

4.) Intelligence explosion

Arguments in favor of an intelligence explosion, made by people like I.J. Good, are food for thought and superficially sound. This line of reasoning should be taken seriously and further research should be conducted examining that possibility.


Each of those points (#1, 2, 3, 4) is valuable and should be taken seriously. But once you build conjunctive arguments out of them (1∧2∧3∧4), you should be careful about the credence you assign each point and about the probability of their conjunction. Even if all of them provide valuable insights, acting on an extraordinary conclusion implied by their conjunction might outweigh the benefit of each individual belief if the overall conclusion is just slightly wrong.
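How quickly a conjunction decays can be made concrete. A small sketch, with illustrative credences I have made up for the four points (they are not XiXiDu's numbers):

```python
from math import prod

# Illustrative credences for points 1-4 (placeholder values).
credences = {
    "MWI": 0.8,
    "implied invisible": 0.9,
    "decision theory": 0.95,
    "intelligence explosion": 0.7,
}

# If the four points were probabilistically independent, the conjunction's
# probability would be the product of the individual credences; in general
# it can never exceed the weakest single point (0.7 here).
p_conjunction = prod(credences.values())
print(round(p_conjunction, 3))  # 0.479
```

Each claim individually is likely, yet the chain as a whole is close to a coin flip, which is the asymmetry the paragraph above is pointing at.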

An example of where 1∧2∧3∧4 might lead:

"We have to take over the universe to save it by making the seed of an artificial general intelligence, that is undergoing explosive recursive self-improvement, extrapolate the coherent volition of humanity, while acausally trading with other superhuman intelligences across the multiverse."

or

"We should walk into death camps if it has no effect on the probability of being blackmailed."

Careful! The question is not whether our results are sound but whether the very methods we used to arrive at them are sufficiently trustworthy. This examination does not happen enough on LW, even though the methods lead to all kinds of problems, like Pascal's Mugging or the 'The Infinitarian Challenge to Aggregative Ethics'. Neither are the motives and trustworthiness of the people who make those claims examined. That wouldn't even be necessary if we were dealing with interested researchers rather than people who ask others to take their ideas seriously.

Comment author: Bongo 04 July 2011 03:13:11PM, 5 points

I think "we should be skeptical of our very methods" is a fully general counterargument, and "the probability of the conjunction of four things is less than the probability of any one of them" is true but weak, since the conjunction of (only!) four things each worth taking seriously is still worth taking seriously.

Also,

Neither are the motives and trustworthiness of the people who make those claims examined.

Seems just obviously false. They're examined all the time. (And none of these links are even to your posts!)

Yes, the conclusions seem weird. Yes, maybe we should be alarmed by that. But let's not rationalize the perception of weirdness as arising from technical considerations rather than social intuitions.

Comment author: XiXiDu 04 July 2011 04:16:31PM, 4 points

Seems just obviously false. They're examined all the time. (And none of these links are even to your posts!)

You're right; I have to update my view there. When I started posting here, I felt things were different. It now seems this has changed somewhat dramatically. I hope the trend continues without itself becoming unwarranted.

I do disagree somewhat with the rest of your comment, though. I feel I am often misinterpreted when I say that we should be more careful about some of the extraordinary conclusions here. What I mean is not their weirdness but the scope of the consequences of being wrong about them. I have a very bad feeling about using the implied scope of a conclusion to outweigh its low probability. I feel we should put more weight on the consequences of our conclusions being wrong than on their being right. I can't justify this, but an example would be quantum suicide (ignore, for the sake of the argument, that there are reasons besides the possibility of MWI being wrong that make it stupid). I wouldn't commit quantum suicide even given a high confidence in MWI being true. Logical implications don't seem to be enough in some cases. Maybe I am simply biased, but I have been unable to overcome this yet.

Comment author: Will_Newsome 10 July 2011 04:03:13PM, 5 points

I think your communication would really benefit from a clear dichotomy between "beliefs about policy" and "beliefs about the world". All beliefs about optimal policy should be assumed incorrect, e.g. quantum suicide, donating to SIAI, or writing angry letters to physicists who are interested in creating lab universes. Humans go insane when they think about policy, and Less Wrong is not an exception. Your notion of "logical implication" seems to be trying to explain how one might feel justified in deriving political implications, but that totally doesn't work. I think if you really made this dichotomy explicit, and made explicit that you're worried about the infinite number of misguided policies that so naturally seem to follow from true weird beliefs, and not so worried about the weird beliefs in and of themselves, then folk would understand your concerns a lot more easily, and more progress could be made on setting up a culture that is more resistant to rampant political 'decision theoretic' insanity.

Comment author: NancyLebovitz 12 July 2011 03:01:35PM, 1 point

Is thinking about policy entirely avoidable, considering that people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?

Comment author: Humbug 12 July 2011 03:21:24PM, 1 point

...people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?

One example would be the policy of not talking about politics. Authoritarian regimes usually employ that policy; most just fail to frame it as rationality.

Comment author: Will_Newsome 13 July 2011 07:53:39PM, -1 points

No. But it is significantly more avoidable than commonly thought, and should largely be avoided for the first 3 years of hardcore rationality training. Or so the rules go in my should world.

Drawing a map of the territory is disjunctively impossible, coming up with a halfway sane policy based thereon is conjunctively impossible. Metaphorically.