cousin_it comments on Tendencies in reflective equilibrium - LessWrong

27 Post author: Yvain 20 July 2011 10:38AM




Comment author: MixedNuts 20 July 2011 11:32:33AM 10 points

I have not yet accepted that consistency is always the best course in every situation. For example, in Pascal's Mugging, a random person threatens to take away a zillion units of utility if you don't pay them $5. The probability they can make good on their threat is minuscule, but by multiplying it out by the size of the threat, it still ought to motivate you to give the money. Some belief has to give - the belief that multiplication works, the belief that I shouldn't pay the money, or the belief that I should be consistent all the time - and right now, consistency seems like the weakest link in the chain.
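The arithmetic being objected to can be made concrete. A minimal sketch, where every number is an illustrative assumption (the post never names specific figures):

```python
# Naive expected-utility reasoning in Pascal's Mugging.
# All numbers are made-up stand-ins for the post's informal quantities.
p_threat_real = 1e-20   # assumed minuscule probability the mugger delivers
utility_lost = 1e30     # "a zillion units of utility", as a stand-in
cost_of_paying = 5      # the $5 demand, treated as 5 utility units

expected_loss_if_refuse = p_threat_real * utility_lost  # 1e10

# The naive rule: pay iff the expected loss from refusing exceeds $5.
should_pay = expected_loss_if_refuse > cost_of_paying
print(expected_loss_if_refuse)  # 1e10 -- dwarfs the $5 cost
print(should_pay)               # True
```

The point is that for any fixed $5 and any nonzero probability, the mugger can always name a stake large enough to make `should_pay` come out True, which is why something in the belief set feels like it has to give.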

Not upvoted, for this paragraph. You can't become right by removing beliefs at random until the remaining belief pool is consistent, but if you're right then you must be consistent.

Why does some belief have to give, if you reject consistency? If you're going to be inconsistent, why not inconsistently be consistent as well?

Also, you are attempting to be humorous by including beliefs like "multiplication works", but not beliefs like "at the 3^^^3rd murder, I'm still horrified" or "Solomonoff induction works", right?

We are but humble bounded rationalists, who have to use heuristic soup, so we might have to be inconsistent at times. But to say that even after careful recomputation on perfectly formalized toy problems, we don't have to be consistent? Oh, come on!

Comment author: cousin_it 20 July 2011 01:00:52PM * 3 points

Agreed.

Here's an idea that just occurred to me: you could replace Solomonoff induction with a more arbitrary prior (interpreted as "degree of caring", as Wei Dai suggests) and hand-tune your degree of caring for huge/unfair universes so that Pascal's mugging stops working. Informally, you could value your money more in universes that don't contain omnipotent muggers. This approach still feels unsatisfactory, but I don't remember seeing it suggested before...

Comment author: Nisan 21 July 2011 01:14:44AM 1 point [-]

Is this different from jimrandomh's proposal to penalize the prior probability of events of utility of large magnitude, or komponisto's proposal to penalize the utility?