MixedNuts comments on Tendencies in reflective equilibrium - Less Wrong

Post author: Yvain 20 July 2011 10:38AM


Comment author: MixedNuts 20 July 2011 11:32:33AM 10 points

I have not yet accepted that consistency is always the best course in every situation. For example, in Pascal's Mugging, a random person threatens to take away a zillion units of utility if you don't pay them $5. The probability that they can make good on their threat is minuscule, but multiplying it by the size of the threat still ought to motivate you to give the money. Some belief has to give - the belief that multiplication works, the belief that I shouldn't pay the money, or the belief that I should be consistent all the time - and right now, consistency seems like the weakest link in the chain.
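The multiplication argument in that paragraph can be made concrete. A minimal sketch, where the probability and utility figures are invented purely for illustration (the argument only needs the threatened utility to outgrow the shrinking probability):

```python
# Naive expected-utility comparison for Pascal's Mugging.
# All numbers below are illustrative assumptions, not canonical values.
p_threat_real = 1e-20          # tiny probability the mugger can deliver
utility_lost_if_real = 1e30    # "a zillion units of utility"
cost_of_paying = 5             # treating $5 as 5 utility units

ev_refuse = -p_threat_real * utility_lost_if_real   # about -1e10
ev_pay = -cost_of_paying                            # -5

# Straight multiplication says paying is better no matter how small
# p_threat_real is, so long as the threatened loss grows faster.
print(ev_pay > ev_refuse)  # True
```

This is exactly the "multiplication works" belief the comment considers abandoning: the conclusion follows mechanically once the numbers are accepted.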

Not upvoted, for this paragraph. You can't become right by removing beliefs at random until the remaining belief pool is consistent, but if you're right then you must be consistent.

Why does some belief have to give, if you reject consistency? If you're going to be inconsistent, why not inconsistently be consistent as well?

Also, you are attempting to be humorous by including beliefs like "multiplication works", but not beliefs like "at the 3^^^3rd murder, I'm still horrified" or "Solomonoff induction works", right?

We are but humble bounded rationalists, who have to use heuristic soup, so we might have to be inconsistent at times. But to say that even after careful recomputation on perfectly formalized toy problems, we don't have to be consistent? Oh, come on!

Comment author: cousin_it 20 July 2011 01:00:52PM *  3 points

Agreed.

Here's an idea that just occurred to me: you could replace Solomonoff induction with a more arbitrary prior (interpreted as "degree of caring", as Wei Dai suggests) and hand-tune your degree of caring for huge/unfair universes so that Pascal's mugging stops working. Informally, you could value your money more in universes that don't contain omnipotent muggers. This approach still feels unsatisfactory, but I don't remember it being suggested before...

Comment author: Nisan 21 July 2011 01:14:44AM 1 point

Is this different from jimrandomh's proposal to penalize the prior probability of events of utility of large magnitude, or komponisto's proposal to penalize the utility?
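The shared intuition behind these proposals is that the prior probability of a payoff should shrink at least as fast as its magnitude grows, so that the expected-value product stays bounded. A toy sketch; the particular penalty schedule here (divide the prior by the utility's magnitude) is an assumption for illustration, not jimrandomh's or komponisto's exact formulation:

```python
def penalized_prior(base_prior, utility_magnitude):
    # Illustrative penalty schedule (an assumption, not a canonical rule):
    # events promising utility of magnitude U get their prior divided by U,
    # so prior * utility can never exceed the unpenalized base prior.
    return base_prior / max(1.0, utility_magnitude)

base = 1e-6  # hypothetical unpenalized prior for the mugger's claim
for u in [1e3, 1e9, 1e30]:
    ev = penalized_prior(base, u) * u
    print(u, ev)  # expected value stays capped near `base`, regardless of u
```

Any schedule with this property defuses the mugging, since scaling up the threatened utility no longer scales up the expected loss; penalizing the utility directly, as in komponisto's variant, achieves the same bound from the other side.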

Comment author: Spurlock 20 July 2011 12:50:53PM *  2 points

For weakest implicit belief, I think I would have nominated "That I have the slightest idea how to properly calculate the probability of the mugger following through on his/her threat".

Also, Torture vs. Specks seems like another instance where many of us are willing to sacrifice apparent consistency. Most coherent formulations of utilitarianism must choose torture, yet many utilitarians are hesitant to do so.

In both cases, it seems like what we're doing isn't abandoning consistency, but admitting the possibility that our consistent formula (e.g. naive utilitarianism) isn't necessarily the optimal / subjectively best / most reflectively equilibrial one. We therefore may choose to abandon it in favor of the intuitive answer (don't pay the mugger, choose specks, etc.), not because we choose to be inconsistent, but because we predict the existence of a Better But Still Consistent Formula not yet known to us.

Of course, as Yvain notes, we can take pretty much any set of arbitrary preferences and create a "consistent" formula by adding enough terms to the equation. The difference is that the Better But Unknown formula above is both consistent and something we'd be in reflective equilibrium about.

Comment author: TrE 20 July 2011 07:14:49PM 1 point

By "Dust vs. Specks" you surely mean "torture vs. dust specks", and with "Specks", you want to say "torture", don't you?

Comment author: Spurlock 20 July 2011 07:27:29PM *  0 points

Fixed, thanks. But no, I meant specks. It seems like utilitarianism (as opposed to just typical intuitive morality) commands you to inflict Torture. You only want to choose specks because your brain doesn't multiply properly, etc.

Of course, not everyone agrees that Utilitarianism picks Torture, but the argument for Torture is certainly a utilitarian one. So in this case picking Specks anyway seems like a case of overriding (at least naive versions of) utilitarianism.

Comment author: TrE 21 July 2011 09:32:19AM *  0 points

Wait...

Most coherent formulations of utilitarianism must choose specks, yet many utilitarians are hesitant to do so.

Are you sure that should be specks? If so, I am confused.

Comment author: Spurlock 21 July 2011 11:45:57AM 1 point

Wow. Sorry, you're obviously right. Brain totally misfired on me, I guess.