Houshalter comments on Open Thread, Apr. 20 - Apr. 26, 2015 - Less Wrong

Post author: Gondolinian 20 April 2015 12:02AM

Comment author: DanielLC 21 April 2015 09:24:48PM 7 points

I've come up with an interesting thought experiment I call oracle mugging.

An oracle comes up to you and tells you that either you will give them a thousand dollars or you will die in the next week. They refuse to tell you which. They have done this many times, and everyone has either given them money or died. The oracle isn't threatening you. They just go around and find people who will either give them money or die in the near future, and tell them that.

Should you pay the oracle? Why or why not?
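
The selection effect in this setup can be made concrete with a small simulation. The sketch below is only illustrative: it assumes the oracle's rule is exactly the stated disjunction ("will pay $1,000 OR will die this week"), and the base death rate `p_death` and population size `n` are hypothetical parameters.

```python
import random

def simulate(would_pay: bool, n: int = 100_000, p_death: float = 0.001) -> None:
    """Tally outcomes for n people who all follow the same policy."""
    visited = paid = died = 0
    for _ in range(n):
        doomed = random.random() < p_death  # would die this week regardless
        # The oracle's selection rule: visit exactly those people for whom
        # "gives me $1,000 OR dies within the week" will turn out true.
        if would_pay or doomed:
            visited += 1
            if would_pay:
                paid += 1
            if doomed:
                died += 1
    print(f"would_pay={would_pay}: visited={visited}, paid={paid}, died={died}")

simulate(would_pay=True)   # visited every run; always out $1,000
simulate(would_pay=False)  # visited only in the runs where death was coming anyway
```

Under these assumptions, payers are visited every time and always lose the money, while refusers are visited only in the cases where they were going to die regardless, so for them the visit changes nothing.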

Comment author: Houshalter 25 April 2015 10:35:39AM 0 points

I really want to say that you should pay. Obviously you should precommit to not paying if you can, and then the oracle will never visit you to begin with unless you are about to die anyway. But if you can't do that, and the oracle shows up at your door, you face a choice: pay and live, or refuse and die.

Again, it's obviously better not to pay, so that you never end up in this situation in the first place. But when it actually happens, and you have to sit down and choose between paying the oracle to go away and dying, I would choose to pay.

It's all well and good to say that some decision theory produces optimal outcomes. It's another thing entirely to implement it in yourself: to make sure every counterfactual version of yourself makes the globally optimal choice, even when there is a huge cost to some of them.

Comment author: tut 25 April 2015 12:57:42PM 0 points

The traditional LW solution to this is that you precommit once and for all to the following: whenever I find myself in a situation where I wish that I had committed to acting in accordance with a rule R, I will act in accordance with R.

Comment author: Houshalter 25 April 2015 08:06:10PM 1 point

That's great to say, but much harder to actually do.

For example, suppose Omega either pays people $1,000 or asks them to commit suicide, but it only asks the people it knows with 100% certainty will not do it; otherwise it gives them the money.

The best strategy is to precommit to suicide if Omega asks. But if Omega does ask, I doubt most LessWrongers would actually go through with it.

Comment author: Kindly 25 April 2015 09:25:30PM 1 point

So the standard formulation of a Newcomb-like paradox continues to work if you assume that Omega is merely 99% accurate.

Your formulation, however, doesn't work that way. If you precommit to suicide when Omega asks, but Omega is sometimes wrong, then you commit suicide with 1% probability (in exchange for $990 in expected winnings: the $1,000 payout times the 99% of the time Omega predicts you correctly and pays). If you don't precommit, then with 1% probability Omega errs and you get $1,000 for free. In most cases, the second option is better.
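
To spell the arithmetic out, here is a minimal Python sketch; the 99% figure stands in for Omega's accuracy, and the payout matches the $1,000 above:

```python
def outcome(precommit_suicide: bool, p: float = 0.99, payout: float = 1000.0):
    """Return (expected monetary winnings, probability of dying by suicide)."""
    if precommit_suicide:
        # With probability p, Omega predicts your commitment and pays you;
        # with probability 1 - p it errs, asks, and you follow through.
        return p * payout, 1 - p
    # With probability p, Omega correctly predicts refusal and asks (you
    # refuse and gain nothing); with probability 1 - p it errs and pays you.
    return (1 - p) * payout, 0.0

print(outcome(True))   # ~($990 expected, 1% chance of death)
print(outcome(False))  # ~($10 expected, no death risk)
```

Whether $980 of extra expected winnings is worth a 1% chance of death turns entirely on how much confidence in Omega the evidence can actually support.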

Thus, the suicide strategy requires very strong faith in Omega, which is hard to come by in practice. Even if Omega actually is infallible, it's hard to imagine evidence extraordinary enough to convince us of that.

(I think I am willing to bite the suicide bullet as long as we're clear that I would require truly extraordinary evidence.)

Comment author: Houshalter 26 April 2015 05:07:37AM -1 points

Please Don't Fight the Hypothetical. I agree with you if you are only 99% sure, but the premise is that you know Omega is right with certainty. Obviously that is implausible, but so is the entire situation: an omniscient being asking people to commit suicide, or an oracle that can predict whether you will die.

But if you like, you can substitute a lesser cost, like Omega asking you to pay $10,000, or some amount of money significant enough that just giving it away takes serious consideration.

Comment author: Kindly 26 April 2015 05:42:26AM 2 points

I did say what I would do, given the premise that I know Omega is right with certainty. Perhaps I was insufficiently clear about this?

I am not trying to fight the hypothetical; I am trying to explain why one's intuition cannot resist fighting it. That is what makes the answer I give seem unintuitive.