Wei_Dai comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

Post author: HoldenKarnofsky · 18 August 2011 11:34PM · 75 points

Comment author: Wei_Dai 18 August 2011 10:07:26PM *  4 points

I see this post as suggesting a way to better approximate Bayesian rationality in practice (since full Bayesian rationality is known to be infeasible), and as such we can't require that agents implementing such an approximation not exhibit preference reversals.

What we can ask for, I think, is more or better justifications for the "design choices" in the approximation method.
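
For concreteness, the "approximation method" at issue is (roughly) the post's Bayesian adjustment: shrink an explicit expected-value estimate toward a prior, more aggressively the noisier the estimate is. A minimal sketch under a normal-normal model; the function name and all numbers here are illustrative assumptions, not taken from the post:

```python
# Bayesian adjustment of an expected-value estimate (normal prior,
# normally distributed estimate error); all numbers are made up.

def adjusted_ev(prior_mean, prior_var, estimate, estimate_var):
    """Posterior mean: a precision-weighted average of prior and estimate.
    The noisier the estimate, the harder it is shrunk toward the prior."""
    w_prior = 1.0 / prior_var    # precision of the prior
    w_est = 1.0 / estimate_var   # precision of the estimate
    return (w_prior * prior_mean + w_est * estimate) / (w_prior + w_est)

# A claim of astronomical value backed by almost no evidence
# (huge estimate variance) barely budges a modest prior:
print(adjusted_ev(prior_mean=1.0, prior_var=1.0,
                  estimate=1e10, estimate_var=1e22))  # ~1.0, not ~1e10
```

This is the mechanism by which a mugging-sized estimate, carrying mugging-sized uncertainty, ends up almost entirely discounted rather than taken literally.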

Comment author: cousin_it 18 August 2011 11:07:40PM *  1 point

Nooo. If someone wants to improve their Bayesian correctness, they will avoid "approximation methods" that advise against the clearly beneficial action of giving money to the mugger. I read the post as proposing a new ideal, not a new approximation.

Comment author: Strange7 19 August 2011 12:17:13AM 2 points

> clearly beneficial action of giving money to the mugger.

If someone threatened to blow up the world with their magic powers unless I gave them a dollar, I'd say a better guess at the right thing to do would not be to pay them, but rather to kill them outright as quickly as possible. In the unlikely event that the mugger actually has such powers, I've just saved the world from an evil wizard with a very poor grasp of economics; otherwise, it's a valuable how-not-to story for the remaining con-artists and/or crazy people.

Comment author: Will_Newsome 19 August 2011 04:14:14PM *  5 points

This is classic. "Should I give him a dollar, or kill him as quickly as possible? [insert abstract consequentialist reasoning.] So I think I'll kill him as quickly as possible."

Comment author: [deleted] 19 August 2011 04:24:47PM 2 points

I'm against killing crazy people, since I'm generally against killing people and other crazy people are not likely to be deterred.

Comment author: Strange7 19 August 2011 10:10:47PM 1 point

I'm not saying it's the best possible solution.

The mugger might also be a con artist, seeking to profit from fraudulent claims, in which case I have turned the dilemma on its head: anyone considering a similar scam in the future would have to weigh the small chance of vast disutility associated with provoking the same reaction again.

Comment author: [deleted] 20 August 2011 02:27:26PM 0 points

Why not go with "Give him the dollar, then investigate further and take appropriate actions"? I think the mugger is more likely to be a crazy person than an incompetent con artist, and much, much likelier to be either than an evil wizard, so rather than merely imperfect, I would call your solution cruel: most of the time you'll end up killing someone for being mentally ill. I guess I can understand why you think there ought to be a harsh punishment for threatening unimaginable skillions of people, but don't you still have that option if it turns out the mugger is reasonably sane?

Comment author: Strange7 20 August 2011 02:35:31PM 1 point

In more realistic circumstances, yes, I would most likely respond to someone attempting to extort trivial concessions with grandiose and/or incoherent threats by stalling for time and contacting the appropriate authorities.

Comment author: [deleted] 20 August 2011 02:41:28PM 0 points

But... isn't that what we're talking about? Did I miss some detail about the mugging that makes it impossible in real life, or something? In what way are the circumstances unrealistic?

Or do you mean you were just playing, and not seriously proposing this solution at all?

Comment author: Strange7 20 August 2011 03:15:43PM 2 points

Scenarios involving arbitrarily powerful agents dicking around with us mere mortals in solvable ways always seem unrealistic to me.

Comment author: Wei_Dai 19 August 2011 03:18:07AM 1 point

I'm surprised and confused by your comment. People have proposed lots of arguments against giving in to Pascal's Mugging, even within the standard Bayesian framework. (The simplest, for example, is that we actually have a bounded utility function.) I don't see how you could possibly say at this point that giving in is "clearly" beneficial.
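
To spell the bounded-utility argument out with made-up numbers: capping the utility function stops the mugger's astronomical stakes from swamping an astronomically small probability. A minimal sketch in which the probability, the stakes, and the bound are all illustrative assumptions:

```python
# Pascal's Mugging under unbounded vs. bounded utility.
# Every quantity below is made up for illustration.

p_mugger_honest = 1e-20     # credence that the threat is real
lives_at_stake = 3 ** 52    # the mugger's astronomical claim
cost_of_paying = 1.0        # utility lost by handing over the dollar

# Unbounded utility: the huge payoff swamps the tiny probability.
ev_unbounded = p_mugger_honest * lives_at_stake - cost_of_paying
print(ev_unbounded > 0)     # True: "pay the mugger"

# Bounded utility: cap how much any single outcome can matter.
UTILITY_BOUND = 1e6

def bounded(u):
    return min(u, UTILITY_BOUND)

ev_bounded = p_mugger_honest * bounded(lives_at_stake) - cost_of_paying
print(ev_bounded > 0)       # False: the mugging loses its force
```

Whether a bounded utility function is the right move is itself contested; the sketch only shows that the "clearly beneficial" verdict depends on the unboundedness assumption cousin_it flags below.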

Comment author: cousin_it 19 August 2011 09:17:07AM *  1 point

Err, the post assumes that we have an unbounded utility function (or at least one that can reach values high enough for Pascal's Mugging to become relevant), and then goes on to propose what you call an approximation method that looks clearly wrong for that case.

Comment author: multifoliaterose 19 August 2011 04:02:00PM 0 points

Why clearly wrong?