Annoyance comments on Formalizing Newcomb's - Less Wrong

Post author: cousin_it 05 April 2009 03:39PM




Comment author: Vladimir_Nesov 05 April 2009 07:12:38PM 1 point

> Okay, my approximation: when confronted with a huge powerful agent that has a track record of 100% truth, believe it. I one-box and win. Who are you to tell me my approximation is bad?

I don't have a problem with that. But Omega doesn't tell you "take one box to win." It only tells you that if you take one box, it has already placed a million in it, and that if you take two boxes, it hasn't. It doesn't tell you which decision to make; the decision is yours.

The whole thing is a test ground for decision theories. If your decision theory outputs a decision that you think is wrong, then you need to work on that decision theory some more, finding a way for it to compute the decisions you approve of.
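
To make the test concrete, here is a minimal sketch of the payoff arithmetic, assuming the standard stakes ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor that is correct with probability p; the function name and the accuracy parameter are illustrative additions, not part of the original setup. It computes the evidential expected value of each action, which is the calculation a one-boxer endorses; a causal theorist would treat the box contents as already fixed at decision time, and that disagreement is exactly what the thread is about.

```python
# A minimal sketch of the Newcomb payoff arithmetic -- illustrative only.
# Assumptions (not from the thread): standard stakes of $1,000,000 in the
# opaque box and $1,000 in the transparent one, and a predictor that is
# correct with probability p.

def expected_payoff(one_box: bool, p: float) -> float:
    """Evidential expected dollars for the chosen action, given accuracy p."""
    if one_box:
        # With probability p the predictor foresaw one-boxing and filled
        # the opaque box; otherwise it is empty.
        return p * 1_000_000
    # With probability p the predictor foresaw two-boxing and left the
    # opaque box empty; otherwise you take both the million and the thousand.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.999):
    print(f"p={p}: one-box ${expected_payoff(True, p):,.0f}, "
          f"two-box ${expected_payoff(False, p):,.0f}")
```

On this calculation, one-boxing wins for any p above roughly 0.5005, and at p = 0.999 it expects about $999,000 against $2,000. A causal decision theorist, holding the box contents fixed, would still compute two-boxing as $1,000 better regardless of p, which is why the problem works as a test ground rather than a solved case.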

Comment author: Annoyance 05 April 2009 07:31:25PM 1 point

Why shouldn't you adjust your criteria for approval until they fit the decision theory?

Comment author: Eliezer_Yudkowsky 06 April 2009 11:52:11AM 3 points

Why not adjust both until you get a million dollars?

Comment author: thomblake 07 April 2009 02:55:17PM 1 point

I'm liking this preference for (Zen|Socratic) responses.