Annoyance comments on Formalizing Newcomb's - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (111)
I don't have a problem with that. But Omega doesn't tell you "take one box to win". It only tells you that if you take one box, it has placed a million in it, and if you take two boxes, it hasn't. It doesn't tell you which decision to make; the decision is yours.
The whole thing is a test ground for decision theories. If your decision theory outputs a decision that you think is not the right one, then you need to work on that decision theory some more, finding a way for it to compute the decisions you approve of.
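The "test ground" framing can be made concrete with a toy sketch. This is purely illustrative and not from the thread: it assumes a perfectly reliable Omega whose prediction always matches the agent's actual choice, and treats a decision theory as just a function from the problem to a choice, scored by the payoff it earns.

```python
# Toy model of Newcomb's problem as a test ground for decision rules.
# Assumption (illustrative only): Omega's prediction always matches the
# agent's actual choice, i.e. a perfectly reliable predictor.

BOX_A = 1_000          # transparent box: always contains $1,000
BOX_B = 1_000_000      # opaque box: $1,000,000 iff Omega predicted one-boxing

def payoff(choice: str) -> int:
    """Payout for 'one-box' or 'two-box' under a perfect predictor."""
    if choice == "one-box":
        return BOX_B            # Omega predicted this, so the opaque box is full
    elif choice == "two-box":
        return BOX_A            # Omega predicted this, so the opaque box is empty
    raise ValueError(choice)

# A "decision theory" here is just a function returning a choice;
# we test it by the payoff it computes for itself.
one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

for agent in (one_boxer, two_boxer):
    choice = agent()
    print(choice, payoff(choice))
```

Under this (hypothetical) setup, the one-boxing rule earns the million, which is the sense in which a theory that outputs two-boxing might be judged to need more work.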
Why shouldn't you adjust your criteria for approval until they fit the decision theory?
Why not adjust both until you get a million dollars?
I'm liking this preference for (Zen|Socratic) responses.