Ixiel comments on Open thread, 7-14 July 2014 - Less Wrong Discussion
I spoke with someone recently who asserted that they would prefer a 100% chance of getting a dollar to a 99% chance of getting $1,000,000. Now, I don't think they would actually act on this if the situation were real — i.e., if they had $1,000,000 and there was a 1 in 100 chance of losing it, they wouldn't pay someone $999,999 to eliminate that risk and thereby guarantee themselves the $1 — but they think they would. I'm interested in what could cause someone to think that. I actually have a little more information from asking a few more questions, but I'd like to see what others think before revealing the answer.
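The gap between the two options is easy to make concrete. A minimal sketch (the baseline wealth `w` is an assumed illustrative value, not from the discussion): even under a strongly risk-averse utility function like log utility, the 99% gamble still dominates the guaranteed dollar.

```python
import math

# The two options described above.
p_win = 0.99
prize = 1_000_000
sure_thing = 1

# Raw expected value: 0.99 * 1,000,000 = 990,000 dollars vs. 1 dollar.
gamble_ev = p_win * prize

def log_utility_ev(wealth, p, prize):
    """Expected log utility of a gamble, given some baseline wealth."""
    return p * math.log(wealth + prize) + (1 - p) * math.log(wealth)

w = 1_000  # hypothetical baseline wealth, chosen only for illustration
gamble_u = log_utility_ev(w, p_win, prize)   # expected utility of the gamble
sure_u = math.log(w + sure_thing)            # utility of the guaranteed $1

print(gamble_ev > sure_thing)  # the gamble wins on expected value
print(gamble_u > sure_u)       # and even under log (risk-averse) utility
```

So the stated preference can't be rescued by ordinary risk aversion; it seems to require something like the certainty effect from the Allais literature.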
My own thoughts: This may be related to the Allais paradox. It also trivially implies two-boxing in Newcomb's problem.
Some more questions raised:
What arguments might I make to change this person's mind?
Would it be ethical, if I had to make this choice for them, to choose the $1,000,000? What about an AI making choices for a human with this utility function?
Writing it backward, I think you just did.
As for the ethics: if you were already in a position where you HAD to make the decision, you should do what you think is right regardless of their prior opinions. If, however, you merely had the opportunity to override them, I think you should limit yourself to persuading as many of them as you can, but not override them for their own benefit.