I went looking around on Wikipedia and found Kavka's toxin puzzle, which is roughly: "you get a billion dollars if you intend, tonight, to drink this poison tomorrow evening (it will hurt a lot for a whole day, comparable to the worst torture imaginable, but otherwise leave no lasting effects) — I'll pay you tonight"... but I don't get the paradox there either. What's stopping you from creating a sub-agent (informing a friend) with the task of convincing you not to drink AFTER you've gotten the money? Possibly by force. Possibly by saying things in such a manner that you don't know that he knows he has to do this. Possibly with a whole cast of actors, like scheduling a text ("I am perfectly fine, there is nothing wrong with me") to be sent to parents and friends tomorrow morning.
Of course, this relies on my ability to raise the probability of intervention, but that seems like an easier challenge than engaging in willful doublethink... Or perhaps you'd add various chemicals to your food the next day. I know I can be committed to an idea ("I will do this task tonight"), come home, eat dinner, and then be totally uncommitted ("that task can wait, I'll play games first").
... A billion is a lot of money; perhaps I'd drink the poison and then have a hired person drug me into a coma, to be awoken the next day? You could hire a lot of medical staff with that kind of money.
Yet I get the feeling that all these "creative" solutions are not really allowed. Why is that?
> all these "creative" solutions are not really allowed. Why is that?
Because the point of these questions isn't to challenge you to find a good answer; it's that the process of answering them may lead to insight into your actual value system, understanding of causation, etc. Finding clever ways around the problem is a bit like cheating on an optician's eye test[1]: sure, maybe you can do that, but the result will be that you get less effective eyesight correction and end up worse off.
[1] e.g., maybe you have found a copy of whatever chart they u...
You're given the option to inflict torture on everyone in the universe, or a dust speck on everyone in the universe. Either you are the only one in the universe, or there are 3^^^3 perfect copies of you (far enough apart that you will never meet). In the latter case, all copies of you are chosen, and all make the same choice. (Edit: if they choose specks, each person gets one dust speck. This was not meant to be ambiguous.)
As it happens, a perfect and truthful predictor has declared that you will choose torture iff you are alone.
What do you do?
How does your answer change if the predictor made the copies of you conditional on their prediction?
How does your answer change if, in addition to that, you're told you are the original?
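For readers unfamiliar with the 3^^^3 above: it's Knuth up-arrow notation. A minimal Python sketch of the recursion (illustrative only — 3^^^3 itself is vastly too large to ever compute):

```python
def up(a, b, n):
    """Knuth up-arrow: up(a, b, 1) = a**b; more arrows iterate the previous level."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # base case: a ↑ⁿ 0 = 1
    return up(a, up(a, b - 1, n), n - 1)

print(up(3, 3, 1))  # 3^3   = 27
print(up(2, 3, 2))  # 2^^3  = 2^(2^2) = 16
print(up(3, 3, 2))  # 3^^3  = 3^(3^27) is wrong to write; it's 3^27 = 7625597484987
# 3^^^3 = up(3, 3, 3) = 3^^(3^^3) = a tower of 7,625,597,484,987 threes — hopeless.
```

So "3^^^3 copies" is just shorthand for an unimaginably large number of copies; the exact value doesn't matter to the puzzle.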