This seems like a natural question, but I haven't been able to find this topic discussed online.
We are in a world with two continents on opposite sides. Each continent contains a single nation with nuclear weapons. Each side has one person with a button in front of them that will launch nuclear weapons at the other continent, wiping it out without harming the launching side. Both countries can perfectly spy on each other, and each has access to a narrow AI which can predict human behavior with near-perfect accuracy (and has not made a mistake yet after many trials).
For some reason, you were put in front of this button on one side. One day, you see that the other side has pressed their button and the weapons are on their way over. Pressing your button now would wipe out the rest of humanity permanently.
- Is it fair to say that an adamant Causal Decision Theorist would not press the button and an adamant Evidential Decision Theorist would?
- What would you do? Is there more context needed to make this decision?
For FDT, "timeless" decision theories, and the like, the answer still depends on the actual preferences of the agent, but the problem is much messier.
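To make the FDT-style reasoning concrete, here is a minimal sketch of the expected-utility comparison between dispositions, evaluated before the predictor runs. All payoff values, the predictor-accuracy parameter, and the rule "the enemy launches only if the predictor says you won't retaliate" are illustrative assumptions of mine, not part of the original scenario:

```python
# Illustrative model (all numbers are assumptions): the enemy consults a
# predictor with accuracy p and launches only if it predicts you will
# NOT retaliate. Payoffs in arbitrary units.

P_ACCURACY = 0.99       # assumed near-perfect predictor

U_STATUS_QUO = 0        # no launch
U_WIPED_OUT = -100      # they launch, you do not retaliate
U_EXTINCTION = -200     # they launch, you retaliate: all of humanity ends

def expected_utility(disposition_retaliate: bool, p: float) -> float:
    """EU of committing to a disposition before the predictor runs."""
    if disposition_retaliate:
        # With prob p the predictor is right -> enemy is deterred.
        # With prob 1-p it wrongly predicts "won't retaliate" -> they
        # launch, and you carry out the retaliation.
        return p * U_STATUS_QUO + (1 - p) * U_EXTINCTION
    else:
        # With prob p the predictor is right -> enemy launches safely.
        # With prob 1-p it wrongly predicts retaliation -> deterred.
        return p * U_WIPED_OUT + (1 - p) * U_STATUS_QUO

for disposition in (True, False):
    print(disposition, expected_utility(disposition, P_ACCURACY))
```

Under these assumed numbers the retaliating disposition wins ex ante (EU of -2 versus -99), which is exactly why actually seeing a launch means a low-probability branch has occurred.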
The agents on each side can be in one of at least three epistemic positions:
However, you are stated to be in scenario (4), so something of very low credence has occurred. You should notice you are confused and consider alternative hypotheses! Maybe your hyper-reliable AI has failed, or your (hopefully reliable!) launch detection system has failed, or their AI has failed, or their launch systems triggered accidentally, or you are in a training exercise without having been told, and so on.
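The "notice you are confused" step is an ordinary Bayesian update. As a sketch, with a very low prior on a genuine launch (the predictor said they wouldn't), even a fairly reliable detection system leaves the false-alarm hypothesis dominant. The prior, hit rate, and false-positive rate below are all assumed for illustration:

```python
# Illustrative Bayesian update: how likely is the launch to be real,
# given the detection signal? All numbers are assumptions.

PRIOR_REAL_LAUNCH = 0.001    # predictor said they would not launch
P_DETECT_GIVEN_REAL = 0.999  # detection system's hit rate
P_FALSE_ALARM = 0.01         # detection system's false-positive rate

def posterior_real(prior: float, hit_rate: float, false_alarm: float) -> float:
    """P(real launch | detection signal), via Bayes' theorem."""
    numerator = hit_rate * prior
    denominator = numerator + false_alarm * (1 - prior)
    return numerator / denominator

print(posterior_real(PRIOR_REAL_LAUNCH, P_DETECT_GIVEN_REAL, P_FALSE_ALARM))
```

With these numbers the posterior probability that the launch is real comes out around 9%: under the stated setup, a sensor failure or training exercise is far more likely than a genuine attack.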
Ordinary decision and game theory based on only the mainline scenarios will not help you here.