This seems like a natural question, but I haven't been able to find this topic discussed online.
We are in a world with two continents on opposite sides of the globe. Each continent contains a single nation with nuclear weapons, and each nation has one person sitting in front of a button that will launch nuclear weapons at the other continent, wiping it out without harming the side that launched. Both countries can spy on each other perfectly, and each has access to a narrow AI that predicts human behavior with near-perfect accuracy (it has not made a mistake yet after many trials).
For some reason, you were put in front of the button on one side. One day, you see that the other side has pressed their button and the weapons are on their way over. Pressing your button now would wipe out humanity permanently.
- Is it fair to say that an adamant Causal Decision Theorist would not press the button, while an adamant Evidential Decision Theorist would? (A toy calculation of the stakes is sketched after this list.)
- What would you do? Is more context needed to make this decision?
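
To make the deterrence structure concrete, here is a minimal sketch in Python. It assumes toy utility numbers, an illustrative predictor accuracy, and that the other side launches exactly when the AI predicts we would not retaliate; none of these specifics are part of the original scenario. It computes the ex-ante expected utility of each retaliation policy; the decision-theory disagreement is about whether that ex-ante calculation still binds once the launch is actually observed.

```python
# Toy expected-utility comparison of the two retaliation policies,
# evaluated ex ante (before any launch is observed).
# The utilities, the predictor accuracy, and the other side's strategy
# (launch iff we are predicted not to retaliate) are illustrative
# assumptions, not part of the original scenario.

P_PREDICT_CORRECT = 0.999  # near-perfect predictor, exact value assumed

# Outcome utilities from the button-holder's perspective:
U_PEACE = 0           # no one launches
U_DESTROYED = -100    # our continent is wiped out, humanity survives
U_EXTINCTION = -1000  # both continents wiped out, humanity ends

def expected_utility(retaliate: bool, p: float = P_PREDICT_CORRECT) -> float:
    """EU of committing to a retaliation policy, assuming the other side
    launches iff the predictor says we will NOT retaliate."""
    if retaliate:
        # Prob p: correctly predicted as a retaliator -> deterred -> peace.
        # Prob 1-p: mispredicted -> they launch, we retaliate -> extinction.
        return p * U_PEACE + (1 - p) * U_EXTINCTION
    else:
        # Prob p: correctly predicted as a non-retaliator -> they launch.
        # Prob 1-p: mispredicted as a retaliator -> deterred -> peace.
        return p * U_DESTROYED + (1 - p) * U_PEACE

for policy in (True, False):
    print(f"retaliate={policy}: EU = {expected_utility(policy):.2f}")

# With these numbers, committing to retaliate dominates ex ante, which is
# the deterrence intuition. The ex-post question -- what to do after the
# launch is already observed -- is what separates CDT from EDT here.
```
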
To clarify the second paragraph: we're assuming this AI has never made a mistake in predicting human behavior across many, many trials in different scenarios; no exact probability is given. We're also assuming perfect observation, so we know that the other side pressed their button, that the bombs are on their way, and any observable context behind the decision (such as whether they acted on false information).
The first paragraph contains an idea I hadn't considered, and it might be central to the whole thing. I'll ponder it more.