This seems like a natural question, but I haven't been able to find this topic discussed online.
We are in a world with two continents on opposite sides of the planet. Each continent contains a single nation with nuclear weapons. Each side has one person with a button in front of them that will launch nuclear weapons at the other continent, wiping it out without wiping out their own side. Both countries are able to perfectly spy on each other and have access to a narrow AI which can predict human behavior with near-perfect accuracy (and has not made a mistake yet after many trials).
For some reason, you were put in front of this button on one side. One day, you see that the other side has pressed their button and the weapons are on their way over. Pressing your button now would wipe out humanity permanently.
- Is it fair to say that an adamant Causal Decision Theorist would not press the button and an adamant Evidential Decision Theorist would?
- What would you do? Is there more context needed to make this decision?
How much each side prefers each outcome matters. All of these decision theories work with utilities, not just with preference orderings over outcomes. You could derive utilities from preferences over lotteries: for example, would they prefer a 50:50 gamble between winning and extinction over the status quo? What about 10:90, or 90:10?
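For concreteness, here is a minimal sketch of that elicitation in Python, using the standard normalization of U(extinction) = 0 and U(winning) = 1; the helper name and the example indifference probabilities are my own, not part of the question:

```python
# Minimal sketch of deriving a utility for the status quo from lottery preferences.
# Assumption: the standard von Neumann-Morgenstern normalization with
# U(extinction) = 0 and U(winning) = 1; the indifference probability p is
# whatever the person being questioned reports.

def utility_of_status_quo(indifference_p: float) -> float:
    """If the agent is indifferent between the status quo and a lottery that
    wins with probability p and ends in extinction with probability 1 - p,
    then U(status quo) = p * U(win) + (1 - p) * U(extinction) = p."""
    u_win, u_extinction = 1.0, 0.0
    return indifference_p * u_win + (1 - indifference_p) * u_extinction

# Someone who would only take the gamble at 90:10 or better values the status
# quo at 0.9 on this scale; someone happy with a 50:50 split values it at 0.5.
print(utility_of_status_quo(0.9))  # 0.9
print(utility_of_status_quo(0.5))  # 0.5
```

The indifference probability itself is the utility on this scale, which is exactly why asking about 10:90 versus 90:10 is informative.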
There are also unknown probabilities:

- the chance that either AI fails to predict the other side correctly,
- the chance of a false positive or false negative in launch detection,
- the chance that someone is deliberately feeding false information to some participants,
- the chance of a launch without the button having been pressed, or of a failure to launch after pressing it,
- the chance that this is all just a simulation,
- and so on.

Even if these probabilities are small, they are critical to this scenario.
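To illustrate why, here is a rough expected-utility sketch of pressing versus not pressing, treating the detected launch as possibly a false alarm and your own launch as possibly failing. All of the utilities and probabilities below are hypothetical placeholders chosen for illustration, not numbers from the scenario:

```python
# Rough sketch: how small probabilities of error enter the expected utility of
# pressing vs. not pressing after a launch is detected. All numbers are made up.

U_STATUS_QUO = 1.0   # the detection was a false alarm and nobody launches
U_ONE_SIDE = 0.4     # exactly one continent is wiped out, the other survives
U_EXTINCTION = 0.0   # both continents are wiped out

def eu_not_pressing(p_false_alarm: float) -> float:
    # False alarm: status quo. Real launch: our side is destroyed, theirs survives.
    return p_false_alarm * U_STATUS_QUO + (1 - p_false_alarm) * U_ONE_SIDE

def eu_pressing(p_false_alarm: float, p_launch_fails: float) -> float:
    # If our launch fails, pressing changes nothing.
    # If it works: on a false alarm we destroy them over nothing; otherwise
    # both sides are destroyed.
    if_launch_works = (
        p_false_alarm * U_ONE_SIDE
        + (1 - p_false_alarm) * U_EXTINCTION
    )
    return (
        p_launch_fails * eu_not_pressing(p_false_alarm)
        + (1 - p_launch_fails) * if_launch_works
    )

# Even 1% error rates make a large difference to the comparison:
print(eu_not_pressing(0.01))    # ~0.406
print(eu_pressing(0.01, 0.01))  # ~0.008
```

On these placeholder numbers, pressing only shifts probability toward worse outcomes for humanity as a whole; the point is just that each of the small error probabilities shows up directly in the comparison.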