No, it's not fair to say either of those. In this scenario CDT and EDT give identical answers, because the outcomes of the possible actions are known with certainty. The decision depends only on which of those outcomes the person in front of the button prefers, and that isn't stated. The AI is irrelevant to the decision under both theories.
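As a rough sketch of why they coincide here (using the standard formulations, with U denoting the decider's utility function): CDT ranks an action a by the causal expectation Σ_o P(o | do(a)) · U(o), while EDT ranks it by the evidential expectation Σ_o P(o | a) · U(o). When each available action deterministically produces a single known outcome o_a, both sums collapse to U(o_a), so the two theories agree and simply recommend whichever action leads to the outcome the decider prefers.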
There are other decision theories in which the decision may depend upon counterfactuals and the behaviour of the AI in addition to the known outcomes of the actions. Maybe you meant to ask about one of those?
Thanks for your answer; this explains why I wasn't able to find any related discussion. I recently read this article: https://www.lesswrong.com/posts/c3wWnvgzdbRhNnNbQ/timeless-decision-theory-problems-i-can-t-solve and misremembered it as a defense of evidential decision theory, when it is actually about a different decision theory altogether.
So from a timeless decision theory perspective, is it correct to say that one would press the button? And from both the EDT and CDT perspectives, one would not press it (assuming one values the other country surviving over no country surviving)?
This seems like a natural question, but I haven't been able to find this topic discussed online.
We are in a world with two continents on opposite sides of the planet. Each continent contains a single nation with nuclear weapons. Each side has one person with a button in front of them; pressing it launches nuclear weapons at the other continent, wiping it out without harming the launching side. Both countries can spy perfectly on each other, and each has access to a narrow AI that predicts human behavior with near-perfect accuracy (it has not made a mistake yet after many trials).
For some reason, you were put in front of the button on one side. One day, you see that the other side has pressed their button and their weapons are on their way over. Pressing your button now would wipe out humanity permanently.