From a reductionist viewpoint, an agent's decision is causally determined by the physical laws of the universe ("possible world") that the agent happens to be in.
If the agents are simple computer programs that exist only in a tiny universe of a True Prisoner's Dilemma tournament, then the agents' decisions are fully determined by their source code, and we can categorize and name the agents based on provable properties of that source code.
(This assumes we treat the agents' existence and their source code as brute physical laws of the tiny universe we're imagining, rather than viewing the agents as the causal result of their programming by humans in the larger universe in which they are embedded.)
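As a concrete illustration, here is a minimal sketch (in Python, purely illustrative) of such a tiny tournament universe: the bots are ordinary functions whose behavior is fully determined by their mutually visible source code. The bounded simulation with an optimistic base case is a crude stand-in for the proof search that the proof-based FairBot/PrudentBot agents actually perform, so it should not be read as a faithful implementation of those agents.

```python
# A minimal sketch of a "tiny universe" tournament, assuming Python.
# Agents are plain functions; their decisions are fully determined by their
# (mutually visible) source code. Bounded simulation with an optimistic base
# case stands in for the proof search used by the proof-based agents.

C, D = "C", "D"

def cooperate_bot(opponent, depth):
    return C  # cooperates unconditionally

def defect_bot(opponent, depth):
    return D  # defects unconditionally

def fair_bot(opponent, depth):
    # Cooperate iff (bounded) simulation says the opponent cooperates with me.
    # When the simulation budget runs out, optimistically assume cooperation.
    if depth <= 0:
        return C
    return C if opponent(fair_bot, depth - 1) == C else D

def play(a, b, depth=5):
    """One round: each agent decides given the other's code and a budget."""
    return a(b, depth), b(a, depth)

if __name__ == "__main__":
    bots = (cooperate_bot, defect_bot, fair_bot)
    for a in bots:
        for b in bots:
            print(f"{a.__name__:>13} vs {b.__name__:<13} -> {play(a, b)}")
```

With this stand-in, `fair_bot` cooperates with itself and with `cooperate_bot` and defects against `defect_bot`. The proof-based PrudentBot additionally defects against CooperateBot (because CooperateBot doesn't defect against DefectBot), which requires the Löbian machinery this sketch omits.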
If the agents are more complicated, e.g. humans or AIs that exist in our actual universe, determining the complete causal origins of their decision processes may be intractable for many of the agents we care about in practice, since those agents are the result of a long and complicated evolutionary process governed (ultimately) by something like the Standard Model of physics.
But maybe not that intractable, given slightly more advanced technology than our current level, and/or the right circumstances. My guess is that two humans who are both familiar with the math of logical decision theories, and who are both hooked up to some kind of fMRI machine that they both trust and whose outputs they can both see, could think thoughts (i.e. have neurons in their brains fire) in ways isomorphic to the kind of thing PrudentBot does in the tiny toy universe, almost entirely independently of the rest of their respective causal histories.
The causes of their decisions in such circumstances are things like the setup of the fMRI machines, the participants' confidence in the machines' accuracy and robustness, and their own understanding of some relevant theory, but not much more than that.
In the case of humans, what causes those circumstances to obtain is a complicated question of evolutionary history and physics. But once they do obtain, the causal history can be screened off without issue, so it's not contradictory or circular (at least not obviously so) to talk about decisions made in those circumstances as governed by the decision theory itself, rather than by the complete evolutionary / causal history.
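To make the "screening off" point concrete, here is a hedged sketch of a PrudentBot-style decision rule whose only inputs are the verified readouts from the trusted channel, so nothing else in the decider's causal history enters the computation. The names (`Readout`, `prudent_decision`) are purely illustrative, not from any existing codebase, and the trusted-fMRI verification is modeled as an already-answered oracle.

```python
# A hedged sketch, assuming Python: the decision is a function of the trusted
# readouts alone, so the rest of the decider's causal history is screened off.

from dataclasses import dataclass

@dataclass
class Readout:
    """What the trusted verification channel reports about the other party."""
    cooperates_with_me: bool          # would cooperate with someone using this rule
    defects_against_defectors: bool   # would defect against an unconditional defector

def prudent_decision(readout: Readout) -> str:
    # PrudentBot-style criterion: cooperate only if the other party verifiably
    # cooperates with me AND verifiably punishes unconditional defectors.
    if readout.cooperates_with_me and readout.defects_against_defectors:
        return "Cooperate"
    return "Defect"

if __name__ == "__main__":
    print(prudent_decision(Readout(True, True)))   # Cooperate
    print(prudent_decision(Readout(True, False)))  # Defect: don't reward exploitable rules
```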
The complications are real; there is no "resolving" them. It would be helpful for a decision theory to explain how to think about them without stumbling.
Most (all?) of the decision-theory discussion I've seen has been about prediction or contradiction of choice: mostly about how to model the fact that decisions have causes, and that those causes can be legible to other agents, or correlated with future experiences, in ways that affect the choice in question.
Don't all proposed decision theories just move this down one level, without resolving the underlying contradiction?