All right; I stated that incorrectly. We can regard the actions of deterministic agents as "choices"; we do it all the time in game theory. What I was trying to get at is not the choice of A or B, but the choice to use TDT.
It sounds to me like TDT is not addressing the question, "How do I convince an agent to adopt TDT?" It assumes that you're designing agents, and that you design them to implement TDT, so that they get better results on the Prisoner's Dilemma (PD) when there are many of them.
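To make the "better results on PD" claim concrete, here's a toy sketch. Everything in it is my own illustration: the payoff numbers, the agent names, and the crude "is the opponent literally my algorithm?" test, which stands in for TDT's actual logical-correlation machinery:

```python
# Twin Prisoner's Dilemma: each agent plays against an exact copy of itself.
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_agent(opponent):
    """Causal decision theorist: defection strictly dominates, so defect."""
    return "D"

def tdt_agent(opponent):
    """TDT-style rule: if the opponent runs this very algorithm, the two
    choices are logically linked, so compare (C,C) against (D,D)."""
    if opponent is tdt_agent:
        return "C" if PAYOFFS[("C", "C")] > PAYOFFS[("D", "D")] else "D"
    return "D"

for agent in (cdt_agent, tdt_agent):
    move = agent(agent)  # play against an identical copy
    print(agent.__name__, "vs copy:", move, "-> payoff", PAYOFFS[(move, move)])
# cdt_agent vs copy: D -> payoff 1
# tdt_agent vs copy: C -> payoff 3
```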
But then it's not fundamentally different from designing agents with a utility function that includes benefits to other agents. In other words, it's smuggling in an ethical judgement, disguising it as a mathematical truth.
If you're a human, why would you adopt TDT? Whatever the reason is, it can't be an answer to "why would you cooperate?", nor to "why would you tell people you have adopted TDT?"
You might be able to prove that utility functions and decision theories are equivalent, meaning that any change to a utility function could alternatively be implemented as a change to the decision theory, and vice versa. In that case, asking someone to change their decision theory may be nonsensical.
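Here's what that equivalence looks like in the toy model above. Again, these are my assumptions: equal weighting of the other agent's payoff, and plain best-response reasoning standing in for the unchanged decision theory:

```python
# Same cooperative behavior as the TDT agent above, achieved by leaving
# the decision rule alone and modifying the utility function instead.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def altruistic_utility(my_move, their_move):
    # Selfish payoff plus the other agent's payoff, equally weighted
    # (the weighting is an illustrative choice).
    return PAYOFFS[(my_move, their_move)] + PAYOFFS[(their_move, my_move)]

def cdt_with_altruism(their_move):
    """Unchanged best-response reasoning, applied to the modified utility."""
    return max("CD", key=lambda m: altruistic_utility(m, their_move))

# With the summed utility, cooperating is the best response to either move,
# so this agent cooperates -- same behavior, no change to the decision rule.
print(cdt_with_altruism("C"), cdt_with_altruism("D"))  # C C
```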
So maybe the answer I'm looking for is that TDT is not a thing meant to be offered to humans; it's a more parsimonious route than ethics to cooperation on the PD. That would be acceptable.
But to really be useful, TDT must be an evolutionarily stable decision theory, and I'm skeptical of that. The "ethics" approach lets evolution pick and choose only those values that help the agent. The TDT approach is not so flexible; it may force the agent to cooperate in situations where cooperation doesn't benefit it.
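A crude way to test that is an invasion check: seed a population of TDT-style cooperators with a rare defector and compare expected payoffs under random matching. This is my own toy model; the frequencies are illustrative, and the perfect type-recognition assumption is doing most of the work:

```python
# Can a rare defector invade a population that cooperates with its own type?
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def move(kind, other_kind):
    if kind == "tdt":            # cooperate iff matched with the same type
        return "C" if other_kind == "tdt" else "D"
    return "D"                   # unconditional defector

def avg_payoff(kind, pop):
    """Expected payoff of `kind` against a population mix
    pop = {type: frequency}, assuming random pairwise matching."""
    total = 0.0
    for other, freq in pop.items():
        total += freq * PAYOFFS[(move(kind, other), move(other, kind))]
    return total

pop = {"tdt": 0.99, "defect": 0.01}   # rare defectors
print("tdt:   ", avg_payoff("tdt", pop))     # 2.98
print("defect:", avg_payoff("defect", pop))  # 1.00
# The rare defector earns less, so it can't spread -- but only because
# types are recognized perfectly, which real agents can't count on.
```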
It's true that people just "make decisions", and don't have access to their source code. They can, however, lift decisions to the conscious level and decide to follow an algorithm.
Do you have any problem with economists convincing people that it's in their best interest to figure out what the Nash equilibrium is and play that? They are arguing for an algorithm, and people can indeed decide to "follow the algorithm" rather than "just picking their gut choice", can't they? At least, when they have explicit payoffs available, ...
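For a 2x2 game with explicit payoffs, "the algorithm" really is mechanical. A sketch, with an illustrative payoff matrix:

```python
# Find the pure-strategy Nash equilibria of a 2x2 game by checking
# that neither player gains from a unilateral deviation.
import itertools

# payoffs[(row_move, col_move)] = (row_payoff, col_payoff)
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
moves = ("C", "D")

def is_nash(r, c):
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in moves)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in moves)
    return row_ok and col_ok

print([cell for cell in itertools.product(moves, moves) if is_nash(*cell)])
# [('D', 'D')] -- the move the economist's algorithm tells you to play.
```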