I don't closely follow the "timeless decision theory" topic on LW, but I have a feeling that a significant part of it is one agent predicting what another agent will do by simulating that agent's algorithm.
Thinking in terms of "simulating their algorithm" is convenient because we can imagine the agent doing it, and for certain problems a simulation is sufficient. However, the actual process involved is any reasoning at all based on the algorithm. That includes simulation, but it also includes creating mathematical proofs based on the algorithm that allow generalizable conclusions about things that the other agent will or will not do.
An agent that wishes to facilitate cooperation - or to make a credible threat - will actually prefer to structure its own code so that it is as easy as possible to construct proofs and draw conclusions from that code.
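As a toy illustration of "code structured for easy proofs" (not from the original comments - the names and the syntactic-equality policy are my own assumptions, loosely in the spirit of the "clique bot" from LW's program-equilibrium discussions): an agent can adopt a policy so transparent that another agent can verify it by simple comparison, with no simulation and no general-purpose proof search at all.

```python
def clique_bot(opponent):
    """Cooperate exactly when the opponent's compiled code is identical
    to our own; defect otherwise.  The policy is maximally transparent:
    any counterparty can verify it by comparing code, not by running it."""
    if opponent.__code__.co_code == clique_bot.__code__.co_code:
        return "C"
    return "D"

def defect_bot(opponent):
    """An opaque-by-comparison agent that always defects."""
    return "D"
```

Such a policy is deliberately brittle (any syntactic difference breaks cooperation), but that brittleness is exactly what makes conclusions about it easy to draw.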
> creating mathematical proofs based on the algorithm that allow generalizable conclusions about things that the other agent will or will not do.
It is precisely this part that is impossible in the general case. You can reason only about the subset of algorithms that are compatible with your conclusion-making algorithm.
Proof:
1) In the general case, it is impossible to decide whether a program will stop computing in finite time (the halting problem).
Proof by contradiction: suppose we have a method "Prophet.willStop(program)" that predicts whether a given program will stop. Construct a program "Paradox" that calls Prophet.willStop(Paradox) on itself and then does the opposite: it loops forever if the answer is "will stop", and stops immediately if the answer is "will not stop". Either answer Prophet gives about Paradox is therefore wrong, so no such Prophet can exist.
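The diagonalization above can be sketched in code (my own illustrative names; the infinite loop is replaced by a symbolic return value so the sketch actually runs):

```python
def make_paradox(will_stop):
    """Given any candidate halting predictor, build a program that
    does the opposite of whatever the predictor says about it."""
    def paradox():
        if will_stop(paradox):
            return "loops forever"   # stand-in for an actual infinite loop
        return "halts"
    return paradox

# Whatever a candidate predictor answers about the program built from it,
# the program's actual behavior contradicts that answer:
for prediction in (True, False):
    program = make_paradox(lambda prog: prediction)
    actually_halts = (program() == "halts")
    assert actually_halts != prediction
```

The loop checks both possible verdicts: a predictor that says "will stop" produces a program that loops, and one that says "will not stop" produces a program that halts, so no predictor is correct on its own paradox program.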