I've been trying for a while to make sense of the various alternate decision theories discussed here at LW, and have kept quiet until I thought I understood something well enough to make a clear contribution. Here goes.
You simply cannot reason about what to do by referring to what program you run and considering the other instances of that program, for the simple reason that there is no unique program corresponding to any physical object.
Yes, you can think of many physical objects O as running a program P on data D, but there are many, many ways to decompose an object into program and data, as in O = &lt;P,D&gt;. At one extreme, you can think of every physical object as running exactly the same program, i.e., the laws of physics, with its data being its particular arrangement of particles and fields. At the other extreme, one can think of each distinct physical state as a distinct program, with an empty, unused data structure. In between, there is an astronomical range of other ways to break you into your program P and your data D.
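To make the point concrete, here is a minimal sketch (all names hypothetical, and `eval` standing in very loosely for "the laws of physics") of two decompositions of the same behavior into a program and its data:

```python
# Two of the many ways to decompose the same behavior into <P, D>.

# Decomposition 1: a universal interpreter is the "program"; the
# description of the behavior is its "data".
def interpreter(data):
    # Toy stand-in for "the laws of physics" acting on an arrangement.
    return eval(data)

# Decomposition 2: the behavior itself is the "program"; the data
# structure is empty and unused.
def specific_program(data=None):
    return 2 + 2

# Both decompositions produce identical observable behavior.
assert interpreter("2 + 2") == specific_program()
```

Nothing about the observable behavior picks out one decomposition over the other; the split is a choice made by the modeler.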
Eliezer's descriptions of his "Timeless Decision Theory", however, often refer to "the computation" as distinguished from "its input" in a given "instantiation", as if there were some unique way to divide a physical state into these two components. For example:
The one-sentence version is: Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.
The three-sentence version is: Factor your uncertainty over (impossible) possible worlds into a causal graph that includes nodes corresponding to the unknown outputs of known computations; condition on the known initial conditions of your decision computation to screen off factors influencing the decision-setup; compute the counterfactuals in your expected utility formula by surgery on the node representing the logical output of that computation.
Timeless decision theory, in which the (Godelian diagonal) expected utility formula is written as follows: Argmax[A in Actions] in Sum[O in Outcomes](Utility(O)*P(this computation yields A []-> O|rest of universe)) ... which is why TDT one-boxes on Newcomb's Problem - both your current self's physical act, and Omega's physical act in the past, are logical-causal descendants of the computation, and are recalculated accordingly inside the counterfactual. ... Timeless decision theory can state very definitely how it treats the various facts, within the interior of its expected utility calculation. It does not update any physical or logical parent of the logical output - rather, it conditions on the initial state of the computation, in order to screen off outside influences; then no further inferences about them are made.
These summaries give the strong impression that one cannot use this decision theory to figure out what to decide until one has first decomposed one's physical state into one's "computation", as distinguished from one's "initial state" and the follow-up data structures eventually leading to an "output." And since there are many, many ways to make this decomposition, there can be many, many decisions recommended by this decision theory.
The advice to "choose as though controlling the logical output of the abstract computation you implement" might have you choose as if you controlled the actions of all physical objects, if you viewed the laws of physics as your program, or choose as if you only controlled the actions of the particular physical state that you are, if every distinct physical state is a different program.
I propose the following formalization. The "program" is everything that we can control fully and hold constant between all situations given in the problem. The "data" is everything else.
Which things we want to hold constant and which things vary depend on the problem we're considering. In ordinary game theory, the program is a complete strategy, which we assume is memorized before the beginning and followed perfectly, and the data is some set of observations made between the start of the game and some decision point within it. Problems may force us to move things that are normally part of the program into the data, by taking them out of our control. For example, when reasoning about how a company should act in relation to a market, we treat everything that decides what the corporation does as a black-box program, and the observations it makes of the market as its input data. If internal politics matter, then we have to narrow the black-boxing boundary to only ourselves. If we're worried about akrasia or mind control, then we draw the boundary inside our own mind.
Whether something is Program or Data is not a property of the object itself, but rather of how we reason about it. If it can be fully modeled as a black box function, then it's part of the program; otherwise it's data.
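As a rough sketch of this proposal (all names and thresholds here are hypothetical), the same decision can be modeled under two different black-boxing boundaries, and what counts as "data" grows as the boundary narrows:

```python
# Boundary 1: the whole firm is the black box (the "program");
# the market price is the only "data" it consumes.
def firm_strategy(market_price):
    return "expand" if market_price > 100 else "hold"

# Boundary 2: internal politics are outside our control, so the
# board's vote moves from inside the program into the data.
def firm_with_politics(market_price, board_votes_yes):
    if market_price > 100 and board_votes_yes:
        return "expand"
    return "hold"

# Same decision problem, two decompositions: narrowing the boundary
# turned part of the former "program" into "data".
assert firm_strategy(120) == "expand"
assert firm_with_politics(120, board_votes_yes=False) == "hold"
```

The object being modeled never changed; only the choice of what we can treat as a fully controlled black box did.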
If functional programming and LISP have taught me anything, it's that all "programs" are "data". The boundary between data and code is blurry, to say the least. We are all instances of "data" that is executed on the machine known as the "Universe". (I think this kind of Cartesian duality will lead to other dualities, and I don't think we need "soul" and "body" mixed into this talk.)