I'll make sure to keep you away from my body if I ever enter a coma...
Oh don't worry, there will always be those little lapses in awareness. Even supposing you hide yourself at night, are you sure you maintain your sentience while awake? Ever closed your eyes and relaxed, felt the cool breeze, and for a moment, forgot you were aware of being aware of yourself?
"...if you lack a thousand-year-old brain that can make trillion-year plans, dying after a billion years doesn't sound sad to you"?
I'm confused as to what you're trying to say. Are you saying that dying after a billion years sounds sad to you?
Are you saying that dying after a billion years sounds sad to you?
And therefore you would have a thousand-year-old brain that can make trillion-year plans.
The problem is that the agent doesn't know what Myself() evaluates to, so it's not capable of finding an explicitly specified function whose domain is the one-point set containing Myself() and whose value on that element is Universe(). This function exists, but the agent can't construct it in an explicit enough form to use in decision-making. Let's work with the graph of this function, which can be seen as a subset of NxN consisting of the single point (Myself(), Universe()).
Instead, the agent works with an extension of that function to a domain that includes all possible actions, and not just the actual one. The graph of this extension includes a point (A, U) for each statement of the form [Myself()=A => Universe()=U] that the agent managed to prove, where A and U are explicit constants. This graph, collected over all possible actions, is guaranteed to contain the impossible-to-locate point (Myself(), Universe()), but also contains other points. The bigger graph can then be used as a tool for the study of the elusive (Myself(), Universe()), as the graph is in a known relationship with that point, and unlike that point it's available in a sufficiently explicit form (so you can take its argmax and actually act on it).
(Finding other methods of studying (Myself(), Universe()) seems to be an important problem.)
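For what it's worth, here's a minimal Python sketch of that loop as I understand it. The proof search is a stand-in (a hard-coded table of "proved" implications), and the action names and utilities are invented for illustration:

```python
# Toy sketch of the ambient-decision-theory loop described above.
# The "proof search" is a stub: a hard-coded table of implications we
# pretend were proved in PA.  A real agent would enumerate actual PA
# proofs of statements of the form [Myself()=A => Universe()=U].

ACTIONS = ["one_box", "two_box"]  # hypothetical set of possible outputs of Myself()

# action A -> utility U such that we pretend PA proves [Myself()=A => Universe()=U]
PROVED_IMPLICATIONS = {
    "one_box": 1_000_000,
    "two_box": 1_000,
}

def myself():
    """Return the action whose proved consequent has the highest utility."""
    # The graph collected here is the explicit extension described above:
    # it is guaranteed to contain the unknown point (Myself(), Universe()),
    # plus extra points for the non-actual actions.
    graph = [(a, PROVED_IMPLICATIONS[a]) for a in ACTIONS if a in PROVED_IMPLICATIONS]
    best_action, _ = max(graph, key=lambda pair: pair[1])
    return best_action

print(myself())  # -> one_box
```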
I think I have a better understanding now.
For every statement S and for every action A, except the one Myself() actually returns, PA will contain a theorem of the form (Myself()=A) => S, because a false antecedent implies anything. Unless Myself() doesn't halt, in which case the value of Myself() can be undecidable in PA and Myself's theorem prover won't find anything, consistent with the fact that Myself() doesn't halt.
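Spelling that out, under the assumption that Myself() halts and actually returns some constant A_0:

```latex
% Assume Myself() halts and actually returns the numeral A_0, and let A \neq A_0.
% PA can verify the halting computation, so:
\begin{align*}
  &\mathrm{PA} \vdash \mathrm{Myself}() = A_0\\
  &\mathrm{PA} \vdash \mathrm{Myself}() \neq A
      &&\text{(distinct numerals are provably distinct)}\\
  &\mathrm{PA} \vdash \bigl(\mathrm{Myself}() = A \Rightarrow S\bigr)
      &&\text{for every sentence } S \text{ (false antecedent).}
\end{align*}
```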
I will assume Myself() is also filtering theorems by making sure Universe() has some minimum utility in the consequent.
If Myself() halts and the first theorem it finds has a false consequent, then PA would be inconsistent (because Myself() will return A, proving the antecedent true, and hence the consequent true). I guess if this were going to happen, then Myself() would instead be undecidable in PA.
If Myself() halts and the first theorem it finds has a true consequent, then all is good with the world and we have successfully made a good decision.
Whether or not ambient decision theory works on a particular problem seems to depend on the ordering of theorems it looks at. I don't see any reason to expect this ordering to be favorable.
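Here's a toy illustration of that ordering sensitivity. The theorem stream and utilities are invented; each pair stands for a proved implication [Myself()=A => Universe()=U], and the agent acts on the first one that clears the minimum-utility filter assumed above:

```python
# Toy illustration of the ordering sensitivity: each (action, utility) pair
# stands for a proved implication [Myself()=A => Universe()=U], and the agent
# acts on the first one whose consequent clears the minimum-utility filter.
# The theorems and numbers here are invented for the example.

MIN_UTILITY = 100

def first_acceptable_action(theorem_stream):
    """Return the action of the first proved implication passing the filter."""
    for action, utility in theorem_stream:
        if utility >= MIN_UTILITY:
            return action
    return None  # nothing found, e.g. if Myself() doesn't halt

# The same set of proved theorems, enumerated in two different orders:
order_a = [("two_box", 1_000), ("one_box", 1_000_000)]
order_b = [("one_box", 1_000_000), ("two_box", 1_000)]

print(first_acceptable_action(order_a))  # -> two_box
print(first_acceptable_action(order_b))  # -> one_box
```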
How does ambient decision theory work with PA, which has a single standard model?
It looks for statements of the form Myself()=C => Universe()=U.
(Myself()=C) and (Universe()=U) should each have no free variables. This means that, within a single model, their truth values should be constant. Thus such implication statements establish no relationship between your action and the universe's utility; the implication is simply a boolean function of those two constant values.
What am I missing?
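To spell out the worry (writing a_0 and u_0 for whatever Myself() and Universe() actually evaluate to in the standard model):

```latex
% For explicit constants C and U, the implication already has a fixed truth
% value in the standard model; it depends only on the two constant values.
\[
  \mathbb{N} \models \bigl(\mathrm{Myself}() = C \Rightarrow \mathrm{Universe}() = U\bigr)
  \iff
  \bigl(C \neq a_0\bigr) \lor \bigl(U = u_0\bigr)
\]
```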
13: the entire level 4 Tegmark multiverse.
14: newly discovered level 5 Tegmarkian multiverse.
15: discover ordinal hierarchy of Tegmark universes, discover method of constructing the set of all ordinals without contradiction, create level n Tegmark universe for all n.
Incorrect is a suspected Will Newsome sockpuppet and I've been told to - er - fire at will.
It was supposed to be a sarcastic response about being too strict with definitions, but it obviously didn't end up being funny.
I am not a Will Newsome sockpuppet. I'll refrain from making the lower quality subset of my comments henceforth.
When a human moderator makes a judgment call.
Define human, moderator, judgment call, makes, and "when".
I have begun to suspect that Incorrect is a Will sockpuppet. Please cease to feed.
Here's a conversation I had with Will a while back:
http://lesswrong.com/lw/cw1/open_problems_related_to_solomonoff_induction/6rlr?context=1#6rlr