I'll make sure to keep you away from my body if I ever enter a coma...
Oh don't worry, there will always be those little lapses in awareness. Even supposing you hide yourself at night, are you sure you maintain your sentience while awake? Ever closed your eyes and relaxed, felt the cool breeze, and for a moment, forgot you were aware of being aware of yourself?
"...if you lack a thousand-year-old brain that can make trillion-year plans, dying after a billion years doesn't sound sad to you"?
I'm confused as to what you're trying to say. Are you saying that dying after a billion years sounds sad to you?
Are you saying that dying after a billion years sounds sad to you?
And therefore you would have a thousand-year-old brain that can make trillion-year plans.
The problem is that the agent doesn't know what Myself() evaluates to, so it's not capable of finding an explicitly specified function whose domain is a one-point set with single element Myself() and whose value on that element is Universe(). This function exists, but the agent can't construct it in an explicit enough form to use in decision-making. Let's work with the graph of this function, which can be seen as a subset of NxN and includes a single point (Myself(), Universe()).
Instead, the agent works with an extension of that function to a domain that includes all possible actions, not just the actual one. The graph of this extension includes a point (A, U) for each statement of the form [Myself()=A => Universe()=U] that the agent managed to prove, where A and U are explicit constants. This graph, if collected for all possible actions, is guaranteed to contain the impossible-to-locate point (Myself(), Universe()), but also contains other points. The bigger graph can then be used as a tool for the study of the elusive (Myself(), Universe()), as the graph is in a known relationship with that point, and unlike that point it's available in a sufficiently explicit form (so you can take its argmax and actually act on it).
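A minimal sketch of how the bigger graph might be used, with a pre-filled set of proven statements standing in for a real theorem prover (the action names and utilities here are hypothetical):

```python
# Hypothetical stand-in for the statements [Myself()=A => Universe()=U]
# the agent managed to prove. Each pair (A, U) is one point of the
# extended graph; the values below are made up for illustration.
proved_graph = {
    ("one_box", 1_000_000),
    ("two_box", 1_000),
}

def decide(graph):
    """Take the argmax of the extended graph over utility and act on it."""
    best_action, best_utility = max(graph, key=lambda point: point[1])
    return best_action

print(decide(proved_graph))  # -> one_box
```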
(Finding other methods of studying (Myself(), Universe()) seems to be an important problem.)
I think I have a better understanding now.
For every statement S and for every action A, except the A that Myself() actually returns, PA will contain a theorem of the form (Myself()=A) => S, because a false antecedent implies anything. Unless Myself() doesn't halt, in which case the value of Myself() can be undecidable in PA and Myself's theorem prover won't find anything, consistent with the fact that Myself() doesn't halt.
I will assume Myself() is also filtering theorems by making sure Universe() has some minimum utility in the consequent.
If Myself() halts, then if the first theorem it finds has a false consequent, PA would be inconsistent (because Myself() will return A, proving the antecedent true, and hence the consequent true). I guess that if this were to happen, the value of Myself() would instead be undecidable in PA.
If Myself() halts and the first theorem it finds has a true consequent then all is good with the world and we successfully made a good decision.
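A sketch of that procedure, assuming the prover enumerates theorems [Myself()=A => Universe()=U] in some fixed order and filters by a minimum utility in the consequent (all names and values here are hypothetical):

```python
def myself(enumerated_theorems, min_utility):
    """Return the action A from the first proved theorem
    [Myself()=A => Universe()=U] whose utility U clears the bar.
    In the real setting the prover keeps searching forever if nothing
    qualifies (Myself() doesn't halt); the sketch signals that case
    with an exception instead."""
    for action, utility in enumerated_theorems:
        if utility >= min_utility:
            return action
    raise RuntimeError("no qualifying theorem: Myself() would not halt")

# The first qualifying theorem wins, so enumeration order matters:
print(myself([("defect", 1), ("cooperate", 5)], min_utility=3))  # -> cooperate
```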
Whether or not ambient decision theory works on a particular problem seems to depend on the ordering of theorems it looks at. I don't see any reason to expect this ordering to be favorable.
How does ambient decision theory work with PA which has a single standard model?
It looks for statements of the form Myself()=C => Universe()=U
(Myself()=C) and (Universe()=U) each have no free variables, so within a single model their truth values are constant. Thus such an implication establishes no relationship between your action and the universe's utility; it is simply a boolean function of those two constant values.
What am I missing?
13: the entire level 4 Tegmark multiverse.
14: newly discovered level 5 Tegmarkian multiverse.
15: discover ordinal hierarchy of Tegmark universes, discover method of constructing the set of all ordinals without contradiction, create level n Tegmark universe for all n
Incorrect is a suspected Will Newsome sockpuppet and I've been told to - er - fire at will.
It was supposed to be a sarcastic response about being too strict with definitions but obviously didn't end up being funny.
I am not a Will Newsome sockpuppet. I'll refrain from making the lower quality subset of my comments henceforth.
He's really wondering whether the voxel-space is a directed graph or whether up∘down=down∘up=identity (and for left/right too). Movement could be commutative with up∘down≠identity.
Consider
voxels = {a, b}
left(a) = a
right(a) = a
up(a) = a
down(a) = a
left(b) = a
right(b) = a
up(b) = a
down(b) = a
For every f in {left, right, up, down}, let g be the corresponding opposite function in {right, left, down, up}. Then for all x in {a, b}:
f(g(x)) = g(f(x)) = a
But up(down(b)) = a, whereas identity(b) = b.
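The counterexample above can be checked mechanically; this sketch encodes the two voxels and four moves as lookup tables:

```python
# The two-voxel space where every move lands on a.
moves = {
    "left":  {"a": "a", "b": "a"},
    "right": {"a": "a", "b": "a"},
    "up":    {"a": "a", "b": "a"},
    "down":  {"a": "a", "b": "a"},
}
opposite = {"left": "right", "right": "left", "up": "down", "down": "up"}

# Every move commutes with its opposite (both compositions land on a)...
for f, g in opposite.items():
    for x in ("a", "b"):
        assert moves[f][moves[g][x]] == moves[g][moves[f][x]] == "a"

# ...yet up∘down is not the identity: up(down(b)) = a, while identity(b) = b.
assert moves["up"][moves["down"]["b"]] == "a" != "b"
print("commutes with opposites, but up∘down != identity")
```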
What mathematics to learn
There is, of course, Khan Academy for fundamentals. We have already had a discussion on How to learn math.
What resources exist detailing which mathematics to learn in what order? What resources exist that explain the utility of different mathematical subfields for the purpose of directing studies?
But the axiom schema of induction does not completely exclude nonstandard numbers. Sure, if I prove P(0) and, for all n, P(n) => P(n+1), then I can conclude: for all n, P(n); this excludes the possibility of a nonstandard number n for which not P(n). But there are some properties which cannot be proved true or false in Peano Arithmetic, and whose truth values can therefore be altered by the presence of nonstandard numbers.
Can you give me a property P which is true along the zero-chain but necessarily false along a separated chain that is infinitely long in both directions? I do not believe this is possible, but I may be mistaken.