Comment author: Incorrect 22 September 2012 10:56:04PM *  0 points [-]


Comment author: jeremysalwen 21 September 2012 01:44:11AM 5 points [-]

I'll make sure to keep you away from my body if I ever enter a coma...

Comment author: Incorrect 21 September 2012 04:58:43AM 10 points [-]

Oh don't worry, there will always be those little lapses in awareness. Even supposing you hide yourself at night, are you sure you maintain your sentience while awake? Ever closed your eyes and relaxed, felt the cool breeze, and for a moment, forgot you were aware of being aware of yourself?

Comment author: Incorrect 19 September 2012 04:33:47PM *  1 point [-]

Bug: the comment form leaves a broken HTML fragment (an unterminated '<div class=' ... '" title="" />' tag) and a stuck "Submitting..." indicator in the rendered comment.

Comment author: Hawisher 17 September 2012 02:04:16AM 0 points [-]

"...if you lack a thousand-year-old brain that can make trillion-year plans, dying after a billion years doesn't sound sad to you"?

I'm confused as to what you're trying to say. Are you saying that dying after a billion years sounds sad to you?

Comment author: Incorrect 17 September 2012 04:19:29AM 1 point [-]

"Are you saying that dying after a billion years sounds sad to you?"

And therefore you would have a thousand-year-old brain that can make trillion-year plans.

Comment author: Vladimir_Nesov 15 September 2012 10:04:08AM *  2 points [-]

The problem is that the agent doesn't know what Myself() evaluates to, so it's not capable of finding an explicitly specified function whose domain is a one-point set with the single element Myself() and whose value on that element is Universe(). This function exists, but the agent can't construct it in an explicit enough form to use in decision-making. Let's work with the graph of this function, which can be seen as a subset of NxN and includes the single point (Myself(), Universe()).

Instead, the agent works with an extension of that function to the domain that includes all possible actions, and not just the actual one. The graph of this extension includes a point (A, U) for each statement of the form [Myself()=A => Universe()=U] that the agent managed to prove, where A and U are explicit constants. This graph, if collected for all possible actions, is guaranteed to contain the impossible-to-locate point (Myself(), Universe()), but also contains other points. The bigger graph can then be used as a tool for the study of the elusive (Myself(), Universe()), as the graph is in a known relationship with that point, and unlike that point it's available in a sufficiently explicit form (so you can take its argmax and actually act on it).
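The graph-based view above can be sketched in a few lines. This is a toy model only: the "proof search" is faked by a fixed table standing in for the implications [Myself()=A => Universe()=U] the agent managed to prove, and all names (proved_implications, decide) are hypothetical illustrations, not part of any real decision-theory implementation.

```python
# Toy sketch: instead of evaluating Myself() directly, the agent collects
# every pair (A, U) for which it can establish [Myself()=A => Universe()=U],
# then acts on the argmax of that collected graph. A real agent would
# enumerate proofs in PA; here the proved implications are simply listed.

def proved_implications():
    """Stand-in for a proof enumerator: yields one (action, utility) pair
    per provable statement of the form [Myself()=A => Universe()=U]."""
    yield ("cooperate", 10)
    yield ("defect", 5)
    yield ("do_nothing", 0)

def decide():
    """Take the argmax over the collected graph of (action, utility) points."""
    graph = dict(proved_implications())
    return max(graph, key=graph.get)

print(decide())  # the action whose proved consequent has the highest utility
```

The point of the sketch is that the agent never needs to know which (A, U) pair is the actual (Myself(), Universe()); it only needs the whole graph in explicit form so the argmax is computable.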

(Finding other methods of studying (Myself(), Universe()) seems to be an important problem.)

Comment author: Incorrect 15 September 2012 05:14:30PM 0 points [-]

I think I have a better understanding now.

For every statement S and for every action A, except the A that Myself() actually returns, PA will contain a theorem of the form (Myself()=A) => S, because falsehood implies anything. Unless Myself() doesn't halt, in which case the value of Myself() can be undecidable in PA and Myself's theorem prover won't find anything, consistent with the fact that Myself() doesn't halt.

I will assume Myself() is also filtering theorems by making sure Universe() has some minimum utility in the consequent.

If Myself() halts, then if the first theorem it finds has a false consequent, PA would be inconsistent (because Myself() will return A, proving the antecedent true, proving the consequent true). I guess if this were going to happen, then Myself() would be undecidable in PA.

If Myself() halts and the first theorem it finds has a true consequent then all is good with the world and we successfully made a good decision.

Whether or not ambient decision theory works on a particular problem seems to depend on the ordering of theorems it looks at. I don't see any reason to expect this ordering to be favorable.
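The ordering concern above can be made concrete with a toy example: an agent that acts on the first theorem it finds whose consequent clears a utility threshold can reach different decisions under different enumeration orders, even though the underlying set of provable statements is identical. The function name first_acceptable is hypothetical, chosen for illustration.

```python
# Toy illustration: a proof search that stops at the FIRST acceptable
# theorem is sensitive to the order in which theorems are enumerated.

def first_acceptable(theorems, min_utility):
    """Return the action from the first (action, utility) theorem whose
    consequent meets the threshold, mimicking a proof search cut short."""
    for action, utility in theorems:
        if utility >= min_utility:
            return action
    return None  # no acceptable theorem found

theorems = [("defect", 5), ("cooperate", 10)]

# Same theorem set, two enumeration orders, two different decisions:
print(first_acceptable(theorems, 5))                   # "defect"
print(first_acceptable(list(reversed(theorems)), 5))   # "cooperate"
```

Nothing in the toy model privileges the favorable ordering, which is exactly the worry: a correct decision here depends on the enumeration happening to surface the right theorem first.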

Comment author: Incorrect 15 September 2012 06:37:27AM 1 point [-]

How does ambient decision theory work with PA which has a single standard model?

It looks for statements of the form Myself()=C => Universe()=U

(Myself()=C) and (Universe()=U) should each have no free variables. This means that within a single model, their values should be constant. Thus such statements of implication establish no relationship between your action and the universe's utility; each is simply a boolean function of those two constant values.
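The worry can be phrased as a toy computation: if Myself() and Universe() are closed terms, then in one fixed model each denotes a single constant, and the truth value of [Myself()=C => Universe()=U] is just a boolean function of C, U, and those two constants, not a dependence of the utility on the action. The values and function name below are hypothetical stand-ins.

```python
# Toy model: fixed constants standing in for the closed terms Myself()
# and Universe() within a single model of PA.
MYSELF = 3      # the fixed (though unknown to the agent) value of Myself()
UNIVERSE = 10   # the fixed value of Universe()

def implication_truth(C, U):
    """Truth value of [Myself()=C => Universe()=U] in the fixed model."""
    return (MYSELF != C) or (UNIVERSE == U)

# For C != MYSELF the implication is vacuously true regardless of U:
print(implication_truth(0, -999))   # True (false antecedent)
print(implication_truth(3, 10))     # True (both sides true)
print(implication_truth(3, 7))      # False (true antecedent, false consequent)
```

The vacuously true cases are the "spurious" implications mentioned earlier in the thread: they are theorems of PA, yet they say nothing useful about the consequences of the action actually taken.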

What am I missing?

Comment author: Armok_GoB 12 September 2012 09:04:13PM 5 points [-]

13: the entire level 4 Tegmark multiverse.

14: newly discovered level 5 Tegmarkian multiverse.

Comment author: Incorrect 12 September 2012 09:13:01PM *  5 points [-]

15: discover ordinal hierarchy of Tegmark universes, discover method of constructing the set of all ordinals without contradiction, create level n Tegmark universe for all n

Comment author: Alicorn 12 September 2012 04:55:41AM 7 points [-]

Incorrect is a suspected Will Newsome sockpuppet and I've been told to - er - fire at will.

Comment author: Incorrect 12 September 2012 04:47:56PM *  9 points [-]

It was supposed to be a sarcastic response about being too strict with definitions but obviously didn't end up being funny.

I am not a Will Newsome sockpuppet. I'll refrain from making the lower quality subset of my comments henceforth.

Comment author: [deleted] 12 September 2012 03:31:06AM 2 points [-]

When a human moderator makes a judgment call.

Comment author: Incorrect 12 September 2012 03:41:15AM -12 points [-]

Define human, moderator, judgement call, makes, and "when".

Comment author: Alicorn 09 September 2012 04:12:04AM 4 points [-]

I have begun to suspect that Incorrect is a Will sockpuppet. Please cease to feed.

Comment author: Incorrect 09 September 2012 04:18:18AM 5 points [-]
