Cyan comments on Solomonoff Cartesianism - Less Wrong

Post author: RobbBB | 02 March 2014 05:56PM | 21 points




Comment author: Cyan | 05 March 2014 04:12:03PM | 1 point

"you and programs like you make up a small amount of measure in AIXI's beliefs"

I understand that this is the claim, but my intuition is that, supposing AIXI has observed a long enough sequence to have as good an idea as I do of how the world is put together, I and programs like me (e.g., "naturalized induction") are the shortest of the survivors, and hence dominate AIXI's predictions. Basically, I'm positing that after a certain point, AIXI will notice that it is embodied and doesn't have a soul, for essentially the same reason that I have noticed those things: they are implications of the simplest explanations consistent with the observations I have made so far.
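The "shortest survivors dominate" intuition can be sketched numerically. This is a toy illustration, not AIXI itself: the program names and description lengths are made up for the example; the only real ingredient is the Solomonoff-style prior that weights each consistent program by 2 to the power of minus its length.

```python
# Toy sketch (not AIXI): among the programs still consistent with the
# observed sequence ("survivors"), weight each by 2**(-length) and
# normalize. The shortest survivors dominate the posterior.

def posterior_weights(survivors):
    """survivors: dict mapping program name -> description length in bits.
    Returns posterior weights under the 2**(-length) prior, normalized
    over the surviving programs."""
    raw = {name: 2.0 ** -length for name, length in survivors.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

# Hypothetical lengths: an embodied "naturalized" model vs. a Cartesian
# model that needs 10 extra bits of machinery to fit the same data.
survivors = {"naturalized": 1000, "cartesian": 1010}
weights = posterior_weights(survivors)
# A 10-bit penalty makes the Cartesian model 2**(-10) (about 1/1024)
# as probable, so the shorter model carries almost all the measure.
```

The point of the sketch is only that a constant-bit complexity penalty translates into an exponential probability penalty, which is why, on this intuition, the shortest consistent explanation swamps the alternatives.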

Comment author: cousin_it | 06 March 2014 03:42:23PM | 2 points

Why couldn't it also be a program that has predictive powers similar to yours, but doesn't care about avoiding death?

Comment author: Cyan | 06 March 2014 11:33:40PM | 1 point

Well, I guess it could, but that isn't the claim being put forth in the OP.

(Unlike some around these parts, I see a clear distinction between an agent's posterior distribution and the agent's posterior-utility-maximizing part. From the outside, expected-utility-maximizing agents form an equivalence class such that all agents with the same <product of prior and utility function> are equivalent, and we need only consider the quotient space of agents; from the inside, the epistemic and value-laden parts of an agent can be thought of separately.)
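The quotient-space point can be made concrete. In this sketch (the hypotheses, actions, and numbers are invented for illustration), an expected-utility maximizer's choice depends only on the product prior(h) × utility(h, a), so rescaling the prior by any positive per-hypothesis factor while dividing the utility by the same factor yields an observationally equivalent agent:

```python
# Toy sketch: two agents with different priors and different utility
# functions, but the same product prior(h) * utility(h, a), make
# identical expected-utility choices.

def best_action(prior, utility, actions, hypotheses):
    """Pick the action a maximizing sum over h of prior[h] * utility[(h, a)]."""
    return max(actions,
               key=lambda a: sum(prior[h] * utility[(h, a)] for h in hypotheses))

hypotheses = ["h1", "h2"]
actions = ["a1", "a2"]

prior_1 = {"h1": 0.8, "h2": 0.2}
utility_1 = {("h1", "a1"): 1.0, ("h1", "a2"): 0.0,
             ("h2", "a1"): 0.0, ("h2", "a2"): 3.0}

# Agent 2: prior rescaled per-hypothesis, utility rescaled inversely,
# so the products (and hence all decisions) are unchanged.
scale = {"h1": 0.25, "h2": 4.0}
prior_2 = {h: prior_1[h] * scale[h] for h in hypotheses}
utility_2 = {(h, a): utility_1[(h, a)] / scale[h]
             for h in hypotheses for a in actions}
# Both agents pick the same action, since only the product matters.
```

From the outside, then, no behavioral test distinguishes the two agents, which is why only the quotient by this rescaling matters; the inside view is what lets one still speak of the epistemic part and the value-laden part separately.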

Comment author: Nisan | 05 March 2014 07:10:28PM | 1 point

Oh, I see what you're saying now.