RobbBB comments on Solomonoff Cartesianism - Less Wrong

Post author: RobbBB 02 March 2014 05:56PM




Comment author: RobbBB 02 March 2014 03:09:13AM 1 point

Thanks for your comments, V_V. I apologize for not engaging with them much, but I wanted to get introductory material on AIXI (and the anvil problem, etc.) posted before wading into the debate, so more people could benefit from seeing it.

Concerning immortalism: No living human has ever experienced death, but we successfully predict and avoid death, and not just because evolution has programmed us to avoid things that looked threatening in our ancestral environment. We look at other agents and generalize from their case to our own.

Concerning preference solipsism: See footnote 10. Human-style (irrational) wireheading is different from AIXI-style (rational) reward channel seizure. Cartesians can partly solve this problem, but not completely, because some valuable and disvaluable states of affairs aren't in their hypothesis space.

Comment author: V_V 02 March 2014 02:59:13PM 2 points

No living human has ever experienced death, but we successfully predict and avoid death, and not just because evolution has programmed us to avoid things that looked threatening in our ancestral environment. We look at other agents and generalize from their case to our own.

We have some innate repulsion towards "scary things" (cliffs, snakes, etc.), but more generally, we have an innate concept of being dead, and we assume that states of the world where we are dead yield low reward, even though we never get to experience them. Then we use our induction abilities to learn how our body works and what can make it dead.

Concerning preference solipsism: See footnote 10. Human-style (irrational) wireheading is different from AIXI-style (rational) reward channel seizure. Cartesians can partly solve this problem, but not completely, because some valuable and disvaluable states of affairs aren't in their hypothesis space.

If you consider wireheading in the more general sense of obtaining rewards in ways you were not intended to, then humans can do it too, both with respect to evolutionary fitness (e.g. by having sex with contraceptives) and with respect to social rewards (e.g. Campbell's law).