christopherj comments on The Problem with AIXI - Less Wrong

24 Post author: RobbBB 18 March 2014 01:55AM


Comment author: christopherj 08 April 2014 04:53:44AM 1 point

I'm having trouble understanding how something generally intelligent in every respect, except for failing to understand death or that it has a physical body, could be incapable of ever learning those things, or at least of acting indistinguishably from an agent that does understand them.

For example, how would AIXI act if given the following as part of its utility function?

1) The utility function gets multiplied by zero should a certain computer cease to function.

2) The utility function gets multiplied by zero should certain bits be overwritten, except if a sanity check is passed first.
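The proposed modification can be sketched as a wrapper around the agent's base utility. This is a minimal illustration, not anything from the AIXI formalism; the function name `guarded_utility` and all of its parameters are hypothetical stand-ins for the two conditions above.

```python
def guarded_utility(base_utility, computer_running, bits_intact, sanity_check_passed):
    """Hypothetical sketch of the commenter's modified utility function.

    Zeroes out utility if the designated computer has stopped running,
    or if the protected bits were overwritten without a sanity check
    having passed first; otherwise returns the base utility unchanged.
    """
    if not computer_running:
        return 0.0  # condition 1: the computer has ceased to function
    if not bits_intact and not sanity_check_passed:
        return 0.0  # condition 2: bits overwritten with no sanity check
    return base_utility
```

Under this wrapper, any action whose predicted outcome destroys the computer or clobbers the protected bits (without the sanity check) scores zero, so the agent avoids such outcomes regardless of whether it "understands" what they mean.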

It seems to me that such an AI would act as if it had a genocidally dangerous fear of death, even if it doesn't actually understand the concept.

Comment author: Quill_McGee 25 March 2015 01:00:51AM 0 points

That AI doesn't drop an anvil on its head (I think...), but it also doesn't self-improve.