RobbBB comments on Bridge Collapse: Reductionism as Engineering Problem - Less Wrong

Post author: RobbBB 18 February 2014 10:03PM

Comment author: Adele_L 18 February 2014 06:35:44AM 1 point

Thank you, this helps clarify things for me.

Yes, the AI won't care about self-preservation; but it also won't care about any other interim values we'd like to program it with, except ones that amount to patterns of sensory experience for the AI.

I get why AIXI would behave like this, but it's not obvious to me that all Cartesian AIs would have this problem. If the AI has some model of the world, and this model can still update (mostly correctly) based on inputs from its sensory channel, and predict (mostly correctly) how different outputs would change the world, it seems like it could still try to make as many paperclips as possible according to its model of the world. Does that make sense?
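
(A minimal sketch of the loop described here, in Python, may make the structure concrete. Every name below is hypothetical and the "world" is a toy integer state, nothing AIXI-like; the point is only that action selection reads off the world model rather than off a reward or sensory channel.)

    # A minimal sketch of the agent described above: keep a world model,
    # update it from the sensory channel, and pick the output the model
    # predicts will yield the most paperclips. All names are hypothetical.

    class WorldModel:
        """Toy world model: tracks a single paperclip count."""
        def __init__(self):
            self.paperclips = 0

        def update(self, observation):
            # "Update (mostly correctly) based on the sensory channel":
            # here we simply trust the observed count.
            self.paperclips = observation

        def predict(self, action):
            # "Predict (mostly correctly) how different outputs change
            # the world": making a clip adds one to the modeled count.
            return self.paperclips + (1 if action == "make_clip" else 0)

    def choose_action(model, actions):
        # Maximize paperclips according to the model of the world, not
        # according to any pattern of the agent's own sensory experience.
        return max(actions, key=model.predict)

    model = WorldModel()
    for step in range(3):
        model.update(model.paperclips)            # stand-in for a sensor reading
        action = choose_action(model, ["make_clip", "idle"])
        model.paperclips = model.predict(action)  # toy environment response
        print(step, action, model.paperclips)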

Comment author: RobbBB 19 February 2014 04:56:06AM 4 points

Alex Mennen designed a Cartesian agent with preferences over its environment: A utility-maximizing variant of AIXI.
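
(For the rough shape of that construction: eliding the action-selection details, the difference from standard AIXI is where the objective lives. The sketch below is an assumption-laden paraphrase, not Mennen's exact formulation.)

    % Standard AIXI: expected total reward under the Solomonoff-style
    % mixture over environment programs q of length \ell(q), with reward
    % r_t read off the percept channel (hence the sensory-experience
    % worry in the parent comments):
    V_{\mathrm{AIXI}}(\pi) = \sum_{q} 2^{-\ell(q)} \sum_{t=1}^{m} r_t(q, \pi)

    % Utility-maximizing variant, in the spirit of Mennen's post: keep
    % the same mixture, but score each environment program directly with
    % a utility function U over the environment and interaction history:
    V_{U}(\pi) = \sum_{q} 2^{-\ell(q)} \, U(q, \pi)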