John_Maxwell_IV comments on Brainstorming additional AI risk reduction ideas - Less Wrong

Post author: John_Maxwell_IV 14 June 2012 07:55AM


Comment author: John_Maxwell_IV 15 June 2012 05:57:51PM 0 points

Someone sent me this anonymous suggestion:

Well, let’s consider AIXI-tl properly, mathematically, without the 'what would I do in its shoes' idiocy and without the incompetent “let’s just read the verbal summary” approach. The AIXI-tl

1: looks for a way to get the button pressed.

2: actually, not even that; it does not relate itself to the representation of itself inside its representation of the world, and can’t model the world going on without itself. It can’t understand death. Its internal model is dualist.
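For reference, point 1 corresponds to the action rule of Hutter's AIXI (a sketch of the standard definition, not something given in the comment; AIXI-tl is the time- and length-bounded approximation of the same rule). The agent chooses the action that maximizes expected future reward under a universal mixture over environment programs q:

```latex
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_t + \cdots + r_m \right]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine and ℓ(q) is the length of environment program q. Note that q models everything except the agent itself: the agent's own computation sits outside every q, which is the formal sense in which the model is "dualist" (point 2), and reward r enters only as a percept symbol, which is where the wireheading problem comes from.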

It is an AI that won’t stop you from shutting it down. If you try to resolve 2, you hit another very hard problem: wireheading.

Those two problems naturally stand in the way of creating an AI that kills everyone, or an AI that wants to bring about heaven on earth, but they are entirely irrelevant to the creation of useful AI in general. Thus the alternative approach to AI risk reduction is to withdraw all funding from SI or any other organization working on philosophy of mind for AI, since those organizations create the risk of an AGI that solves the two very hard problems which currently prevent arbitrary useful AI from killing us all.

Comment author: jacob_cannell 16 June 2012 07:22:00AM 0 points

Just a guess, but this sounds very much like.