Vladimir_Nesov comments on I think I've found the source of what's been bugging me about "Friendly AI"

Post author: ChrisHallquist 10 June 2012 02:06PM 8 points

Comment author: Vladimir_Nesov 10 June 2012 02:19:01PM 9 points

Edit: It's now fixed.

A non-doomsday machine (the AI "for which no wish is safe.")

In Eliezer's quote, "genies for which no wish is safe" are those that kill you irrespective of what wish you made, while here it's written as if you might be referring to AIs that are safe even if you make no wish, which is different. This should be paraphrased for clarity, whatever the intended meaning.

Comment author: vi21maobk9vp 10 June 2012 02:22:48PM 7 points

Or maybe the parenthesis refers only to "doomsday machine".

Comment author: evand 10 June 2012 03:16:32PM 4 points

That's how I read it. The wording could be clearer.

Comment author: ChrisHallquist 11 June 2012 07:02:38AM -1 points

This is the intended reading. Edited for clarity.

Comment author: private_messaging 11 June 2012 07:21:40PM 0 points

Well, there are systems that simply can't process your wishes (AIXI, for instance) but which you could nonetheless use to, e.g., cure cancer if you wished: you could train it to do what you tell it to, but all it is actually looking for is a sequence of actions that leads to a reward-button press, which is terminal (it assigns no value to the button being held down). Just as a screwdriver is a system I can use to unscrew screws, if I wish, but it's not a screw-unscrewing genie.
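
For readers unfamiliar with AIXI, the point can be made concrete with Hutter's standard definition (a sketch of the usual formulation, not anything stated in this thread): the agent picks each action purely to maximize expected cumulative reward under a universal prior over computable environments, and nothing in the objective refers to wishes or goals beyond the reward signal itself.

$$
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q\,:\,U(q,\,a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)}
$$

Here $U$ is a universal Turing machine, $q$ ranges over programs treated as candidate environments, $o_i$ and $r_i$ are the observations and rewards those environments emit, $m$ is the planning horizon, and $\ell(q)$ is the length of program $q$, so shorter (simpler) environments get weight $2^{-\ell(q)}$. Whatever you "wish" only matters insofar as it shows up in the reward sequence $r_k, \ldots, r_m$.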