gRR comments on Muehlhauser-Goertzel Dialogue, Part 1 - Less Wrong Discussion

Post author: lukeprog 16 March 2012 05:12PM

Comment author: gRR 17 March 2012 12:52:20PM 0 points

It's a direct logical consequence, isn't it? If one doesn't have a precise understanding of the AI's goals, then whatever goals one imparts to the AI won't be precise either. And they must be precise, or (step 3) => disaster.

Comment author: Vladimir_Nesov 17 March 2012 12:54:53PM 1 point

He doesn't agree that they must be precise, so I guess step 3 is also out.

Comment author: gRR 17 March 2012 01:34:17PM 2 points

He can't think that god-powerfully optimizing for a forever-fixed, not-precisely-correct goal would lead to anything but disaster. Not if he has ever seen a non-human optimization process at work.

So he can only think precision is not important if he believes that
(1) human values are an attractor in the goal space, and any reasonably close goals would converge there before solidifying, and/or
(2) acceptable human values form a large convex region within the goal space, and optimizing for any point within this region is correct.

Without a better understanding of AI goals, both can only be articles of faith...
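
A toy sketch of the worry (purely illustrative: the threshold utility, the penalty-free proxy, and the hill-climbing optimizer below are all assumptions of mine, not anything specified in this thread):

    import numpy as np

    # Illustrative assumptions: an "outcome" is one number in [0, 10];
    # the true goal rewards it up to a threshold and punishes overshoot,
    # while the imprecise proxy goal omits the overshoot penalty.
    rng = np.random.default_rng(0)

    def true_value(x):
        return x if x <= 5 else 10 - x   # overshooting is a disaster

    def proxy_value(x):
        return x                         # "not precisely correct" goal

    def optimize(value, steps, step_size=0.5):
        # Crude random-search hill climbing; more steps = more
        # optimization power applied to the (fixed) goal.
        x = 0.0
        for _ in range(steps):
            candidate = float(np.clip(x + step_size * rng.normal(), 0, 10))
            if value(candidate) > value(x):
                x = candidate
        return x

    for steps in (5, 50, 5000):
        x = optimize(proxy_value, steps)
        print(f"steps={steps:>5}  proxy={proxy_value(x):5.2f}  "
              f"true={true_value(x):5.2f}")

Weak optimization of the slightly-wrong goal looks harmless; god-powerful optimization of the same goal drives the true value toward zero. Precision stops mattering only if something like (1) or (2) holds.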

Comment author: Alex_Altair 19 March 2012 01:56:08AM 0 points

From the conversation with Luke, he apparently accepts it on faith.