billswift comments on Nonsentient Optimizers - Less Wrong

16 Post author: Eliezer_Yudkowsky 27 December 2008 02:32AM

Comment author: billswift 28 December 2008 03:36:57PM 0 points

"You've already said the friendly AI problem is terribly hard, and there's a large chance we'll fail to solve it in time. Why then do you keep adding these extra minor conditions on what it means to be "friendly", making your design task all that harder?"

I think Eliezer regards these not as extra conditions but as sub-problems that are **necessary** to the creation of a Friendly AI.