Nick_Tarleton comments on Nonsentient Optimizers - Less Wrong

Post author: Eliezer_Yudkowsky 27 December 2008 02:32AM

Comment author: Nick_Tarleton 27 December 2008 09:16:09AM 1 point

Unknown, what's the difference between a "regular chess playing program" and an "AI that plays chess"? Taboo "intelligence", and think of it as pure physics and math. An "AI" is a physical system that moves the universe into states where its goals are fulfilled; a "Friendly AI" is such a system whose goals accord with human morality. Why would there be no such systems that we can prove never do certain things?

(Not to mention that, as I understand it, an FAI's outputs aren't rigidly restricted in the sense of Asimov's laws; it can kill people in the unlikely event that that's the right thing to do.)