Nick_Tarleton comments on The Magnitude of His Own Folly - Less Wrong

Post author: Eliezer_Yudkowsky, 30 September 2008 11:31AM


Comment author: Nick_Tarleton, 02 October 2008 12:52:00AM, 2 points

"For starters, saying that he wants to save humanity contradicts this."

Does not follow.

"what an AI society would look like"

No such thing, for many (most?) possible AIs: just a monolithic maximizer.

"Eliezer's plan seems to enslave AIs forever for the benefit of humanity; and this is morally reprehensible"

See Michael Vassar: RPOP "slaves"

"Eliezer is paving the way for a confrontational relationship between humans and AIs, based on control"

See CFAI: Beyond the adversarial attitude

"Planning to keep AIs enslaved forever is unworkable; it would hold us back from becoming AIs ourselves"

See "Could I become superintelligent under a Sysop?"