Eliezer_Yudkowsky comments on Total Nano Domination - Less Wrong

Post author: Eliezer_Yudkowsky 27 November 2008 09:54AM

Comment author: Eliezer_Yudkowsky 28 November 2008 03:14:14AM 3 points

I can accelerate the working-out of FAI theory by applying my own efforts and by recruiting others. Messing with macro tech developmental forces to slow other people down doesn't seem to me to be something readily subject to my own decision.

I don't trust that human intelligence enhancement can beat AI of either sort into play - it seems to me to be running far behind at the moment. So I'm not willing to slow down and wait for it.

Regarding the CIA thing, I have ethics.

It's worth noting that even if you consider, say, gentle persuasion: in a right-tail problem, eliminating 90% of the researchers doesn't get you 10 times as much time, just one standard deviation's worth of time or so.
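The right-tail point can be illustrated with a toy Monte Carlo sketch (all numbers here are hypothetical assumptions, not anything from the comment): if each of N researchers independently reaches the breakthrough at a normally distributed time, the field's arrival time is the minimum over N draws, and cutting N by a factor of 10 shifts that minimum by only a fraction of a standard deviation, not a factor of 10.

```python
import random
import statistics

random.seed(0)

MEAN, SD = 30.0, 5.0   # hypothetical: mean and s.d. (in years) of one researcher's breakthrough time
TRIALS = 5000          # Monte Carlo repetitions

def earliest_breakthrough(n_researchers):
    """Estimate the expected time of the FIRST breakthrough among
    n independent researchers, each Normal(MEAN, SD)."""
    return statistics.fmean(
        min(random.gauss(MEAN, SD) for _ in range(n_researchers))
        for _ in range(TRIALS)
    )

t_full = earliest_breakthrough(1000)   # the whole field
t_cut  = earliest_breakthrough(100)    # after removing 90% of researchers
delay_in_sds = (t_cut - t_full) / SD

print(f"delay bought by removing 90%: {t_cut - t_full:.2f} years "
      f"(about {delay_in_sds:.2f} standard deviations)")
```

With these made-up parameters the delay comes out to well under one-tenth of the original timeline; it tracks the extreme-value approximation sigma * (sqrt(2 ln N) - sqrt(2 ln(N/10))), which is on the order of one standard deviation regardless of how large N is.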

The sort of theory that goes into hacking up an unFriendly AI and the sort of theory that goes into Friendly AI are pretty distinct as subjects.