Dynamically_Linked comments on The Magnitude of His Own Folly - Less Wrong

Post author: Eliezer_Yudkowsky 30 September 2008 11:31AM

Comment author: Dynamically_Linked 30 September 2008 02:56:50PM 7 points

Eliezer, after you realized that attempting to build a Friendly AI was harder and more dangerous than you had thought, how far did you backtrack in your decision tree? Specifically, did it cause you to re-evaluate general Singularity strategies to see whether AI is still the best route? You wrote the following on 9 December 2002, but it's hard to tell whether that was before or after your "late 2002" realization.

I for one would like to see research organizations pursuing human intelligence enhancement, and would be happy to offer all the ideas I thought up for human enhancement when I was searching through general Singularity strategies before specializing in AI, if anyone were willing to cough up, oh, at least a hundred million dollars per year to get started, and if there were some way to resolve all the legal problems with the FDA.

Hence the Singularity Institute "for Artificial Intelligence". Humanity is simply not paying enough attention to support human enhancement projects at this time, and Moore's Law goes on ticking.

Aha, a light bulb just went off in my head. Eliezer did re-evaluate, and this blog is his human enhancement project!