Eliezer_Yudkowsky comments on Failure By Analogy - Less Wrong

Post author: Eliezer_Yudkowsky 18 November 2008 02:54AM


Comment author: Eliezer_Yudkowsky 18 November 2008 07:44:02AM 1 point

I agree that if you were willing to (throw ethics out the window and) run flawed uploads to debug them, then the uploading project would eventually succeed to arbitrary precision - even if you (artificially?) blocked off the possibility of understanding any of the information above a certain level.

Since both uploading and fully-understood AGI will get there "eventually" given the indefinite prolongation of the human idiom of scientific progress, the question is which is likely to get there first (or intelligence enhancement via neurohacking, etc.).

I would also question whether any neural system or circuit (such as the running robot described earlier) has ever reproduced a useful biological function with the researchers involved not understanding how the higher level works, just studying the individual biological neural behaviors to very fine accuracy. I doubt it has yet happened, but am willing to be corrected.