khafra comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 points | Post author: HoldenKarnofsky | 11 May 2012 04:31AM


Comment author: khafra | 16 May 2012 06:52:23PM | 5 points

I am in the final implementation stages of the general intelligence algorithm.

Do you mean "I am in the final writing stages of a paper on a general intelligence algorithm?" If you were in the final implementation stages of what LW would recognize as the general intelligence algorithm, the very last thing you would want to do is mention that fact here; and the second-to-last thing you'd do would be to worry about personal credit.

Comment author: FinalState | 16 May 2012 07:19:06PM | -2 points

I am open to arguments as to why that might be the case, but unless you also have the GIA, I should be telling you what things I would want to do first and last. I don't really see what the risk is, since I haven't given anyone any unique knowledge that would allow them to follow in my footsteps.

A paper? I'll write that in a few minutes after I finish the implementation. Problem statement -> pseudocode -> implementation. I am just putting some finishing touches on the data structure cases I created to solve the problem.

Comment author: Bugmaster | 16 May 2012 10:20:16PM | 4 points

I don't really see what the risk is...

As far as I understand, the SIAI folks believe that the risk is, "you push the Enter key, your algorithm goes online, bootstraps itself to transhuman superintelligence, and eats the Earth with nanotechnology" (nanotech is just one possibility among many, of course). I personally don't believe we're in any danger of that happening any time soon, but these guys do. They have made it their mission in life to prevent this scenario from happening. Their mission and yours appear to be in conflict.