JoshuaZ comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky, 11 May 2012 04:31AM (256 points)


Comment author: JoshuaZ 17 May 2012 09:04:37PM 2 points

"...but because of what their inability to stop or take any serious precautions, despite their belief that they are about to create AGI, tells us about human nature."

Are these people in any way a representative sample of normal humans? To fall into this category, one generally needs to be pretty high on the crank scale, with some healthy Dunning-Kruger issues on top.

Comment author: Eliezer_Yudkowsky 17 May 2012 09:12:35PM 5 points

That's always been the argument: future AGI scientists won't be as crazy as the lunatics presently doing it, because the current crowd of researchers is self-selected for incaution. But I wouldn't put too much weight on that. It seems like a very human behavior: some of the smarter ones, with millions of dollars behind them, don't seem to be of below-average competence in any other way, and the VCs funding them are similarly incapable of backing off even when they say they expect human-level AGI to be created.

Comment author: JoshuaZ 17 May 2012 09:13:54PM 0 points

Sorry, I'm confused. By "people like this" did you mean people like FinalState, or did you mean professional AI researchers? I interpreted it as the former.

Comment author: Eliezer_Yudkowsky 17 May 2012 09:18:18PM 2 points

AGI researchers sound a lot like FinalState when they think they'll have AGI cracked in two years.