Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky | 11 May 2012 04:31AM | 256 points


Comment author: Eliezer_Yudkowsky 17 May 2012 09:01:24PM 14 points

That's the kind of probability I would've assigned to EURISKO destroying the world back when Lenat was the first person ever to try to build anything self-improving. For a random guy on the Internet it's off by... maybe five orders of magnitude? I would expect a pretty tiny fraction of all worlds to have the names of homebrew projects carved on their tombstones, and there are many random people on the Internet claiming to have AGI.

People like this are significant, not because of their chances of creating AGI, but because of what their inability to stop or take any serious precautions, despite their belief that they are about to create AGI, tells us about human nature.

Comment author: TheOtherDave 17 May 2012 10:35:15PM 2 points

Understanding "random guy on the Internet" to mean something like an Internet user about whom all I know is that they are interested in building AGI and willing to put some concerted effort into the project... hrm... yeah, I'll accept e-7 as within my range.

My estimate for an actual random person on the Internet building AGI in, say, the next decade, has a ceiling of e-10 or so, but I don't have a clue what its lower bound is.

That said, I'm not sure how well the willingness of a "random guy on the Internet" (in the first sense) to try to build AGI without taking precautions correlates with the willingness of someone whose chances are orders of magnitude higher to do so.

Then again, we have more compelling lines of evidence leading us to expect humans to not take precautions.

Comment author: [deleted] 18 May 2012 09:31:15AM 0 points

My estimate for an actual random person on the Internet building AGI in, say, the next decade, has a ceiling of e-10 or so, but I don't have a clue what its lower bound is.

(I had to read that three times before getting why that number was 1000 times smaller than the other one, because I kept on misinterpreting “random person”. Try “randomly-chosen person”.)

Comment author: TheOtherDave 18 May 2012 02:31:51PM 0 points

I have no idea what you understood "random person" to mean, if not a randomly chosen person. I'm also curious now as to whether whatever-that-is is what EY meant in the first place.

Comment author: [deleted] 18 May 2012 02:52:55PM 0 points

A stranger, especially one behaving in weird ways; this appears to me to be the most common meaning of the word in 21st-century English when applied to a person. (Older speakers might be unfamiliar with it, but the median LWer is 25 years old, as of the latest survey.) I had also taken the indefinite article to be an existential quantifier; hence I had effectively interpreted the statement as "at least one actual strange person on the Internet building AGI in the next decade", for which I thought such a low probability would be ridiculous.
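The gap between the two readings is easy to see with a quick back-of-the-envelope sketch. This is purely illustrative: the per-person ceiling of 1e-10 comes from the thread above, but the count of attempters is a made-up number for the sake of the arithmetic.

```python
# Two readings of "a random person on the Internet building AGI".
p_per_person = 1e-10   # ceiling from the thread: one randomly-chosen person succeeding
n_attempters = 10_000  # hypothetical number of "strange people" actually trying

# Reading 1: a specific randomly-chosen person succeeds.
p_specific = p_per_person

# Reading 2 (existential): at least one of the attempters succeeds.
p_at_least_one = 1 - (1 - p_per_person) ** n_attempters

# For small p, reading 2 is approximately n * p, i.e. 10_000 * 1e-10 = 1e-6,
# four orders of magnitude larger than reading 1.
print(p_specific, p_at_least_one)
```

Under the existential reading, assigning the same tiny number to the whole population as to one person really would look ridiculous, which is the misinterpretation the parent comment describes.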

Comment author: TheOtherDave 18 May 2012 03:06:34PM 0 points

Thanks for clarifying.

Comment author: JoshuaZ 17 May 2012 09:04:37PM 2 points

but because of what their inability to stop or take any serious precautions, despite their belief that they are about to create AGI, tells us about human nature.

Are these in any way a representative sample of normal humans? In order to be in this category one generally needs to be pretty high on the crank scale along with some healthy Dunning-Kruger issues.

Comment author: Eliezer_Yudkowsky 17 May 2012 09:12:35PM 5 points

That's always been the argument - that future AGI scientists won't be as crazy as the lunatics presently doing it, that the current crowd of researchers is self-selected for incaution - but I wouldn't put too much weight on it. Incaution seems like a very human behavior; some of the smarter ones, with millions of dollars, don't seem of below-average competence in any other way; and the VCs funding them are similarly incapable of backing off, even when they say they expect human-level AGI to be created.

Comment author: JoshuaZ 17 May 2012 09:13:54PM 0 points

Sorry, I'm confused. By "people like this" did you mean people like FinalState or did you mean professional AI researchers? I interpreted it as the first.

Comment author: Eliezer_Yudkowsky 17 May 2012 09:18:18PM 2 points

AGI researchers sound a lot like FinalState when they think they'll have AGI cracked in two years.