
timtyler comments on Q&A with Richard Carrier on risks from AI - Less Wrong Discussion

Post author: XiXiDu 13 December 2011 10:00AM




Comment author: timtyler 13 December 2011 01:03:18PM, 4 points

P(voluntary human extinction by replacement | any AGI at all) = 90%+

Hmm. Information-theoretic extinction seems pretty unlikely to me. Humanity will live on in the history "books" about major transitions - and the "books" at that stage will no doubt be pretty fancy - with multiple "instantiated" humans.

P(involuntary human extinction without replacement | badly done AGI type (a)) = < 10^-20

I don't think that's very likely either, but 10^-20 seems an overconfident probability to assign to it.

And even if I were to rank them, extinction by comet, asteroid or cosmological gamma ray burst vastly outranks any manmade cause. Even extinction by supervolcano vastly outranks any manmade cause.

A rather bizarre view, IMHO. I think few would agree with it.