John_Maxwell_IV comments on A Primer On Risks From AI - Less Wrong

Post author: XiXiDu, 24 March 2012 02:32PM


Comment author: John_Maxwell_IV, 26 March 2012 06:54:35PM

At what level of the disagreement hierarchy would you rate this comment of yours?

http://www.paulgraham.com/disagree.html

It looks like mostly DH3 to me, with a splash of DH1 in implying that anyone who suggests that our future isn't guaranteed to be bright must be selling something.

There's a bit of DH4 in implying that this is an uncommon position, which implies very weakly that it's incorrect. I don't think this is a very uncommon position though:
* http://www.ted.com/talks/lang/en/martin_rees_asks_is_this_our_final_century.html
* http://www.ted.com/talks/stephen_petranek_counts_down_to_armageddon.html
* http://www.ted.com/talks/jared_diamond_on_why_societies_collapse.html
* http://www.wired.com/wired/archive/8.04/joy.html

And Stephen Hawking on AI:
* http://www.zdnet.com/news/stephen-hawking-humans-will-fall-behind-ai/116616

Comment author: timtyler, 26 March 2012 07:20:53PM

That's a fair analysis of those two lines - though I didn't say "anyone".

As evidence for "uncommon", I would cite the Global Catastrophic Risks Survey results - presumably a survey of the ultra-paranoid. The figures they came up with were:

  • Number killed by molecular nanotech weapons: 5%.
  • Total killed by superintelligent AI: 5%.
  • Overall risk of extinction prior to 2100: 19%.

Comment author: John_Maxwell_IV, 26 March 2012 10:49:53PM

Interesting data, thanks.