John_Maxwell_IV comments on A Primer On Risks From AI - Less Wrong

15 Post author: XiXiDu 24 March 2012 02:32PM




Comment author: John_Maxwell_IV 25 March 2012 08:33:57PM 2 points [-]

It sounds to me like you are favoring the "everything's going to be all right" conclusion quite heavily. You act like everything is going to be all right by default, and your arguments for why things will be all right aren't very sophisticated.

> Then, if you identify that as being a problem, you redesign your test harness.

And we will certainly identify it as being a problem because humans know everything and they never make mistakes.

> I'm doubting whether the situation with no historical precedent will ever come to pass.

I see, similar to how housing prices will never drop? Have you read up on black swans?

We are venturing into uncharted territory here. Historical precedents provide very weak information.

Comment author: timtyler 26 March 2012 11:22:43AM *  1 point [-]

> > I'm doubting whether the situation with no historical precedent will ever come to pass.
>
> I see, similar to how housing prices will never drop?

No.

> Have you read up on black swans?

Yes.

> I don't think it is likely that the world will end in accidental apocalypse in the next century.

Few do - AFAICS - and the main proponents of the idea are usually selling something.

Comment author: John_Maxwell_IV 26 March 2012 06:54:35PM 0 points [-]

> What level on the disagreement hierarchy would you rate this comment of yours?

http://www.paulgraham.com/disagree.html

It looks like mostly DH3 to me, with a splash of DH1 in implying that anyone who suggests that our future isn't guaranteed to be bright must be selling something.

There's a bit of DH4 in implying that this is an uncommon position, which implies very weakly that it's incorrect. I don't think this is a very uncommon position though:
* http://www.ted.com/talks/lang/en/martin_rees_asks_is_this_our_final_century.html
* http://www.ted.com/talks/stephen_petranek_counts_down_to_armageddon.html
* http://www.ted.com/talks/jared_diamond_on_why_societies_collapse.html
* http://www.wired.com/wired/archive/8.04/joy.html

And Stephen Hawking on AI:
* http://www.zdnet.com/news/stephen-hawking-humans-will-fall-behind-ai/116616

Comment author: timtyler 26 March 2012 07:20:53PM *  0 points [-]

That's a fair analysis of those two lines - though I didn't say "anyone".

For evidence that it is "uncommon", I would cite the Global Catastrophic Risks Survey results - presumably a survey of the ultra-paranoid. The figures they came up with were:

  • Number killed by molecular nanotech weapons: 5%.
  • Total killed by superintelligent AI: 5%.
  • Overall risk of extinction prior to 2100: 19%.

Comment author: John_Maxwell_IV 26 March 2012 10:49:53PM 0 points [-]

Interesting data, thanks.