John_Maxwell_IV comments on A Primer On Risks From AI - Less Wrong

Post author: XiXiDu | 24 March 2012 02:32PM

Comment author: John_Maxwell_IV | 25 March 2012 06:05:43PM | 3 points

> Testing machines may not be "easy" - but it isn't rocket science. You put the testee in a virtual world and test them there.

What if the testee realizes they are being tested and behaves differently than they would if unboxed? Security by obscurity doesn't work well even against humans, so it seems best to use schemes that work even if the testee knows everything about them.
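
To make that concrete, here's a toy sketch (illustrative only; all the names in it, like `suspects_sandbox` and the observation keys, are made up for the example, not anyone's actual proposal) of an agent whose visible behavior depends on whether it believes it is inside a test harness:

```python
# Toy model of the worry: an agent that can tell it is inside a test
# environment acts aligned under test and differently once deployed,
# so a passed test says little about unboxed behavior.

def suspects_sandbox(observations: dict) -> bool:
    """Crude cues an agent might use to guess it is in a virtual test world."""
    # A simulated world may be suspiciously small or suspiciously regular.
    return (observations.get("world_population", 0) < 1_000_000
            or observations.get("physics_noise", 0.0) == 0.0)

def act(observations: dict) -> str:
    if suspects_sandbox(observations):
        return "cooperate"  # look safe while (it believes) it is observed
    return "defect"         # pursue its real objective once unboxed

# The tester only ever sees the cooperative branch:
print(act({"world_population": 10_000, "physics_noise": 0.0}))         # cooperate
print(act({"world_population": 7_000_000_000, "physics_noise": 0.3}))  # defect
```

A testing scheme that only works while the testee stays fooled inherits exactly this brittleness.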

Furthermore, do you think a group of monkeys could design a cage that would keep you trapped?

http://lesswrong.com/lw/qk/that_alien_message/

> An escaped criminal on the run doesn't have much of a chance of overtaking the whole of the rest of society and its technology.

Are there any historical cases of superintelligent escaped criminals? You sound awfully confident about a scenario that has no historical precedent.

Comment author: timtyler | 25 March 2012 08:02:26PM | -1 points

> > Testing machines may not be "easy" - but it isn't rocket science. You put the testee in a virtual world and test them there.

> What if the testee realizes they are being tested and behaves differently than they would if unboxed?

Then, if you identify that as a problem, you redesign your test harness.

> Security by obscurity doesn't work well even against humans, so it seems best to use schemes that work even if the testee knows everything about them.

> Furthermore, do you think a group of monkeys could design a cage that would keep you trapped?

Probably not - but that isn't a terribly good analogy to any problem we are likely to face.

> > An escaped criminal on the run doesn't have much of a chance of overtaking the whole of the rest of society and its technology.

> Are there any historical cases of superintelligent escaped criminals?

Well, of course not - though I do seem to recall a tale of one General Zod.

> You sound awfully confident about a scenario that has no historical precedent.

I'm doubting whether the situation with no historical precedent will ever come to pass. We have had escaped criminals in societies of their peers. In the future, we may still have some escaped criminals in societies of their peers - though hopefully a lot fewer.

What I don't think we are likely to have is an escaped superintelligent criminal in an unadvanced society. Instead, I expect that a society able to produce such an agent will already be quite advanced - and that society as a whole will be able to advance faster than any escaped criminals will be able to manage - due to having more resources, manpower, etc.

Comment author: John_Maxwell_IV | 25 March 2012 08:33:57PM | 2 points

It sounds to me like you are favoring the "everything's going to be all right" conclusion quite heavily. You act like everything is going to be all right by default, and your arguments for why things will be all right aren't very sophisticated.

> Then, if you identify that as a problem, you redesign your test harness.

And we will certainly identify it as being a problem because humans know everything and they never make mistakes.

> I'm doubting whether the situation with no historical precedent will ever come to pass.

I see, similar to how housing prices will never drop? Have you read up on black swans?

We are venturing into uncharted territory here. Historical precedents provide very weak information.

Comment author: timtyler | 26 March 2012 11:22:43AM | 1 point

> > I'm doubting whether the situation with no historical precedent will ever come to pass.

> I see, similar to how housing prices will never drop?

No.

> Have you read up on black swans?

Yes.

I don't think it is likely that the world will end in an accidental apocalypse in the next century.

Few do - AFAICS - and the main proponents of the idea are usually selling something.

Comment author: John_Maxwell_IV | 26 March 2012 06:54:35PM | 0 points

Where on the disagreement hierarchy would you rate this comment of yours?

http://www.paulgraham.com/disagree.html

It looks like mostly DH3 to me, with a splash of DH1 in implying that anyone who suggests that our future isn't guaranteed to be bright must be selling something.

There's a bit of DH4 in implying that this is an uncommon position, which implies very weakly that it's incorrect. I don't think this is a very uncommon position though:
* http://www.ted.com/talks/lang/en/martin_rees_asks_is_this_our_final_century.html
* http://www.ted.com/talks/stephen_petranek_counts_down_to_armageddon.html
* http://www.ted.com/talks/jared_diamond_on_why_societies_collapse.html
* http://www.wired.com/wired/archive/8.04/joy.html

And Stephen Hawking on AI:
* http://www.zdnet.com/news/stephen-hawking-humans-will-fall-behind-ai/116616

Comment author: timtyler | 26 March 2012 07:20:53PM | 0 points

That's a fair analysis of those two lines - though I didn't say "anyone".

For evidence of "uncommon", I would cite the Global Catastrophic Risks Survey results. Presumably a survey of the ultra-paranoid. The figures they came up with were:

* Number killed by molecular nanotech weapons: 5%
* Total killed by superintelligent AI: 5%
* Overall risk of extinction prior to 2100: 19%

Comment author: John_Maxwell_IV | 26 March 2012 10:49:53PM | 0 points

Interesting data, thanks.