timtyler comments on Students asked to defend AGI danger update in favor of AGI riskiness - Less Wrong

Post author: lukeprog 18 October 2011 05:24AM


Comment author: Eugine_Nier 19 October 2011 06:07:17AM * 3 points

Some computer programs crash - just as some possible superintelligences would kill all humans.

No, *most* computer programs crash; it's just that most people never see them crash, because said programs are repeatedly tested and modified until they no longer crash before being shown to people (this process is called "debugging"). With a self-modifying AI this is a lot harder to do.

Comment author: timtyler 19 October 2011 05:34:12PM * 0 points

Some computer programs crash - just as some possible superintelligences would kill all humans.

No, *most* computer programs crash [...]

By "no", you apparently mean "yes".

With a self-modifying AI this is a lot harder to do.

Well, that is a completely different argument - and one that would appear to be in need of supporting evidence - since automated testing, linting and the ability to program in high-level languages are all improving simultaneously.

I am not aware of any evidence that real computer programs are getting more crash-prone with the passage of time.

Comment author: Eugine_Nier 19 October 2011 11:35:51PM 2 points

With a self-modifying AI this is a lot harder to do.

Well, that is a completely different argument - and one that would appear to be in need of supporting evidence - since automated testing, linting and the ability to program in high-level languages are all improving simultaneously.

The point is that the first time you run the seed AI it will attempt to take over the world, so you don't have the luxury of debugging it.

Comment author: timtyler 20 October 2011 03:01:05PM * 1 point

The point is that the first time you run the seed AI it will attempt to take over the world, so you don't have the luxury of debugging it.

That is not a very impressive argument, IMHO.

We will have better test harnesses by then - allowing such machines to be debugged.

Comment author: asr 20 October 2011 12:47:20AM 1 point

Almost certainly, the first time you run the seed AI, it'll crash quickly. I think it's very unlikely that you construct a successful-enough-to-be-dangerous AI without a lot of mentally crippled ones first.

Comment author: wedrifid 20 October 2011 01:48:04PM 1 point

Almost certainly, the first time you run the seed AI, it'll crash quickly. I think it's very unlikely that you construct a successful-enough-to-be-dangerous AI without a lot of mentally crippled ones first.

If so, then we are all going to die. That is, if you have code that buggy, then it is absurdly unlikely that, the first time the "intelligence" part works at all, it works well enough to be friendly. (And that scenario seems likely.)

Comment author: timtyler 20 October 2011 03:03:06PM * 0 points

The first machine intelligences we build will be stupid ones.

By the time smarter ones are under development, we will have other trustworthy smart machines on hand to help keep the newcomers in check.