sfb comments on What I would like the SIAI to publish - Less Wrong

27 Post author: XiXiDu 01 November 2010 02:07PM




Comment author: sfb 02 November 2010 06:22:14PM 6 points [-]

> When programmers code faulty software then it usually fails to do its job.

It often does its job, but only under perfect conditions, or only once per restart, or with unwanted side effects, or while taking too long, consuming too many resources, or requiring too many permissions, or without ensuring that it does nothing beyond its job.

Buffer overflows, for instance, are among the biggest causes of security failures, and they are only possible because the software works well enough to be put into production while the fault is still present.

In fact, all production software that we see which has faults (a lot) works well enough to be put into production with those faults.

> What you are suggesting is that humans succeed at creating the seed for an artificial intelligence with the incentive necessary to correct its own errors.

I think he's suggesting that humans will believe they have succeeded at that, while not actually having done so (rigorously and without room for error).