Vladimir_Nesov comments on Why safety is not safe - Less Wrong

Post author: rwallace 14 June 2009 05:20AM




Comment author: Z_M_Davis 15 June 2009 05:56:50AM 8 points

This is not how truly fundamental breakthroughs are made.

Hmm---now that you mention it, I realize my domain knowledge here is weak. How are truly fundamental breakthroughs made? I would guess that it depends on the kind of breakthrough---that some things can be solved by a relatively small number of core insights (think Albert Einstein in the patent office) and some things are big collective endeavors (think the Human Genome Project). I would guess, furthermore, that in many ways AGI is more like the latter than the former; see below.

Why do you assume that AGI lies beyond the capabilities of any single intelligent person armed with a modern computer and a sufficiently unorthodox idea?

Only about two percent of the Linux kernel was personally written by Linus Torvalds. Building a mind seems like it ought to be more difficult than building an operating system. In either case, it takes more than an unorthodox idea.
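A figure like that can be roughly sanity-checked against the kernel's own history. A minimal sketch, assuming a local clone of the kernel repository (the `~/linux` path is illustrative, and commit counts are only a crude proxy for lines personally written):

```python
import os
import subprocess

# Count non-merge commits per author in a local Linux kernel clone.
# Commit counts are a crude proxy for code "personally written";
# running `git blame` over every file would be closer, but far slower.
REPO = os.path.expanduser("~/linux")  # illustrative path to a kernel clone

out = subprocess.run(
    ["git", "-C", REPO, "shortlog", "-s", "-n", "--no-merges", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout

counts = {}
for line in out.splitlines():
    n, author = line.strip().split("\t", 1)  # lines look like "  4321\tName"
    counts[author] = int(n)

total = sum(counts.values())
torvalds = counts.get("Linus Torvalds", 0)
print(f"Torvalds: {torvalds} of {total} commits ({100.0 * torvalds / total:.1f}%)")
```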

Comment author: Vladimir_Nesov 15 June 2009 09:04:06AM 5 points

Only about two percent of the Linux kernel was personally written by Linus Torvalds. Building a mind seems like it ought to be more difficult than building an operating system.

There is no law of Nature that says the consequences must be commensurate with their cause. We live in an unsupervised universe where the movement of a butterfly's wings can determine the future of nations. You can't conclude that simply because the effect is expected to be vast, the cause ought to be at least prominent. This knowledge may only be found by a more mechanistic route.
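The butterfly image can be made concrete: in a chaotic system, an arbitrarily small cause grows into an effect of ordinary size. A minimal sketch using the logistic map (the map, the parameter, and the starting values are my illustration, not anything from the thread):

```python
# Two trajectories of the logistic map x -> r*x*(1-x), started a
# hair's breadth apart, become completely uncorrelated within ~50 steps.
r = 4.0                    # fully chaotic regime of the logistic map
x, y = 0.4, 0.4 + 1e-12    # initial conditions differing by 10^-12

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```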

Comment author: Z_M_Davis 15 June 2009 02:42:33PM 1 point

You're right in the sense that I shouldn't have used the words "ought to be," but I think the example is still good. If software engineering projects on the scale of an operating system take more than one person, then it seems likely that AGI will too. Even if you suppose the AI does a lot of the work up to the foom, you still have to get the AI up to the point where it can recursively self-improve.