Vladimir_Nesov comments on Criticisms of intelligence explosion - Less Wrong

Post author: lukeprog 22 November 2011 05:42PM (15 points)




Comment author: Vladimir_Nesov 23 November 2011 12:26:17AM (15 points)

My problem with the focus on the idea of intelligence explosion is that it's too often presented as the motivation for the problem of FAI, when it really isn't. It's one strategic consideration sitting alongside Hanson's Malthusian ems, killer biotech, and cognitive modification: one more thing that makes the problem urgent, but still one among many.

What ultimately matters is implementing humane value (which involves figuring out what that is). The specific manner in which we lose the ability to do so is immaterial. If intelligence explosion is close, humane value will lose control over the future quickly. If instead we change our nature through future cognitive modification tech, or by experimenting on uploads, then the grasp of humane value on the future will fail in an orderly manner, slowly but just as irrevocably yielding control to wherever the winds of value drift blow.

It's incorrect to predicate the importance or urgency of gaining FAI-grade understanding of humane value on the possibility of intelligence explosion. Other technologies that would allow value drift are, for all practical purposes, similarly close.

(That said, I do believe AGIs lead to intelligence explosions. This point is important for appreciating the impact and danger of AGI research, if the complexity of humane value is understood, and for seeing one form that the implementation of a hypothetical future theory of humane value could take.)

Comment author: RomeoStevens 24 November 2011 02:04:47AM (1 point)

The question of "can we rigorously define human values in a reflectively consistent way" doesn't need to have anything to do with AI or technological progress at all.

Comment author: Giles 24 November 2011 04:25:45AM (0 points)

This is a good point. I think there's one reason to give special attention to the intelligence explosion concept, though: it's part of the proposed solution as well as one of the possible problems.

The two main ideas here are:

  • Recursive self-improvement is possible and powerful
  • Human values are fragile; "most" recursive self-improvers will very much not do what we want

These ideas seem to be central to the utility-maximizing FAI concept.