Giles comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu 13 March 2012 01:54PM




Comment author: Kaj_Sotala 13 March 2012 05:43:11PM 3 points

Good post.

You seem to focus excessively on recursive self-improvement to the exclusion of other hard takeoff scenarios, however. As Eliezer noted,

RSI is the biggest, most interesting, hardest-to-analyze, sharpest break-with-the-past contributing to the notion of a "hard takeoff" aka "AI go FOOM", but it's nowhere near being the only such factor. The advent of human intelligence was a discontinuity with the past even without RSI...

That post mentions several other hard takeoff scenarios, e.g.:

  • Even if an AI's self-improvement efforts quickly hit a wall, a small number of crucial optimizations, or the capture of a particular important resource, will provide it a massive intelligence advantage over humans. (Has evolutionary precedent in that the genetic differences between humans and chimps are relatively small.)
  • Parallel hardware overhang: if there's much more hardware available than it takes to run an AI, an AI could expand itself and thus become more intelligent by simply "growing a bigger brain", or create an entire society of co-operating AIs.
  • Serial hardware overhang: an AI running on processors with more serial speed than neurons could be able to e.g. process longer chains of inference instead of relying on cache lookups.

(Also a couple more, but I found those a little vague and couldn't come up with a good way to summarize them in a few sentences.)

Comment author: Giles 17 March 2012 08:01:56PM 1 point

Ah, thanks for making this point - I notice I've recently been treating "recursive self-improvement" and "hard takeoff" as more or less interchangeable concepts. I don't think I need to update on this, but I'll try to use my language more carefully at least.