Hum. Suppose that increasing the intelligence of an AI requires a series of insights, or searches through design space, or however you want to phrase it. The FOOM then seems to assume that each insight is of roughly equal difficulty, or at least that the difficulty does not increase as rapidly as the intelligence does. But it does not seem obvious that the jump from Arbitrary Intelligence Level 2 to 3 requires an insight of the same difficulty as the jump from 3 to 4. In fact, intuitively it seems that jumps from N to N+1 are easier than jumps from N+1 to N+2. (It is not immediately obvious to me what the human intelligence distribution implies about this. We don't even know, strictly speaking, that it's a bell curve, although it does seem to have a fat middle.)

If, to take a hypothetical example, each jump doubles in difficulty but gives only a linear increase in intelligence, then the process won't FOOM at all - intelligence will grow roughly logarithmically with time, which for practical purposes is a plateau, albeit perhaps at a level much above a genius human's. Even if the difficulty increases only linearly while granting a linear increase in intelligence, that keeps the time required for each jump roughly constant. That doesn't rule out arbitrarily intelligent AIs, but it does mean the growth is steady rather than explosive. (Depending on the time constant, it could even be uninteresting. If it takes the AI ten years to generate the insights to increase its IQ by one point, and it starts at 100, then we'll be waiting a while.)
Now, neither of those possibilities is especially likely. But if we take the increase in difficulty per level as x, and the increase in intelligence per level as y, and the time to the next insight as proportional to (x/y), then what reason do we have to believe that x < y? (Or, if they're roughly equal, that the constant of proportionality is small.)
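As a minimal sketch of that arithmetic (all numbers and rules below are illustrative assumptions, not anyone's actual model), here is the toy dynamic in Python: each insight adds a fixed increment of intelligence, the difficulty of the next insight follows some rule, and the time to produce an insight is taken as proportional to difficulty divided by current intelligence - one way of reading the x/y ratio above. Whether the per-jump time shrinks, stays flat, or explodes depends entirely on how difficulty growth compares with intelligence growth.

```python
# Toy simulation of the argument above (illustrative numbers only): each
# insight adds one point of intelligence, the next insight's difficulty
# follows some rule, and the time to produce an insight is proportional to
# difficulty / intelligence (a smarter AI works faster, but harder insights
# take more work).

def simulate(difficulty_rule, steps=30, start_intelligence=100.0):
    """Return the cumulative time needed to reach each successive level."""
    intelligence = start_intelligence
    difficulty = 1.0
    elapsed = 0.0
    history = []
    for _ in range(steps):
        elapsed += difficulty / intelligence        # time for this insight
        intelligence += 1.0                         # linear gain per insight
        difficulty = difficulty_rule(difficulty)    # how hard is the next one?
        history.append((elapsed, intelligence))
    return history

# x > y: each insight is twice as hard as the last, so the per-jump time
# explodes and intelligence grows only logarithmically with elapsed time -
# a plateau for all practical purposes.
doubling = simulate(lambda d: d * 2)

# x ~ y: difficulty grows in step with intelligence, so the per-jump time
# settles to a constant and growth is steady but not explosive.
in_step = simulate(lambda d: d + 1)

# x < y: difficulty stays fixed while intelligence rises, so each jump is
# faster than the last - the FOOM case.
fixed = simulate(lambda d: d)

for label, hist in [("doubling", doubling), ("in step", in_step), ("fixed", fixed)]:
    print(f"{label:9s} total time for +30 points: {hist[-1][0]:12.2f}")
```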
Two objections to this: Firstly, you have to extrapolate from the chimp-to-human range into the superintelligence range. The gradient may not be the same in the two. Second, it seems to me that the more intelligent humans are, the more "the other humans in my tribe" becomes the dominant part of their environment; this leads to increased returns to intelligence, and consequently you do get increasing optimisation pressure.
To your first objection, I agree that "the gradient may not be the same in the two" when you are talking about chimp-to-human growth and human-to-superintelligence growth. But Eliezer's stated reason mostly applies to the areas near human intelligence, as I said. There is no consensus on how far the "steep" area extends, so I think your doubt is justified.
Your second objection also sounds reasonable to me, but I don't know enough about evolution to confidently endorse or dispute it. To me, this sounds similar to a point that Tim Tyler tries to make repeatedly in this sequence, though I haven't investigated his views thoroughly. I believe his stance is as follows: since humans select mates using their brains, intelligence is so necessary for human survival, and sexual organisms want to pick fit mates, there has been a nontrivial feedback loop from humans using their intelligence to get better at selecting intelligent mates. Do you endorse this? (I am not sure, myself.)