The concept of recursive self-improvement is not an accepted idea outside of the futurist community. It just does not seem right in some fashion to some people. I am one of those people, so I'm going to try to explain the kind of instinctive skepticism I have towards it. It hinges on the difference between two sorts of values, a difference I have not seen made explicit before (although it likely has been somewhere): the difference between a concrete value and a contextual value.
So let's run down the argument so I can pin down where, in my view, it goes wrong.
- There is a value called intelligence that roughly correlates with the ability to achieve goals in the world (if it does not, then we don't care about intelligence explosions, as they will have negligible impact on the real world™).
- All things being equal, a system with more computing power will be more capable than one with less (assuming it can get the requisite power supply). Similarly, systems whose algorithms have better run-time complexities will be more capable.
- Computers will be able to do things to increase the quantities in 2. They will therefore form a feedback loop and become progressively more capable at an ever-increasing rate.
The point where I come unstuck is the phrase "all things being equal", especially what the "all" stands for. Let me run down a similar argument for wealth.
- There is a value called wealth that roughly correlates with the ability to acquire goods and services from other people.
- All things being equal, a person with more money will be wealthier than one with less.
- You can put your money in the bank and earn compound interest, so your wealth should grow exponentially in time (ignoring taxes).
Step 3 can be wrong here, depending on the rate of interest and the rate of inflation. Because of inflation, each dollar you have in the future is less able to buy goods. That is, the argument in 3 ignores that at different times and in different environments money is worth different amounts of goods; hyperinflation is a stark example of this. So the "all things being equal" references the current time and state of the world, and 3 breaks that assumption by allowing time and the world to change.
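To make this concrete, here is a minimal sketch of the two quantities at play. The 3% interest and 5% inflation rates are illustrative assumptions, not data: the nominal balance compounds exponentially while its real purchasing power shrinks.

```python
# A minimal sketch of the wealth argument: the nominal balance compounds,
# but the real (inflation-adjusted) value can still shrink.
# The interest and inflation rates below are illustrative assumptions.

def real_wealth(principal, interest_rate, inflation_rate, years):
    """Return (nominal, real) balance after `years` of compounding."""
    nominal = principal * (1 + interest_rate) ** years
    price_level = (1 + inflation_rate) ** years
    return nominal, nominal / price_level

for years in (0, 10, 20, 30):
    nominal, real = real_wealth(1000.0, 0.03, 0.05, years)
    print(f"year {years:2d}: nominal ${nominal:8.2f}, real ${real:8.2f}")
```

Nothing in the bank account changed; the context (the price level) did all the damage.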
Why doesn't the argument work for wealth, when you can get stable recursive growth of neutrons in a reactor? It is because wealth is a contextual value: it depends on the world around you. As your money grows with compound interest, the world changes to make it less valuable without touching your money at all. Nothing can change the number of neutrons in your reactor without physically interacting with them or the reactor in some way. The neutron density value is concrete and containable, and you can do sensible maths with it.
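For contrast, here is a toy chain-reaction model, with an invented multiplication factor of k = 1.1. No external variable appears anywhere in the calculation, which is what makes the neutron count a concrete value.

```python
# A toy chain reaction: the neutron population depends only on the internal
# multiplication factor k, not on anything outside the reactor.
# k = 1.1 is an invented illustrative value.

def neutron_population(initial, k, generations):
    """Neutron count after `generations` steps of a chain reaction."""
    return initial * k ** generations

for gen in (0, 10, 20):
    print(f"generation {gen:2d}: {neutron_population(1000.0, 1.1, gen):10.1f} neutrons")
```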
I'd argue that intelligence has a contextual nature as well. A simple example would be a computer chess tournament with a fixed algorithm that used as many resources as you threw at it. Say you manage to increase your team's resources steadily by 10 MIPS per year; you will not win more chess games if another team is expanding its capabilities by 20 MIPS per year. That is, despite an increase in raw computing ability, there will be no increase in achieving the goal of winning at chess. Another possible example of the contextual nature of intelligence is the case where a system's ability to perform well in the world is affected by other people knowing its source code and using it to predict and counter its moves.
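A toy model of the chess case, under the assumption (mine, not an established result) that the chance of winning depends only on the ratio of the two teams' resources: your absolute MIPS rise every year while your win probability falls.

```python
import math

# Toy model: win probability is a logistic function of the log of the
# resource ratio. The growth rates (10 vs 20 MIPS per year) and starting
# point are invented for illustration.

def win_probability(my_mips, their_mips, scale=1.0):
    """Chance of winning as a logistic function of log(resource ratio)."""
    return 1.0 / (1.0 + math.exp(-math.log(my_mips / their_mips) / scale))

my_mips, their_mips = 100.0, 100.0
for year in range(5):
    p = win_probability(my_mips, their_mips)
    print(f"year {year}: {my_mips:5.0f} vs {their_mips:5.0f} MIPS -> win prob {p:.2f}")
    my_mips += 10.0     # my team grows by 10 MIPS per year
    their_mips += 20.0  # the rival team grows by 20 MIPS per year
```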
From this view of intelligence as a contextual value, current discussion of recursive self-improvement seems overly simplistic. We need to make explicit the important things in the world that intelligence might depend upon, and then see whether we can model the processes such that we still get FOOMs.
Edit: Another example of an intelligence's effectiveness being contextual is the role of knowledge in performing tasks. Knowledge can have an expiration date after which it becomes less useful. Consider the usefulness of knowledge about current English idioms when writing convincing essays, or about the current bacterial population when trying to develop nano-machines to fight them. So you might have an atomically identical intelligence whose effectiveness varies depending upon the freshness of its knowledge, and there might be conflicts between expending resources on improving processing power or algorithms and keeping knowledge fresh when trying to shape the future. It is possible, but unlikely, that an untruth you believe will become true in time (say your estimate of a city's population was too low, but its growth eventually took it to your belief); but as there are more ways to be wrong than right, knowledge is likely to degrade with time.
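A toy model of that tradeoff, with invented growth and decay rates: even though processing power compounds at 10% per year, effectiveness falls because the never-refreshed knowledge decays faster.

```python
import math

# Toy model of knowledge freshness: effectiveness is the product of raw
# processing power and a freshness factor that decays exponentially since
# the knowledge was last updated. Both rates are invented for illustration.

def effectiveness(compute, years_since_refresh, decay_rate=0.2):
    """Goal-achieving ability under stale knowledge (toy model)."""
    freshness = math.exp(-decay_rate * years_since_refresh)
    return compute * freshness

compute = 100.0
for year in range(6):
    print(f"year {year}: compute {compute:6.1f}, "
          f"effectiveness {effectiveness(compute, year):6.1f}")
    compute *= 1.1  # compute improves 10% per year; knowledge never refreshed
```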
Well, some things change, but the examples we have of general intelligence are all cross-domain enough to handle such change. Human beings are more intelligent than chimps; no plausible change in the environment that leaves both humans and chimps alive will result in chimps developing more optimization power than humans. The scientific community in the modern world does a better job of focusing human intelligence on problem-solving than does a hunter-gatherer religion; no change in the environment that leaves our scientists alive will allow our technology to be surpassed by the combined forces of animist tribes from the African jungles.