Recursive self-improvement is not an accepted idea outside the futurist community; to some people it simply does not seem right, in some hard-to-pin-down fashion. I am one of those people, so I'm going to try to explain the instinctive skepticism I have towards it. My skepticism hinges on the difference between two sorts of values, a difference I have not seen made explicit before (although it likely has been somewhere): the difference between a concrete value and a contextual value.
So let's run down the argument so I can pin down where I think it goes wrong.
1. There is a value called intelligence that roughly correlates with the ability to achieve goals in the world (if it does not, then we don't care about intelligence explosions, as they will have negligible impact on the real world™).
2. All things being equal, a system with more compute power will be more capable than one with less (assuming it can get the requisite power supply). Similarly, systems whose algorithms have better run-time complexities will be more capable.
3. Computers will be able to do things to increase the values in point 2. Therefore they will form a feedback loop and become progressively more capable at an ever-increasing rate (a toy sketch of this loop follows below).
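To make the shape of the argument explicit, here is a minimal sketch of the feedback loop it assumes. The reinvestment rate and the units of capability are invented purely for illustration; this is not meant as a real model of intelligence.

```python
# Toy model of the recursive self-improvement argument: a system's
# capability is reinvested into improving capability itself.
# The 10% reinvestment rate and the units are invented for illustration.

def improve(capability, reinvestment_rate=0.1):
    """Capability gained per step, proportional to current capability."""
    return reinvestment_rate * capability

capability = 1.0
for step in range(10):
    capability += improve(capability)
    print(f"step {step}: capability = {capability:.2f}")
# Compounds geometrically: 1.0 * 1.1**10 is roughly 2.59.
```

Because the gain at each step is proportional to the current value, the value compounds; that compounding is the whole force of the argument.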
The point where I become unstuck is the phrase "all things being equal", and especially what the "all" stands for. Let me run down a similar argument for wealth.
1. There is a value called wealth that roughly correlates with the ability to acquire goods and services from other people.
2. All things being equal, a person with more money will be wealthier than one with less.
3. You are able to put your money in the bank and earn compound interest on it, so your wealth should grow exponentially in time (ignoring taxes).
Point 3 of this wealth argument can be wrong, depending on the rate of interest and the rate of inflation. Because of inflation, each dollar you have in the future is less able to buy goods. That is, the argument in point 3 ignores that at different times and in different environments money is worth different amounts of goods; hyperinflation is a stark example of this. So the "all things being equal" references the current time and state of the world, and point 3 breaks that assumption by allowing time and the world to change.
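A toy calculation makes the point. With made-up rates of 5% interest and 7% inflation, the account balance rises every year while the purchasing power it represents falls:

```python
# Compound interest versus inflation: nominal wealth grows, but the
# goods it can buy shrink whenever inflation outpaces interest.
# The 5% and 7% rates are made up for illustration.

interest, inflation = 0.05, 0.07
nominal, price_level = 1000.0, 1.0
for year in range(1, 11):
    nominal *= 1 + interest
    price_level *= 1 + inflation
    real = nominal / price_level  # purchasing power in year-0 dollars
    print(f"year {year}: nominal = {nominal:.0f}, real = {real:.0f}")
# Real wealth shrinks by about 2% per year even as the balance rises.
```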
Why doesn't the argument work for wealth, when you can get stable recursive growth in neutrons in a reactor? It is because wealth is a contextual value: it depends on the world around you. As your money grows with compound interest, the world changes so as to make it less valuable, without touching your money at all. Nothing can change the number of neutrons in your reactor without physically interacting with them or the reactor in some way. The neutron density is a concrete, containable value, and you can do sensible maths with it.
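By contrast, the neutron recursion depends on nothing outside the reactor. A minimal sketch, with an illustrative multiplication factor:

```python
# A concrete value: the neutron population multiplies by a factor k per
# generation, and nothing outside the reactor enters the calculation.
# k = 1.01 (slightly supercritical) is an illustrative number.

k, neutrons = 1.01, 1e6
for generation in range(5):
    neutrons *= k  # the recursion depends only on the count itself
print(f"after 5 generations: {neutrons:.3e} neutrons")
```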
I'd argue that intelligence has a contextual nature as well. A simple example would be a computer chess tournament with a fixed algorithm that uses as many resources as you throw at it. Say you manage to increase your team's resources steadily by 10 MIPS per year; you will not win more chess games if another team is expanding its capabilities by 20 MIPS per year. That is, despite an increase in raw computing ability, there will be no increase in achieving the goal of winning at chess (a toy model of this follows below). Another possible example of the contextual nature of intelligence is the case where a system's ability to perform well in the world is affected by other people knowing its source code and using it to predict and counter its moves.
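Here is a toy version of the chess example. The contest function (win share proportional to resource share) and the MIPS figures are invented; the point is only that the ratio of resources, not the absolute amount, drives the outcome.

```python
# Toy model of intelligence as a contextual value: both teams' compute
# grows steadily, but wins depend on the *ratio* of resources.
# The growth figures and the contest function are invented.

def win_probability(ours, theirs):
    """Toy contest function: share of games won equals share of resources."""
    return ours / (ours + theirs)

ours, theirs = 100.0, 100.0    # starting MIPS, hypothetical
for year in range(1, 6):
    ours += 10                 # our team grows 10 MIPS per year
    theirs += 20               # the rival team grows 20 MIPS per year
    print(f"year {year}: win probability = {win_probability(ours, theirs):.2f}")
# Our raw compute rises every year, yet the win rate steadily falls.
```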
From the view of intelligence as a contextual value, the current discussion of recursive self-improvement seems overly simplistic. We need to make explicit the important things in the world that intelligence might depend upon, and then see whether we can model the processes such that we still get FOOMs.
Edit: Another example of an intelligence's effectiveness being contextual is the role of knowledge in performing tasks. Knowledge can have an expiration date, after which it becomes less useful: consider the usefulness of knowledge about current English idioms for writing convincing essays, or about the current bacterial population when trying to develop nano-machines to fight them. So you might have two atomically identical intelligences whose effectiveness varies depending on the freshness of their knowledge (a toy sketch follows below). There might therefore be conflicts, when trying to shape the future, between expending resources on improving processing power or algorithms and expending them on keeping knowledge fresh. It is possible, but unlikely, that an untruth you believe will become true in time (say your estimate of a city's population was too low, but growth took it up to your belief); but as there are more ways to be wrong than right, knowledge is likely to degrade with time.
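A toy sketch of the freshness idea, with an invented exponential decay rate:

```python
# Toy model of knowledge freshness: effectiveness is the product of raw
# capability and the freshness of the system's knowledge, which decays
# over time. The decay rate is invented for illustration.
import math

def effectiveness(capability, years_since_refresh, decay_rate=0.3):
    freshness = math.exp(-decay_rate * years_since_refresh)
    return capability * freshness

# Two atomically identical systems, differing only in knowledge age:
print(effectiveness(100.0, years_since_refresh=0))   # 100.0
print(effectiveness(100.0, years_since_refresh=5))   # about 22.3
```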
What makes the intelligence cycle zero-sum? What devalues the 10 MIPS advance? After all, the goal is not to earn a living with the prize money brought in by an Incredible Digital Turk, but to design superior probability-space-searching algorithms, using chess as a particular challenge, and then to use those to solve other problems which are not moving targets, like machine vision or materials analysis or... alright, I admit to ignorance here. I just suspect that not all goals for intelligence involve competing with or modeling other growing intelligences.
Technological advances (which seem similar enough to "increases in the ability to achieve goals in the world" to be worthy of a tentative analogy) may help some (the 20 MIPS crowd) disproportionately, but don't they frequently still help everyone who implements them? If people in Africa get cellphones but people in Europe get supercomputers, all of them are still getting an economic advantage relative to their previous selves; they can use resources better than they could previously.
Also, if point 3 of the wealth argument is phrased as vaguely as point 3 of the intelligence argument (perhaps: "Wealthy people are able to do things to increase the values in point 2."), then it seems much more reasonable. Wealth can be used to obtain information and contacts that give a greater relative wealth-growing advantage, such as "Don't just put it all in the bank," or "My cousin's company is about to announce higher-than-expected earnings," or even "Global hyperinflation is coming; transfer assets to precious metals." Conversely (I think), if point 3 of the intelligence argument had a formulation sufficiently specific to be similarly limited ("Computers can keep having more RAM installed and thus will have more intelligence over time."), I don't see how that would be an indictment of the general case. What am I missing?