Suppose that your current estimate of the probability of an AI takeoff arriving in the next 10 years is some probability x. As technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that, some z > y. My question is: does there come a point in the future where, assuming that an AI takeoff has still not happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate?
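One way to make the question concrete is to fix a prior over the takeoff date and track how the conditional decade probability evolves. Here is a minimal Python sketch, not anyone's actual model: the lognormal shape and the 50-year median are arbitrary assumptions chosen purely for illustration.

```python
# Fix a unimodal prior over the takeoff date, then compute
# P(takeoff in the next decade | no takeoff yet) as the years pass.
from scipy.stats import lognorm

prior = lognorm(s=1.0, scale=50)  # hypothetical prior: median takeoff ~50 years out

for t in range(0, 200, 10):
    survived = prior.sf(t)                     # P(no takeoff by year t)
    window = prior.cdf(t + 10) - prior.cdf(t)  # P(takeoff during [t, t+10])
    print(f"year {t:3d}: P(next decade) = {window / survived:.3f}")
```

Under any fixed unimodal prior the conditional probability rises while you are still ahead of the bulk of the prior mass and falls once most of it has been survived past, so an inflection point falls out automatically; where it lands depends entirely on where you put the mass.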
I agree. To be clear, my confusion is mainly about the possibility of explosive recursive self-improvement. I have a hard time accepting it as likely (e.g., assigning it a probability easily larger than 1%) that such a thing is practically and effectively possible, or at least that we will be able to come up with an algorithm capable of quickly surpassing the human set of skills without huge amounts of hard-coded intelligence. I am skeptical that we will be able to approach such a problem quickly; I suspect it will instead be a slow, incremental evolution gradually approaching superhuman intelligence.
As I see it, the more abstract a seed AI is, the closer it is to something like AIXI, and the more time it will need to reach human-level intelligence, let alone superhuman intelligence. The less abstract a seed AI is, the more work we will have to put into painstakingly hard-coding it before it can help us improve its intelligence any further. And in any case, I don't think that dramatic leaps in intelligence are a matter of speed improvements or the accumulation of expert systems. They may well require genuine novelty in the form of the discovery of unknown unknowns.
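For concreteness, the fully abstract end of that spectrum has a precise referent. Hutter's AIXI selects actions by an expectimax over all programs q for a universal Turing machine U consistent with the interaction history, weighted by program length \ell(q); a sketch of the standard definition:

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \ldots \max_{a_m} \sum_{o_m r_m} \big[r_t + \cdots + r_m\big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

That inner sum over all programs is exactly why AIXI is incomputable, which is the sense in which maximal abstraction trades away any hope of practical speed.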
What is intelligence? Take a chess computer: it is arguably intelligent, but it is a narrow form of intelligence. What differentiates narrow intelligence from general intelligence? Is it a conglomerate of expertise, some sort of conceptual revolution, or a special kind of expert system that is missing? My point is: why haven't any of our expert systems come up with true novelty in their field, something no human has thought of before? The only algorithms that have so far been capable of achieving this have been evolutionary in nature, not what we would label artificial intelligence.
Evolution was able to come up with altruism, something that works two levels above the individual and one level above society. So far we haven't been able to show comparable ingenuity by incorporating successes that are not evident from an individual or even a societal position.
Your point is a good one; I am just saying that the gap between intelligence and evolution isn't that big here.
Yes, but evolution makes better use of dumb luck precisely by being blindfolded. This seems like a disadvantage, but it actually allows evolution to discover unknown unknowns hidden in places where no intelligent, rational agent would suspect them, and which such an agent would therefore never find through evidence-based exploration.
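To make the "blindfolded" point concrete, here is a toy Python sketch of my own construction, not anything from this exchange: a deceptive "trap" landscape in which every local clue points away from the global optimum, so an evidence-following hill climber reliably gets stuck while blind random sampling eventually lands on it.

```python
import random

K = 12  # bitstring length; small enough that blind luck has a real chance

def fitness(bits):
    """Deceptive 'trap' landscape: all-ones is the global optimum (fitness K),
    but every single-bit improvement points toward the all-zeros local optimum."""
    u = sum(bits)
    return K if u == K else K - 1 - u

def hill_climb(bits):
    """Evidence-based search: accept only single-bit flips that improve fitness."""
    improved = True
    while improved:
        improved = False
        for i in range(K):
            before = fitness(bits)
            bits[i] ^= 1
            if fitness(bits) > before:
                improved = True
            else:
                bits[i] ^= 1  # revert: the local evidence says this flip is bad
    return bits

random.seed(0)
start = [random.randint(0, 1) for _ in range(K)]
stuck = hill_climb(start[:])
print("hill climber:", sum(stuck), "ones, fitness", fitness(stuck))  # almost surely 0 ones

best = 0
for _ in range(100_000):  # blind search: ignores all evidence, just samples
    trial = [random.randint(0, 1) for _ in range(K)]
    best = max(best, fitness(trial))
print("best blind-sample fitness:", best)  # almost surely 12, the global optimum
```

The hill climber can never escape, because escaping requires accepting moves the local evidence says are bad; the blind search pays no attention to evidence and, given enough draws, finds the optimum anyway. At realistic scales it pays for this in time, which is the trade-off the reply below picks at.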
A minor quibble:
Never is a very strong word, and it isn't obvious that evolution will actually find things that intelligence would not. Evolution gets to work on a much longer timescale than intelligence has had so far. If intelligence has as much ...