Suppose that your current estimate of the probability of an AI takeoff occurring in the next 10 years is some value x. As technology keeps becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that some z > y. My question is: does there come a point in the future where, assuming an AI takeoff has still not happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate?
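To make the question concrete, here is a toy calculation. It assumes a log-normal prior over the date of a takeoff (the choice of distribution and its parameters are illustrative assumptions, not anyone's actual credences) and shows how the conditional probability of a takeoff in the next 10 years can first rise and then fall as years pass without one:

```python
# Toy model: put a prior over the year of an AI takeoff, then for each
# year t that passes without one, compute
#   P(takeoff within the next 10 years | no takeoff by year t).
# The log-normal prior with median ~50 years is an assumption made
# purely for illustration.
import math

def lognormal_pdf(x, mu, sigma):
    if x <= 0:
        return 0.0
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi)
    )

mu, sigma = math.log(50), 1.0                 # assumed prior parameters
dt = 0.1
grid = [i * dt for i in range(1, 5001)]       # years 0.1 .. 500
pdf = [lognormal_pdf(x, mu, sigma) for x in grid]

def prob_between(a, b):
    # crude numerical integration of the prior over (a, b]
    return sum(p * dt for x, p in zip(grid, pdf) if a < x <= b)

for t in range(0, 101, 10):
    survival = prob_between(t, 500.0)         # P(no takeoff by year t)
    next_decade = prob_between(t, t + 10.0)   # P(takeoff in (t, t+10])
    print(f"year {t:3d}: P(takeoff in next 10y | none so far) "
          f"= {next_decade / survival:.3f}")
```

With these (arbitrary) numbers the conditional probability climbs for a few decades and then starts to fall, which is the kind of inflection point I am asking about.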
I do not doubt that humans can create superhuman AI, but I don't know how likely self-optimizing AI is. I am aware of the arguments, but they all seem to me like theoretical possibilities, much as a universal Turing machine could in principle do everything a modern PC can do and much more. In reality that just won't work, because we don't have infinite tape or infinite time...
Applying intelligence to itself effectively seems problematic. I might just have to think about it in more detail, but intuitively it seems that you need to apply a lot more energy to get a bit more complexity. That is, humans can create superhuman intelligence, but you need a lot of humans working on it for a long time, and a lot of luck in stumbling upon unknown unknowns.
It is argued that the mind-design space must be large if evolution could stumble upon general intelligence. I am not sure how valid that argument is, but even if it holds, shouldn't the mind-design space shrink dramatically with every iteration, and therefore demand a lot more time to stumble upon new solutions?
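As a toy reading of that worry, suppose at each self-improvement step the fraction of remaining candidate designs that actually count as an improvement shrinks, say by half (an assumed rate, purely for illustration). With blind sampling, the expected number of candidates examined before the next improvement is the reciprocal of that fraction, so it doubles every step unless the searcher's speed grows at least as fast:

```python
# Expected search effort per improvement when improvements get rarer.
# Both the initial hit fraction and the halving rate are assumptions
# chosen only to illustrate the shape of the argument.
p = 0.10                          # assumed initial fraction of improving designs
for i in range(8):
    expected_samples = 1.0 / p    # mean of a geometric distribution
    print(f"iteration {i}: hit fraction {p:.5f}, "
          f"expected samples to next improvement {expected_samples:,.0f}")
    p *= 0.5                      # assumed: improvements become twice as rare
```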
Another problem I have is that I don't see why people here perceive intelligence as something proactive with respect to itself. No doubt there is some important difference between evolutionary processes and intelligence, but if you apply intelligence to itself, that difference seems to diminish. How so? Because intelligence is not a solution in itself; it is merely an effective searchlight for unknown unknowns. But who says the brightness of that light increases in proportion to the distance between unknown unknowns? For an intelligence explosion, the light would have to reach much farther with each generation than the distance between unknown unknowns grows... I just don't see that as a reasonable assumption.
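Here is a minimal sketch of that searchlight framing: each generation has a search reach, and the next useful insight sits at some distance; whether the improvement cycle shrinks or stretches depends entirely on which of the two grows faster. The growth factors below are assumptions picked only to show the two regimes:

```python
# Toy "searchlight" model: reach vs. distance to the next unknown unknown.
def time_to_next(reach, gap):
    # assume search time scales with how far the gap exceeds the reach
    return gap / reach

def simulate(reach_growth, gap_growth, generations=10):
    reach, gap = 1.0, 1.0
    cycle_times = []
    for _ in range(generations):
        cycle_times.append(round(time_to_next(reach, gap), 3))
        reach *= reach_growth   # the searchlight gets brighter
        gap *= gap_growth       # the next insight sits farther away
    return cycle_times

print("reach outpaces gaps:", simulate(reach_growth=2.0, gap_growth=1.5))
print("gaps outpace reach: ", simulate(reach_growth=1.5, gap_growth=2.0))
```

Only in the first regime do the cycle times keep shrinking in the way an intelligence explosion would require.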
What appears to be a point against the idea:
This is from "Is there an Elegant Universal Theory of Prediction?":