Suppose that your current estimate of the probability of an AI takeoff occurring in the next 10 years is some value x. As technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that, some z > y. My question is: does there come a point in the future where, assuming an AI takeoff still has not happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point to come?
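(A minimal sketch of why such an inflection point can appear, under assumptions that are entirely mine rather than anyone's stated model: put a prior on the takeoff year that mixes a "never happens" hypothesis with a bell-shaped distribution over future decades, then condition on each takeoff-free year as it passes. The conditional "takeoff in the next 10 years" probability rises while most of the prior's mass still lies ahead of you, and falls once you pass it, because the surviving mass shifts toward "never" and the far tail.)

```python
# Illustrative sketch only: Bayesian updating of the "takeoff within the
# next 10 years" estimate as takeoff-free years accumulate.
# All numbers below (P_NEVER, MU, SIGMA, HORIZON) are arbitrary assumptions.

import math

P_NEVER = 0.3          # prior mass on "takeoff never happens"
MU, SIGMA = 60, 30     # prior over the takeoff year, in years from now
HORIZON = 500          # truncate the takeoff-year prior here

# Unnormalised discretised-normal prior over takeoff years 1..HORIZON,
# scaled so that it and the "never" hypothesis sum to 1.
prior = [math.exp(-((t - MU) ** 2) / (2 * SIGMA ** 2)) for t in range(1, HORIZON + 1)]
scale = (1 - P_NEVER) / sum(prior)
prior = [p * scale for p in prior]

def p_takeoff_next_decade(years_elapsed):
    """P(takeoff within 10 years | no takeoff in the first `years_elapsed` years)."""
    # Surviving probability mass: "never" plus all takeoff years still ahead.
    survive = P_NEVER + sum(prior[years_elapsed:])
    # Mass falling within the next 10 of those future years.
    window = sum(prior[years_elapsed:years_elapsed + 10])
    return window / survive

for elapsed in (0, 20, 40, 60, 80, 100, 150):
    print(f"after {elapsed:3d} takeoff-free years: "
          f"P(takeoff in next 10y) = {p_takeoff_next_decade(elapsed):.3f}")
```

With these particular (made-up) numbers the printed probability climbs for the first several decades and then declines, so the downward revision starts roughly when the bulk of the prior has passed without a takeoff; a different prior shifts that point earlier or later.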
What reasons do I have to believe that some abstract optimization process could sprout capabilities like social engineering without those capabilities being hardcoded or acquired through time-expensive interactions with its environment?
I admit I have no clue about machine learning or computer science and mathematics in general. So maybe I should ask, what reasons do I have to believe that Eliezer Yudkowsky has good reasons to believe that some algorithm could undergo explosive recursive self-improvement?
All I can imagine is that something might be able to look at a lot of data, like YouTube videos, and infer human language and social skills like persuasion. That sounds interesting, but... phew... is that even possible for a computationally bounded agent? I have absolutely no clue!
I approach the question this way: consider the set S of algorithms capable of creating intelligent systems.
Thus far, the only member of S we know about is natural selection... call that S1.
There are several possibilities: