Suppose your current estimate of the probability of an AI takeoff occurring in the next 10 years is some value x. Since technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that, some z > y. My question is: does there come a point in the future where, assuming an AI takeoff has still not happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate?
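To make the question concrete, here is a toy model, with entirely illustrative numbers: if you hold a fixed unimodal prior over the year a takeoff arrives (a discretized lognormal below, a pure assumption), then the conditional probability of "takeoff within the next decade, given none so far" first rises and then falls, and the inflection point is just where that curve peaks.

```python
import math

# Hypothetical prior: takeoff year T follows a discretized lognormal
# with median around year 30 and broad uncertainty. These parameters
# are illustrative assumptions, not anyone's actual estimate.
mu, sigma = math.log(30), 1.0
years = range(1, 1001)
weights = [math.exp(-(math.log(t) - mu) ** 2 / (2 * sigma ** 2)) / t
           for t in years]
total = sum(weights)
p = [w / total for w in weights]  # p[k] = P(T = k + 1)

def next_decade_prob(t):
    """P(takeoff within 10 years of year t, given none by year t)."""
    tail = sum(p[t:])            # P(T > t)
    window = sum(p[t:t + 10])    # P(t < T <= t + 10)
    return window / tail

probs = [next_decade_prob(t) for t in range(200)]
peak = max(range(200), key=lambda t: probs[t])
print(f"the 10-year estimate peaks around year {peak}, then declines")
```

Under this prior the estimate does eventually turn downward on its own: surviving past the bulk of the prior's mass pushes more weight into the long tail. A different prior (e.g. one tied to observed technological progress rather than calendar time) would move the peak but not the qualitative shape.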
Do you mean to say that only something that approximates human intelligence can initiate an "AI takeoff"? If so, can you summarize your reasons for believing that?
What reasons do I have to believe that some abstract optimization process could sprout capabilities like social engineering without their being hardcoded or resulting from time-expensive interactions with its environment?
I admit I have no clue about machine learning, or about computer science and mathematics in general. So maybe I should ask: what reasons do I have to believe that Eliezer Yudkowsky has good reasons to believe that some algorithm could...