Suppose that your current estimate of the probability of an AI takeoff occurring within the next 10 years is some value x. Since technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that some z > y. My question is: does there come a point in the future where, assuming an AI takeoff still has not happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate?
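To make the question concrete, here is a minimal sketch of the kind of updating I have in mind. The lognormal prior over takeoff dates and its particular parameters are purely illustrative assumptions, not a forecast: the point is only that conditioning on more and more takeoff-free years can first raise and then lower the probability assigned to the next decade.

```python
import numpy as np
from scipy import stats

# Illustrative prior over the year (counting from now) in which a takeoff
# occurs. The lognormal shape and its median of ~50 years are assumptions
# made for the sake of the example, not an actual forecast.
years = np.arange(1, 501)
prior = stats.lognorm(s=1.0, scale=50.0).pdf(years)
prior /= prior.sum()

for t in (0, 10, 20, 50, 100, 200):
    # Condition on "no takeoff during the first t years".
    posterior = np.where(years > t, prior, 0.0)
    posterior /= posterior.sum()
    # Probability that the takeoff falls within the next 10 years.
    p_next_decade = posterior[(years > t) & (years <= t + 10)].sum()
    print(f"after {t:3d} takeoff-free years: "
          f"P(takeoff in next 10y) = {p_next_decade:.3f}")
```

Under this toy prior the 10-year estimate rises for the first few decades and declines thereafter; the location of that peak is exactly the inflection point I am asking about, and it depends entirely on the prior one chooses.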
AI can beat humans at chess, autonomously generate functional genomics hypotheses, discover laws of physics on its own, create original, modern music, identify pedestrians and avoid collisions, answer questions posed in natural language, transcribe and translate spoken language, recognize images of handwritten, typewritten or printed text, produce human speech, traverse difficult terrain...
There seems to be a lot of progress in computer science, but it doesn't tell us much about the probability, let alone the timescale, of artificial general intelligence undergoing explosive recursive self-improvement. Do we even know what evidence we are looking for? Would we recognize it if we saw it?
How can we tell when we know enough to build a seed AI that could develop, from within a box, superhuman skills as diverse as social engineering, none of them hardcoded? How do we even tell whether such a thing is possible in principle? What evidence could convince someone that it is possible, or that it is impossible?
Just imagine you emulated a grown-up human mind and it wanted to become a pickup artist: how would it do that with only an Internet connection? It would need some sort of avatar at the very least, and it would then have to wait for the environment to provide a lot of feedback.
So even the emulation of a grown-up mind would find it really hard to acquire some capabilities. How, then, is the emulation of a human toddler going to acquire those skills? Worse still, how is some sort of abstract AGI, which lacks even the hardcoded capabilities of a human toddler, going to do it?
Can we even attempt to imagine what it is about a boxed emulation of a human toddler that makes it unable to become a master of social engineering in a very short time?
Can we imagine what is missing, such that adding it would enable one of today's expert systems to quickly evolve vastly superhuman capabilities within its narrow area of expertise?
If we are completely clueless about how a seed AI with the potential to become superhumanly intelligent could even be possible, how could we possibly update our probability estimates by looking at technological progress? There is a lot of technological progress, even in the field of AI, but it doesn't seem to tell us much about AI going FOOM.
Good points. That list of AI abilities alone is worth the upvote.