"Superhuman AI," as the term is generally used, is measured against a fixed reference standard: roughly, your average rationalist computer scientist circa 2013. This particular definition has meaning because if we posit that human beings are able to create an AGI, then a first-generation superhuman AGI would be able to understand and modify its own source code, thereby starting the FOOM process. If human beings are not smart enough to write an AGI, this is a moot point. But if we are, then we can be sure that once a self-modifying AGI also reaches human-level capability, it will quickly surpass us in a singularity event.
So whether IA advances humans faster or slower than AGI is rather uninteresting. All that matters is when a self-modifying AGI becomes more capable than its creators were at the time of its inception.
As to your very last point, it is probably because the timescales for AI are much closer than those for IA. AI is basically a solvable software problem, and there are many supercompute clusters in the world that are probably capable of running a superhuman AGI at real-time speeds, if such software existed. Significant IA, on the other hand, requires fundamental breakthroughs in hardware...
I like to imagine that eventually we will be able to boil the counter-intuitive parts of quantum physics down into something more elegant. I keep coming back to the idea that every currently known interaction could theoretically be modeled as the interaction of variously polarized electromagnetic waves: for instance, mass arising from the rotational acceleration of light, and charge emerging from the cross-interactions of polarized photons. I doubt the idea really carves reality at the joints, but I think it's probably closer to accurate than the Standard Model, which is functional but patchworked, much like the predictive models used by astronomers prior to the acceptance of heliocentrism.