Not really. My understanding of AI is far from grandiose; I know less about it than about my own fields (Philo, BioAnthro). I've merely read all of FHI, most of MIRI, half of AIMA, Paul's blog, maybe four popular and two technical books on related issues, and at most 60 papers on AGI per se. I don't code, and I have only a coarse-grained understanding of the area. But in the little research and time I've had to look into it, I saw no convincing evidence for a cap on the level of sophistication that a system's cognitive abilities can achieve. I have also not seen very robust evidence that would support the hypothesis of a fast takeoff.
Beware the Dunning–Kruger effect.
Looking at the big picture, you could also say that there is no convincing evidence for a cap on the lifespan of a biological organism. Heck, some trees have been alive for over 10,000 years! Yet once you look at the nitty-gritty details of biomedical research, it becomes clear that even adding just a few decades to the human lifespan is a very hard problem, and researchers still largely don't know how to solve it.
It's the same for AGI. Maybe truly superhuman AGI is physically impossible for complexity reasons, but even if it is possible, developing it is a very hard problem, and researchers still largely don't know how to solve it.
Is it reasonable to say that what really matters is whether there's a fast or slow takeoff? A slow takeoff or no takeoff may limit us to EA for the indefinite future, while a fast takeoff means transhumanism and immortality are probably conditional on, and subsequent to, threading the narrow eye of the FAI needle.
See the link with a flowchart on 12.