Recently, I have been trying to reason about why I believe what I believe regarding AGI. It appears to me that there is not enough discussion of the arguments against AGI (more specifically, AGI skepticism), and more of it might be beneficial.
Is this because the arguments are too weak, or because the AI safety community is (understandably) biased toward imminent AGI?
Part of my motivation is a reaction to recent advancements (such as o3) and the alarmingly short timelines being discussed (less than 3 years). I want to understand the other side's points as well.
Based on what I found online, the main arguments are roughly the following (paraphrased, since most of the sources are informal, e.g. Wikipedia):
- Outdated statements, usually from scientists around the 2010s, before LLMs
- Ethics-adjacent arguments, claiming the dangers of AGI are distractions from the real, present-day harms of AI (racism, bias, etc.)
- Frontier-lab propaganda: labs claim AGI is coming soon to keep stakeholders happy and investment flowing
- Cognitive-science arguments, claiming it is intractable to create a human-level mind with a computer
What do people think? What are some good resources or researchers offering strong counterpoints to the imminent-AGI position?
Edit: Modified slightly to focus only on the AGI argument, rather than including safety implications as well.
One argument against it: I think it's coming soon, and I have a 40-year history of frothing technological enthusiasm, often predicting things will arrive decades before they actually do. 😀