I was part of a group that ran a PhilPapers-style survey and metasurvey targeting NLP researchers who publish at venues like ACL. Results are here (Tweet-thread version). It didn't target AGI timelines, but had some other questions that could be of interest to people here:
- NLP is on a path to AGI: 58% agreed that *Understanding the potential development of artificial general intelligence (AGI) and the benefits/risks associated with it should be a significant priority for NLP researchers.*
  - Related: 57% agreed that *Recent developments in large-scale ML modeling (such as in language modeling and reinforcement learning) are significant steps toward the development of AGI.*
- AGI could be revolutionary: 73% agreed that *In this century, labor automation caused by advances in AI/ML could plausibly lead to economic restructuring and societal changes on at least the scale of the Industrial Revolution.*
- AGI could be catastrophic: 36% agreed that *It is plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war.*
  - 46% of women and 53% of underrepresented-minority (URM) respondents agreed.
  - The comments suggested that people interpreted this statement in a pretty wide range of ways, including scenarios like OOD robustness failures leading to weapons launches.
- Few scaling maximalists: 17% agreed that *Given resources (i.e., compute and data) that could come to exist this century, scaled-up implementations of established existing techniques will be sufficient to practically solve any important real-world problem or application in NLP.*
  - The metasurvey responses predicted that 47% would agree with this, so there are fewer scaling maximalists than people expected.
- Optimism about ideas from cognitive science: 61% agreed that *It is likely that at least one of the five most-cited systems in 2030 will take clear inspiration from specific, non-trivial results from the last 50 years of research into linguistics or cognitive science.*
  - This strikes me as very optimistic, since it's pretty clearly false of the most-cited systems today.
- Optimism about the field: 87% agreed that *On net, NLP research continuing into the future will have a positive impact on the world.*
  - 32% of respondents who agreed that NLP will have a positive future impact on society also agreed that there is a plausible risk of global catastrophe.
- Most NLP research is crap: 67% agreed that *A majority of the research being published in NLP is of dubious scientific value.*
Fair. For better or worse, a lot of this variation in framing came from piloting: we got a lot of nudges from pilot participants to move toward framings that were perceived as controversial or up for debate.