The unfalsifiable belief in (doomsday) superintelligence scenarios?
Having read Bostrom's Superintelligence and a couple of adjacent papers by him and Yudkowsky, I find there is no clear line of argument from beginning to end, but rather a disjunctive list of possibilities that all lead to similar extinction events. This makes the entire theory unfalsifiable, cut...
What are the most comprehensive arguments for paths to superintelligence?
My list (please tell me if there is a more comprehensive argument for a certain path or if there is a path that I missed).
- Whole brain emulation (quite old, 2008, but comprehensive, >100 pages)
  - Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap. Technical Report #2008-3, Future of Humanity Institute, Oxford University.
- Artificial evolution (not very comprehensive; only pp. 11-31 actually discuss artificial evolution)
  - Chalmers, D. J. (2016). The singularity: A philosophical analysis. In Science Fiction and Philosophy: From Time Travel to Superintelligence, 171-224.
- Forecasting direct programming (in tandem with hardware improvements) on empirical grounds
  - https://www.lesswrong.com/s/B9Qc8ifidAtDpsuu8 (a work in progress)
- Brain-computer interface
  - Can't find much of a roadmap comparable to the whole brain emulation one above