...has finally been published.
Contents:
- Uziel Awret - Introduction
- Susan Blackmore - She Won’t Be Me
- Damien Broderick - Terrible Angels: The Singularity and Science Fiction
- Barry Dainton - On Singularities and Simulations
- Daniel Dennett - The Mystery of David Chalmers
- Ben Goertzel - Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood?
- Susan Greenfield - The Singularity: Commentary on David Chalmers
- Robin Hanson - Meet the New Conflict, Same as the Old Conflict
- Francis Heylighen - Brain in a Vat Cannot Break Out
- Marcus Hutter - Can Intelligence Explode?
- Drew McDermott - Response to ‘The Singularity’ by David Chalmers [this is a version corrected by McDermott, and therefore preferred to the one published in JCS]
- Jürgen Schmidhuber - Philosophers & Futurists, Catch Up!
- Frank Tipler - Inevitable Existence and Inevitable Goodness of the Singularity
- Roman Yampolskiy - Leakproofing the Singularity: Artificial Intelligence Confinement Problem
The issue consists of responses to Chalmers (2010). Future volumes will contain additional articles by Shulman & Bostrom, Igor Aleksander, Richard Brown, Ray Kurzweil, Pamela McCorduck, Chris Nunn, Arkady Plotnitsky, Jesse Prinz, Susan Schneider, Murray Shanahan, and Burt Voorhees, as well as a response from Chalmers.
McDermott's article should be supplemented with this, which he says he didn't have space for in the published JCS version.
A Gödel machine, if one were to exist, surely wouldn't do something so blatantly stupid as posting to the Internet a "recipe for practically feasible self-improving Gödel machines or AIs in form of code into which one can plug arbitrary utility functions". Why can't humanity aspire to this rather minimal standard of intelligence and rationality?
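(For context, since the term is doing the work here: a Gödel machine, roughly as Schmidhuber defines it in his earlier papers rather than in the JCS article itself, is a program that rewrites any part of its own code only after its built-in proof searcher has found a proof that the rewrite increases its expected utility, something like

$$u(s, \mathrm{Env}) = E_\mu\!\left[\,\sum_{\tau=\text{now}}^{T} r(\tau) \;\middle|\; s, \mathrm{Env}\right]$$

where $s$ is the machine's current state, $\mathrm{Env}$ its environment, $r(\tau)$ the reward at time $\tau$, and $T$ its lifetime; notation simplified from Schmidhuber's Gödel machine papers.)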
Similar theme from Hutter's paper:
If an AIXI had the choice between creating another AIXI (which, by definition, would maximize its own rewards rather than its creator's) and creating a different, non-AIXI agent whose goal was to serve its creator's goals, surely it would choose the latter. If AIXI is the pinnacle of intelligence (as Hutter claims), and an AIXI wouldn't build another AIXI, why should we? Because we're just too dumb?
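(For reference, "maximizing its own rewards" is literal in Hutter's definition. Sketching it roughly, with $U$ a universal Turing machine, $\ell(q)$ the length of program $q$, and $m$ the horizon, AIXI picks each action $a_k$ to maximize expected future reward in its own percept stream:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Nothing in that objective refers to the goals of the agent's creator, which is what the question above turns on.)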