Wow, that's all kinds of crazy. I'm not sure how much of it holds up, as I'm not a mathematical physicist - MWI and quantum mechanics implied by Newton? Really? - but one big red flag for me is pp. 187-188, where he doggedly insists that the universe is closed, although as far as I know the current cosmological consensus is the opposite, and I trust the cosmologists a heck of a lot more than a fellow who tries to prove his Christianity with his physics.
(This is actually convenient for me: a few weeks ago I was wondering on IRC what the current status of Tipler's theories was, given that he had clearly stated they were valid only if the universe were closed and the Higgs boson was within certain values, IIRC, but I was feeling too lazy to look it all up.)
And the extraction of a transcendent system of ethics from a Feynman quote...
...A moment’s thought will convince the reader that Feynman has described not only the process of science, but the process of rationality itself. Notice that the bold-faced words are all moral imperatives. Science, in other words, is fundamentally based on ethics. More generally, rational thought itself is based on ethics. It is based on a particular ethical system. A true hu...
The way I look at it, it's 'if such material can survive peer review, what do people make of things whose authors either did not try to pass peer review or could not pass it? They probably think pretty poorly of them.'
I'm very grateful to the undergraduate professor of mine who introduced me to Penrose and Tipler as a freshman. I think at that time I was on the cusp of falling into a similar failure state, and reading Shadows of the Mind and The Physics of Immortality shocked me out of what would have been a very long dogmatic slumber indeed.
Daniel Dennett's "The Mystery of David Chalmers" quickly dismissed the Singularity without really saying why:
My reactions to the first thirty-odd pages did not change my mind about the topic, aside from provoking the following judgment, perhaps worth passing along: thinking about the Singularity is a singularly imprudent pastime, in spite of its air of cautious foresight, since it deflects our attention away from a much, much more serious threat, which is already upon us, and shows no sign of being an idle fantasy: we are becoming, or have become, enslaved by something much less wonderful than the Singularity: the internet.
and then spent the rest of his paper trying to figure out why Chalmers isn't a type-A materialist.
By the way, procrastinating on the internet may be the #1 factor delaying the Singularity. Before we make the first machine capable of programming better machines, we may make a dozen machines capable of distracting us so much that we will never accomplish anything beyond that point.
People need cool names to treat ideas seriously, so let's call this apex of human invention "Procrastinarity". Formally: the better tools people can make, the more distraction they provide, so there is a limit for a human civilization where there is so much distraction that no one is able to focus on making better tools. (More precisely: even if some individuals can focus at this point, they will not find enough support, friends, mentors, etc., so without the necessary scientific infrastructure they cannot meaningfully contribute to human progress.) This point is called Procrastinarity, and all real human progress stops there. A natural disaster may eventually reduce humanity to pre-Procrastinarity levels, but if humans overcome those problems, they will just reach another Procrastinarity phase. We will reach the first Procrastinarity within the next 30 years with probability 50%.
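For the sake of the joke, here is a toy numerical sketch of the claim. Everything in it is made up by me (the functional forms and constants are arbitrary); it only illustrates how "tools help with diminishing returns, distraction grows with the tools" produces a ceiling where net progress stalls:

```python
# Toy "Procrastinarity" model: tool quality feeds both productivity and
# distraction, so net progress approaches zero at a fixed point.
# Purely illustrative; the functional forms and constants are invented.

def effective_progress(tools, distraction_rate=0.09):
    """Net research output: raw capability minus distraction overhead."""
    raw_output = tools ** 0.5               # better tools help, with diminishing returns
    distraction = distraction_rate * tools  # but distraction grows with the tools
    return max(raw_output - distraction, 0.0)

tools = 1.0
for year in range(201):
    gain = effective_progress(tools)
    if year % 25 == 0:
        print(f"year {year:3d}: tools = {tools:7.1f}, net progress = {gain:5.2f}")
    tools += gain  # whatever progress remains still improves the tools
```

Net progress first rises, then collapses toward zero as the tool level approaches the fixed point; that stall is the "Procrastinarity" of the paragraph above.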
There's another such curve, incidentally - I've been reading up on scientific careers, and there's solid-looking evidence that a modern scientist makes his best discoveries about a decade later in life than scientists did in the early 1900s. This is a problem because productivity drops off in a scientist's 40s and is pretty small in the 50s and later, and that cutoff has remained constant (despite the small improvements in longevity over the 20th century).
So if your discoveries only really begin in your late 20s, you face a deadline in your 40s, and each century we lose a decade, this suggests that within two centuries most of a scientist's career will be spent being trained, learning, helping out on other experiments, and in general just catching up!
We might call this the PhDalarity - the ever-growing amount of graduate and post-graduate experience needed before one can make a major discovery.
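A back-of-the-envelope version of that arithmetic, using ages and a drift rate I have assumed for illustration rather than taken from the literature:

```python
# Rough sketch of the "PhDalarity" arithmetic. The specific ages and the
# drift rate are assumptions of mine, not figures from the cited studies.

CAREER_END = 50          # assumed age after which major discoveries become rare
START_NOW = 28           # assumed current age at which serious discoveries begin
DRIFT_PER_CENTURY = 10   # assumed extra years of training needed per century

for centuries_ahead in range(3):
    start = START_NOW + DRIFT_PER_CENTURY * centuries_ahead
    window = max(CAREER_END - start, 0)
    print(f"+{centuries_ahead * 100:3d} years: productive window of about {window} years "
          f"(ages {start} to {CAREER_END})")
```

Under these assumptions the productive window shrinks from roughly 22 years today to about 2 years two centuries out, which is the "mostly catching up" scenario described above.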
Sue's article is here: She won’t be me.
Robin's article is here: Meet the New Conflict, Same as the Old Conflict - see also O.B. blog post
Francis's article is here: A brain in a vat cannot break out: why the singularity must be extended, embedded and embodied.
Marcus Hutter: Can Intelligence Explode?.
I thought the idea that machine intelligence would be developed in virtual worlds on safety grounds was pretty daft. I explained this at the time:
IMO, people want machine intelligence to help them to attain their goals. Machines can't do that if they are isolated off in virtual worlds. Sure there will be test harnesses - but it seems rather unlikely that we will keep these things under extensive restraint on grounds of sheer paranoia - that would stop us from taking advantage of them.
However, Francis's objections to virtual worlds seem even sillier to me. I've been hearing that simulations aren't real for decades now - and I still don't really understand why people get into a muddle over this issue.
Brief overview of Goedel machines; sort of a rebuke of other authors for ignoring the optimality results for them and AIXI etc.
Simultaneously, our non-universal but still rather general fast deep/recurrent neural networks have already started to outperform traditional pre-programmed methods: they recently collected a string of 1st ranks in many important visual pattern recognition benchmarks, e.g. Graves & Schmidhuber (2009); Ciresan et al. (2011): IJCNN traffic sign competition, NORB, CIFAR10, MNIST, three ICDAR handwriting competitions. Here we greatly profit from recent advances in computing hardware, using GPUs (mini-supercomputers normally used for video games) 100 times faster than today’s CPU cores, and a million times faster than PCs of 20 years ago, complementing the recent above-mentioned progress in the theory of mathematically optimal universal problem solvers.
On falsified predictions of AI progress:
...I feel that after 10,000 years of civilization there is no need to justify pessimism through comparatively recent over-optimistic and self-serving predictions (1960s: ‘only 10 instead of 100 years needed to build AIs’) by a few early AI enthusiasts.
Similar theme from Hutter's paper:
Will AIXI replicate itself or procreate? Likely yes, if AIXI believes that clones or descendants are useful for its own goals.
If AIXI had the option of creating an AIXI (which by definition has the goal of maximizing its own rewards), or creating a different AI (non-AIXI) that had the goal of serving the goals of its creator instead, surely it would choose the latter option. If AIXI is the pinnacle of intelligence (as Hutter claims), and an AIXI wouldn't build another AIXI, why should we? Because we're just too dumb?
Pretty good overview of the AI boxing problem with respect to covert channels; possibly the first time I've seen Eliezer's experiments cited, or Stuart Armstrong's Dr. Evil anthropic attack.
While the outlined informational hazards comprise over a dozen categories and are beyond the scope of this paper, it is easy to see how mental state of a person could be stressed to an unstable state. For example a religious guard could be informed of all the (unknown to him) contradictions in the main text of his religion causing him to question his beliefs and the purpose of life.
Given the length of the paper, I kind of expected there to be no mention of homomorphic encryption, the boxing proposal that seems most viable to me, but to my surprise I read:
The source code and hardware configuration of the system needs to be obfuscated (Yampolskiy & Govindaraju, 2007a) and important modules of the program should be provided only in the homomorphicly encrypted (Gentry, 2009) form, meaning that it could be used for computation or self-improvement (Hall, 2007), but not for self-analysis.
Important modules? Er, why not just the whole thing? If you have homomorphic encryption working and proven correct, the other measures may add a little security, but not a whole lot.
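For readers who haven't run into the primitive before, here is a toy sketch of an additively homomorphic scheme (Paillier) in Python. The key sizes are absurdly small and the code is purely illustrative, but it shows the property the proposal relies on: whoever runs the computation can combine ciphertexts without ever being able to read them (requires Python 3.8+ for the modular inverse):

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Insecure demonstration only - the primes are tiny and fixed.
from math import gcd
from random import randrange

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 293, 433                 # toy primes (real keys use ~1024-bit primes)
n, n_sq = p * q, (p * q) ** 2
lam = lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)            # modular inverse of lambda mod n

def encrypt(m):
    r = randrange(2, n)
    while gcd(r, n) != 1:
        r = randrange(2, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return ((pow(c, lam, n_sq) - 1) // n * mu) % n

a, b = 17, 25
combined = (encrypt(a) * encrypt(b)) % n_sq  # multiplying ciphertexts...
print(decrypt(combined))                     # ...adds the plaintexts: prints 42
```

Fully homomorphic schemes (Gentry, 2009) extend this from addition to arbitrary circuits, which is what running a whole AI under encryption would require.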
Our reason for placing the Singularity within the lifetimes of practically everyone now living who is not already retired, is the fact that our supercomputers already have sufficient power to run a Singularity level program (Tipler, 2007). We lack not the hardware, but the software. Moore’s Law insures that today’s fastest supercomputer speed will be standard laptop computer speed in roughly twenty years (Tipler, 1994).
Really? I was unaware that Moore's law was an actual physical law. Our state of the art has already hit the absolute physical limit of transistor design - we have single-atom transistors in the lab. So, if you'll forgive me, I'll be taking the claim that "Moore's law ensures that today's fastest supercomputer speed will be the standard laptop computer speed in 20 years" with a bit of salt.
Now, perhaps we'll have some other technology that allows laptops twenty years hence to be as powerful as today's supercomputers. But to just handwave that enormous engineering problem away by saying "Moore's law will take care of it" is fuzzy thinking of the worst sort.
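To put rough numbers on the quoted claim, here is the arithmetic under the usual reading of Moore's law as a performance doubling every 1.5 to 2.5 years (continued doubling being, of course, exactly the assumption in dispute). The supercomputer-to-laptop ratio in the comment at the end is my own ballpark figure, not something from the paper:

```python
# How much speedup does 20 years of Moore's-law doubling actually buy?
years = 20
for doubling_period in (1.5, 2.0, 2.5):
    factor = 2 ** (years / doubling_period)
    print(f"doubling every {doubling_period} years -> ~{factor:,.0f}x in {years} years")

# A top supercomputer today is very roughly 10^5 times faster than a laptop
# (ballpark assumption), so even an optimistic 1.5-year doubling period
# (~10,000x in 20 years) leaves the quoted claim looking strained.
```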
I like Goertzel's succinct explanation of the idea behind Moore's Law of Mad Science:
...as technology advances, it is possible for people to create more and more destruction using less and less money, education and intelligence.
Also, his succinct explanation of why Friendly AI is so hard:
The practical realization of [Friendly AI] seems likely to require astounding breakthroughs in mathematics and science — whereas it seems plausible that human-level AI, molecular assemblers and the synthesis of novel organisms can be achieved via a series of moderate-level breakthroughs alternating with ‘normal science and engineering.’
Another choice quote that succinctly makes a key point I find myself making all the time:
if the US stopped developing AI, synthetic biology and nanotech next year, China and Russia would most likely interpret this as a fantastic economic and political opportunity, rather than as an example to be imitated.
His proposal for Nanny AI, however, appears to be FAI-complete.
Also, it is strange that despite paragraphs like this:
we haven’t needed an AI Nanny so far, because we haven’t had sufficiently powerful and destructive technologies. And now, these same technologies that may necessitate the creation of an AI Nanny, also may provide the means of creating it.
...he does not anywhere cite Bostrom (2004).
A quote from Dennett's article, on the topic of consciousness:
...‘One central problem,’ Chalmers tells us, ‘is that consciousness seems to be a further fact about conscious systems’ (p. 43) over and above all the facts about their structure, internal processes and hence behavioral competences and weaknesses. He is right, so long as we put the emphasis on ‘seems’. There does seem to be a further fact to be determined, one way or another, about whether or not anybody is actually conscious or a perfect (philosopher’s) zombie. This is what I have called the Zombic Hunch...
..."What if, as Vernor Vinge proposed, exponentially accelerating science and technology are rushing us into a Singularity (Vinge, 1986; 1993), what I have called the Spike? Technological time will be neither an arrow nor a cycle (in Stephen Jay Gould’s phrase), but a series of upwardly accelerating logistical S-curves, each supplanting the one before it as it flattens out. Then there’s no pattern of reasoned expectation to be mapped, no knowable Chernobyl or Fukushima Daiichi to deplore in advance. Merely - opacity."
In "Leakproofing..."
"To reiterate, only safe questions with two possible answers of even likelihood which are independently computable by people should be submitted to the AI."
Oh come ON. I can see 'independently computable', but requiring single bit responses that have been carefully balanced so we have no information to distinguish one from the other? You could always construct multiple questions to extract multiple bits, so that's no real loss; and with awareness of Bayes' theorem, getting an exact probability balance is essentially impossible on any question we'd actually care about.
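A minimal illustration of the multiple-questions point, with an entirely hypothetical setup: limiting each answer to a single bit does nothing to bound the total information, since k balanced yes/no questions leak k bits.

```python
# Hypothetical "boxed AI" that may only answer one-bit questions.
secret = 0b101101  # some value the AI knows and its operators want

def ask(bit_index: int) -> bool:
    """One 'safe' single-bit question: is bit i of the value set?"""
    return bool((secret >> bit_index) & 1)

answers = [ask(i) for i in range(6)]                # six one-bit questions
recovered = sum(int(b) << i for i, b in enumerate(answers))
print(bin(recovered))                               # 0b101101 - six bits recovered
```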
In my opinion, the most relevant article was from Drew McDermott, and I'm surprised that such an emphasis on analyzing the computational complexity of approaches to 'friendliness' and self-improving AI has not been more common. For that matter, I think computational complexity has more to tell us about cognition, intelligence, and friendliness in general, not just in the special case of self-improving optimization/learning algorithms, and could completely modify the foundational assumptions underlying ideas about intelligence/cognition and the singularity.
I wish I could read the Dennett article online. If Chalmers has a philosophical nemesis it has to be Dennett. Though he probably sees it otherwise, I contend that Dennett's hard materialism is losing ground daily in the academic and philosophical mainstream even as Chalmers' non-reductive functionalism gains in appreciation. (Look at Giulio Tononi's celebrated IIT theory of consciousness with its attendant panpsychism for just one example. And that's in the hard sciences, not philosophy.)
I gather from the comments here that Dennett is no fan of the Singularity...
Many of those people are believers who are already completely sold on the idea of a technological singularity. I hope some sort of critical examination is forthcoming as well.
Schmidhuber, Hutter and Goertzel might be called experts. But I would argue that statements like "progress towards self-improving AIs is already substantially beyond what many futurists and philosophers are aware of" are almost certainly bullshit.
...has finally been published.
Contents:
The issue consists of responses to Chalmers (2010). Future volumes will contain additional articles from Shulman & Bostrom, Igor Aleksander, Richard Brown, Ray Kurzweil, Pamela McCorduck, Chris Nunn, Arkady Plotnitsky, Jesse Prinz, Susan Schneider, Murray Shanahan, Burt Voorhees, and a response from Chalmers.
McDermott's chapter should be supplemented with this, which he says he didn't have space for in his JCS article.