Isolation is trickier than it sounds. If AI is created once, then we can assume that humanity is an AI-creating species. What constraints on tech, action, and/or intelligence would be necessary to guarantee that no one makes an AI in what was supposed to be a safe-for-humans region?
Right. I'm often asked, "Why not just keep the AI in a box, with no internet connection and no motors with which to move itself?"
Eliezer's experiments with AI-boxing suggest the AI would escape anyway, but there is a stronger reply.
If we've created a superintelligence and put it in a box, that means that others on the planet are just about capable of creating a superintelligence, too. What are you going to do? Ensure that every superintelligence everyone creates is properly boxed? I think not.
Before long, the USA or China or whoever is going to think that their superintelligence is properly constrained and loyal, and release it into the wild in an effort at world domination. You can't just keep boxing AIs forever.
"we can still emulate it if it is mechanical."
Right, but how many more orders of magnitude of hardware do we need in this case? That depends on what level of abstraction is sufficient. Isn't it the case that if intelligence relies on the base level and has no useful higher-level abstractions, the amount of computation needed would be absurd (assuming the base level is computable at all)?
Right, but how many more orders of magnitude of hardware do we need in this case?
Probably a few fewer. This OB post explains how a good deal of the brain's complexity might be mechanical work to increase signal robustness. Cooled supercomputers with failure rates of 1 in 10^20 (or whatever the actual rate is) won't need to simulate the parts of the brain that error-correct or maintain operation during sneezes or bumps on the head.
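To put a rough number on that intuition, here is a toy Python sketch (my own illustration, not from the linked post): if each noisy biological component fails 1% of the time (an arbitrary placeholder), majority voting over redundant copies is one crude way to reach the 1-in-10^20 reliability of good digital hardware, and the redundancy factor it requires hints at how much neural machinery might exist purely for robustness.

```python
# A toy redundancy calculation, not from the linked post.  The 1% per-unit
# failure rate is a made-up placeholder for an unreliable "wet" component;
# 1e-20 stands in for the reliability of well-engineered digital hardware.
from math import comb

def majority_failure_prob(n: int, p: float) -> float:
    """Probability that a majority vote over n independent units,
    each failing with probability p, gives the wrong answer (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def units_needed(p: float, target: float) -> int:
    """Smallest odd n whose majority vote reaches the target failure rate."""
    n = 1
    while majority_failure_prob(n, p) > target:
        n += 2
    return n

noisy_p, target = 0.01, 1e-20
n = units_needed(noisy_p, target)
print(f"~{n} noisy units per reliable operation "
      f"(failure prob {majority_failure_prob(n, noisy_p):.1e})")
```

With these placeholder numbers the answer comes out on the order of a few dozen redundant units per reliable operation, a multiplicative overhead that an emulation running on already-reliable hardware could plausibly skip.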
I think this analysis assumes or emphasizes a false distinction between humans and "AI". For example, Searle's Chinese Room is an artificial intelligence built partly out of a human. It is easy to imagine intelligences built strictly out of humans, without paperwork. When humans behave like humans, we naturally form supervening entities (groups, tribes, memes).
I tried to rephrase Chalmers' four-point argument without making a distinction between humans acting "naturally" (whatever that means) and "artificial intelligences":
There is
Most of this assumes that values are independent of intelligence, as Hume argued. But if Hume was wrong and Kant was right, then we will be less able to constrain the values of a superintelligent machine, but the more rational the machine is, the better values it will have.
Are there any LW-rationalist-vetted philosophical papers on this theme in modern times? (I'm somewhat skeptical of the idea that there isn't a universal morality (relative to some generalized Occamian prior-like-thing) that even a paperclip maximizer would converge to (if it was given...
Singularitarian authors will also be pleased that they can now cite a peer-reviewed article by a leading philosopher of mind who takes the Singularity seriously.
Critics will no doubt draw attention to David's previous venture, zombies.
Sure, we think he's wrong, but does academia? That the Singularity has supporters on more than one side of that debate is good news.
Dualism is a minority position:
http://philpapers.org/surveys/results.pl
Mind: physicalism or non-physicalism?
Accept or lean toward: physicalism 526 / 931 (56.5%)
Accept or lean toward: non-physicalism 252 / 931 (27.1%)
Other 153 / 931 (16.4%)
Philosophers are used to the fact that they have major disagreements with each other. Even if you think zombie arguments fail, as I do, you'll still perk up your ears when somebody as smart as Chalmers is taking the singularity seriously. I don't accept his version of property dualism, but The Conscious Mind was not written by a dummy.
From a philosopher's viewpoint, Chalmers's work on p-zombies is very respectable. It is exactly the kind of thing that good philosophers do, however mystifying it may seem to a layman.
Nevertheless, to more practical people, particularly those of a materialist, reductionist, monist persuasion, it all looks a little silly. I would say that the question of whether p-zombies are possible is about as important to AI researchers as the question of whether there are non-standard models of set theory is to a working mathematician.
That is, not much. It is a very fundamental and technically difficult matter, but, in the final analysis, the resolution of the question matters a whole lot less than you might have originally thought. Chalmers and Searle may well be right about the possibility of p-zombies, but if they are, it is for narrow technical reasons. And if that has the consequence that you can't completely rule out dualism, well, so be it. Whether philosophers can or cannot rule something out makes very little difference to me. I'm more interested in whether a model is useful than in whether it has a possibility of being true.
There are a few major problems with being certain of the singularity. First, we might be too stupid to create a human-level AI. Second, it might not be possible, for some reason of which we are currently unaware, to create a human-level AI. Third, and importantly, we could be too smart.
How would that last one work? Maybe we can push technology to the limits ourselves, and no AI can be smart enough to push it further. We don't even begin to have enough knowledge to know if this is likely. In other words, maybe it will all be perfectly comprehensible to us as o...
David Chalmers is a leading philosopher of mind, and the first to publish a major philosophy journal article on the singularity:
Chalmers, D. (2010). "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies 17:7-65.
Chalmers' article is a "survey" article in that it doesn't cover any arguments in depth, but quickly surveys a large number of positions and arguments in order to give the reader a "lay of the land." (Compare to Philosophy Compass, an entire journal of philosophy survey articles.) Because of this, Chalmers' paper is a remarkably broad and clear introduction to the singularity.
Singularitarian authors will also be pleased that they can now cite a peer-reviewed article by a leading philosopher of mind who takes the singularity seriously.
Below is a CliffsNotes summary of the paper for those who don't have time to read all 58 pages of it.
The Singularity: Is It Likely?
Chalmers focuses on the "intelligence explosion" kind of singularity, and his first project is to formalize and defend I.J. Good's 1965 argument. Defining AI as AI "of human level intelligence," AI+ as AI "of greater than human level," and AI++ as AI "of far greater than human level" (superintelligence), Chalmers updates Good's argument to the following:

1. There will be AI (before long, absent defeaters).
2. If there is AI, there will be AI+ (soon after, absent defeaters).
3. If there is AI+, there will be AI++ (soon after, absent defeaters).
4. Therefore, there will be AI++ (before long, absent defeaters).
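One small observation worth making explicit (mine, not part of the paper summary): the argument's propositional skeleton is just two chained applications of modus ponens, so it is deductively valid and all of the action is in the premises. A minimal check of that skeleton in Lean, with the temporal qualifiers deliberately dropped:

```lean
-- Propositional skeleton of Chalmers' updated argument: given the three
-- premises, the conclusion follows by chaining modus ponens twice.
-- The "before long" / "absent defeaters" qualifiers are omitted here.
example (AI AIplus AIplusplus : Prop)
    (p1 : AI)                     -- premise 1: there will be AI
    (p2 : AI → AIplus)            -- premise 2: if AI, then AI+
    (p3 : AIplus → AIplusplus)    -- premise 3: if AI+, then AI++
    : AIplusplus :=               -- conclusion: there will be AI++
  p3 (p2 p1)
```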
By "defeaters," Chalmers means global catastrophes like nuclear war or a major asteroid impact. One way to satisfy premise (1) is to achieve AI through brain emulation (Sandberg & Bostrom, 2008). Against this suggestion, Lucas (1961), Dreyfus (1972), and Penrose (1994) argue that human cognition is not the sort of thing that could be emulated. Chalmers (1995; 1996, chapter 9) has responded to these criticisms at length. Briefly, Chalmers notes that even if the brain is not a rule-following algorithmic symbol system, we can still emulate it if it is mechanical. (Some say the brain is not mechanical, but Chalmers dismisses this as being discordant with the evidence.)
Searle (1980) and Block (1981) argue instead that even if we can emulate the human brain, it doesn't follow that the emulation is intelligent or has a mind. Chalmers says we can set these concerns aside by stipulating that when discussing the singularity, AI need only be measured in terms of behavior. The conclusion that there will be AI++ at least in this sense would still be massively important.
Another consideration in favor of premise (1) is that evolution produced human-level intelligence, so we should be able to build it, too. Perhaps we will even achieve human-level AI by evolving a population of dumber AIs through variation and selection in virtual worlds. We might also achieve human-level AI by direct programming or, more likely, systems of machine learning.
Premise (2) is plausible because AI will probably be produced by an extendible method, and so extending that method will yield AI+. Brain emulation might turn out not to be extendible, but the other methods are. Even if human-level AI is first created by a non-extendible method, this method itself would soon lead to an extendible method, and in turn enable AI+. AI+ could also be achieved by direct brain enhancement.
Premise (3) is the amplification argument from Good: an AI+ would be better than we are at designing intelligent machines, and could thus improve its own intelligence. Having done that, it would be even better at improving its intelligence. And so on, in a rapid explosion of intelligence.
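To see why this yields an explosion rather than mere steady progress, here is a toy simulation (mine, not Chalmers'; the 10% gain per generation and the assumption that smarter systems also design faster are arbitrary placeholders): give each generation a fixed proportional intelligence gain over its designer, and optionally let smarter designers complete their design cycles proportionally faster.

```python
# A numerical cartoon of the amplification step, not Chalmers' argument.
# Placeholders: each generation designs a successor 10% smarter, and
# (optionally) a designer that is x times smarter finishes its design
# cycle x times faster.

def intelligence_explosion(gain: float = 1.1,
                           generations: int = 50,
                           faster_designers: bool = True):
    """Yield (generation, successor_intelligence, total_elapsed_time)."""
    intelligence, elapsed = 1.0, 0.0   # start from human-level (1.0)
    for g in range(1, generations + 1):
        # The current designer's intelligence sets how long its cycle takes.
        cycle_time = 1.0 / intelligence if faster_designers else 1.0
        elapsed += cycle_time
        intelligence *= gain           # the successor is `gain` times smarter
        yield g, intelligence, elapsed

for g, smarts, t in intelligence_explosion():
    if g % 10 == 0:
        print(f"gen {g:2d}: intelligence x{smarts:6.1f}, elapsed time {t:5.2f}")
```

With the speed-up switched on, elapsed time converges (to about 11 time units under these placeholder numbers) while intelligence grows without bound, so arbitrarily capable systems arrive within a bounded period; with it switched off, growth is merely exponential. Either way, the toy only illustrates the shape of the amplification argument, not its soundness.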
In section 3 of his paper, Chalmers argues that there could be an intelligence explosion without there being such a thing as "general intelligence" that could be measured, but I won't cover that here.
In section 4, Chalmers lists several possible obstacles to the singularity.
Constraining AI
Next, Chalmers considers how we might design an AI+ that helps to create a desirable future and not a horrifying one. If we achieve AI+ by extending the method of human brain emulation, the AI+ will at least begin with something like our values. Directly programming friendly values into an AI+ (Yudkowsky, 2004) might also be feasible, though an AI+ arrived at by evolutionary algorithms is worrying.
Most of this assumes that values are independent of intelligence, as Hume argued. But if Hume was wrong and Kant was right, then we will be less able to constrain the values of a superintelligent machine, but the more rational the machine is, the better values it will have.
Another way to constrain an AI is not internal but external. For example, we could lock it in a virtual world from which it could not escape, and in this way create a leakproof singularity. But there is a problem. For the AI to be of use to us, some information must leak out of the virtual world for us to observe it. But then the singularity is not leakproof. And if the AI can communicate with us, it could reverse-engineer human psychology from within its virtual world and persuade us to let it out of its box (onto the internet, for example).
Our Place in a Post-Singularity World
Chalmers says there are four options for us in a post-singularity world: extinction, isolation, inferiority, and integration.
The first option is undesirable. The second option would keep us isolated from the AI, a kind of technological isolationism in which one world is blind to progress in the other. The third option may be infeasible because an AI++ would operate so much faster than us that inferiority is only a blink of time on the way to extinction.
For the fourth option to work, we would need to become superintelligent machines ourselves. One path to this might be mind uploading, which comes in several varieties and has implications for our notions of consciousness and personal identity that Chalmers discusses but I will not. (Short story: Chalmers prefers gradual uploading, and considers it a form of survival.)
Conclusion
Chalmers concludes:
References
Block (1981). "Psychologism and behaviorism." Philosophical Review 90:5-43.
Chalmers (1995). "Minds, machines, and mathematics." Psyche 2:11-20.
Chalmers (1996). The Conscious Mind. Oxford University Press.
Dreyfus (1972). What Computers Can't Do. Harper & Row.
Good (1965). "Speculations concerning the first ultraintelligent machine." Advances in Computers 6:31-88.
Lucas (1961). "Minds, machines, and Gödel." Philosophy 36:112-27.
Penrose (1994). Shadows of the Mind. Oxford University Press.
Sandberg & Bostrom (2008). "Whole brain emulation: A roadmap." Technical report 2008-3, Future of Humanity Institute, Oxford University.
Searle (1980). "Minds, brains, and programs." Behavioral and Brain Sciences 3:417-57.
Yudkowsky (2004). "Coherent Extrapolated Volition."