
David Chalmers' "The Singularity: A Philosophical Analysis"

Post author: lukeprog, 29 January 2011 02:52AM

David Chalmers is a leading philosopher of mind, and the first to publish a major philosophy journal article on the singularity:

Chalmers, D. (2010). "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies 17:7-65.

Chalmers' article is a "survey" article in that it doesn't cover any arguments in depth, but quickly surveys a large number of positions and arguments in order to give the reader a "lay of the land." (Compare to Philosophy Compass, an entire journal of philosophy survey articles.) Because of this, Chalmers' paper is a remarkably broad and clear introduction to the singularity.

Singularitarian authors will also be pleased that they can now cite a peer-reviewed article by a leading philosopher of mind who takes the singularity seriously.

Below is a CliffsNotes-style summary of the paper for those who don't have time to read all 58 pages of it.

 

The Singularity: Is It Likely?

Chalmers focuses on the "intelligence explosion" kind of singularity, and his first project is to formalize and defend I.J. Good's 1965 argument. Defining AI as being "of human level intelligence," AI+ as AI "of greater than human level" and AI++ as "AI of far greater than human level" (superintelligence), Chalmers updates Good's argument to the following:

  1. There will be AI (before long, absent defeaters).
  2. If there is AI, there will be AI+ (soon after, absent defeaters).
  3. If there is AI+, there will be AI++ (soon after, absent defeaters).
  4. Therefore, there will be AI++ (before too long, absent defeaters).
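
Stripped of the temporal qualifiers and the "absent defeaters" clauses, the logical skeleton is just two applications of modus ponens. Here is a minimal sketch in Lean, where AI, AIplus, and AIplusplus are placeholder propositions rather than anything defined in the paper:

    -- Toy formalization (not from the paper): with the qualifiers stripped,
    -- the argument is a two-step chain of modus ponens.
    example (AI AIplus AIplusplus : Prop)   -- placeholder propositions
        (p1 : AI)                           -- premise 1: there will be AI
        (p2 : AI → AIplus)                  -- premise 2
        (p3 : AIplus → AIplusplus)          -- premise 3
        : AIplusplus :=                     -- conclusion: there will be AI++
      p3 (p2 p1)

The philosophical work, of course, lies in defending the premises with their qualifiers intact, not in the derivation.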

By "defeaters," Chalmers means global catastrophes like nuclear war or a major asteroid impact. One way to satisfy premise (1) is to achieve AI through brain emulation (Sandberg & Bostrom, 2008). Against this suggestion, Lucas (1961), Dreyfus (1972), and Penrose (1994) argue that human cognition is not the sort of thing that could be emulated. Chalmers (1995; 1996, chapter 9) has responded to these criticisms at length. Briefly, Chalmers notes that even if the brain is not a rule-following algorithmic symbol system, we can still emulate it if it is mechanical. (Some say the brain is not mechanical, but Chalmers dismisses this as being discordant with the evidence.)
Searle (1980) and Block (1981) argue instead that even if we can emulate the human brain, it doesn't follow that the emulation is intelligent or has a mind. Chalmers says we can set these concerns aside by stipulating that when discussing the singularity, AI need only be measured in terms of behavior. The conclusion that there will be AI++ at least in this sense would still be massively important.

Another consideration in favor of premise (1) is that evolution produced human-level intelligence, so we should be able to build it, too. Perhaps we will even achieve human-level AI by evolving a population of dumber AIs through variation and selection in virtual worlds. We might also achieve human-level AI by direct programming or, more likely, systems of machine learning.
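
For concreteness, here is a toy Python sketch of the variation-and-selection loop the evolutionary route alludes to. The fitness function evaluate_in_virtual_world and the bit-string "genomes" are hypothetical stand-ins, not anything from Chalmers' paper:

    import random

    def evaluate_in_virtual_world(genome):
        # Placeholder fitness: prefer genomes whose bits sum to 25.
        return -abs(sum(genome) - 25)

    def mutate(genome, rate=0.05):
        # Variation: flip each bit with small probability.
        return [bit ^ 1 if random.random() < rate else bit for bit in genome]

    def evolve(pop_size=100, genome_len=50, generations=200):
        population = [[random.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the top fifth, refill with mutated copies.
            ranked = sorted(population, key=evaluate_in_virtual_world, reverse=True)
            parents = ranked[:pop_size // 5]
            population = [mutate(random.choice(parents)) for _ in range(pop_size)]
        return max(population, key=evaluate_in_virtual_world)

    print(evaluate_in_virtual_world(evolve()))  # approaches 0 as selection succeeds

The open question, of course, is whether any tractable virtual-world fitness function selects for general intelligence rather than for narrow tricks.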

Premise (2) is plausible because AI will probably be produced by an extendible method, and so extending that method will yield AI+. Brain emulation might turn out not to be extendible, but the other methods are. Even if human-level AI is first created by a non-extendible method, this method itself would soon lead to an extendible method, and in turn enable AI+. AI+ could also be achieved by direct brain enhancement.

Premise (3) is the amplification argument from Good: an AI+ would be better than we are at designing intelligent machines, and could thus improve its own intelligence. Having done that, it would be even better at improving its intelligence. And so on, in a rapid explosion of intelligence.
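
A toy model (mine, not Chalmers' own formalism) makes both the force and the hidden assumption of the amplification argument visible. Suppose each generation of AI can design a successor whose intelligence is a fixed multiple delta of its own:

    I_{n+1} = \delta \, I_n \quad\Longrightarrow\quad I_n = \delta^n I_0 ,

which diverges for any delta > 1. If instead the gains shrink fast enough (say the n-th step multiplies intelligence by 1 + c/n^2), the product converges and the "explosion" levels off. The premise, in effect, needs something like the first case to hold over the relevant range.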

In section 3 of his paper, Chalmers argues that there could be an intelligence explosion without there being such a thing as "general intelligence" that could be measured, but I won't cover that here.

In section 4, Chalmers lists several possible obstacles to the singularity.

 

Constraining AI

Next, Chalmers considers how we might design an AI+ that helps to create a desirable future and not a horrifying one. If we achieve AI+ by extending the method of human brain emulation, the AI+ will at least begin with something like our values. Directly programming friendly values into an AI+ (Yudkowsky, 2004) might also be feasible, though an AI+ arrived at by evolutionary algorithms is worrying.

Most of this assumes that values are independent of intelligence, as Hume argued. But if Hume was wrong and Kant was right, then we will be less able to constrain the values of a superintelligent machine; on the other hand, the more rational the machine is, the better its values will be.

Another way to constrain an AI is not internal but external. For example, we could lock it in a virtual world from which it could not escape, and in this way create a leakproof singularity. But there is a problem. For the AI to be of use to us, some information must leak out of the virtual world for us to observe it. But then, the singularity is not leakproof. And if the AI can communicate with us, it could reverse-engineer human psychology from within its virtual world and persuade us to let it out of its box - into the internet, for example.

 

Our Place in a Post-Singularity World

Chalmers says there are four options for us in a post-singularity world: extinction, isolation, inferiority, and integration.

The first option is undesirable. The second option would keep us isolated from the AI, a kind of technological isolationism in which one world is blind to progress in the other. The third option may be infeasible because an AI++ would operate so much faster than us that inferiority is only a blink of time on the way to extinction.

For the fourth option to work, we would need to become superintelligent machines ourselves. One path to this might be mind uploading, which comes in several varieties and has implications for our notions of consciousness and personal identity that Chalmers discusses but I will not. (Short story: Chalmers prefers gradual uploading, and considers it a form of survival.)

 

Conclusion

Chalmers concludes:

Will there be a singularity? I think that it is certainly not out of the question, and that the main obstacles are likely to be obstacles of motivation rather than obstacles of capacity.

How should we negotiate the singularity? Very carefully, by building appropriate values into machines, and by building the first AI and AI+ systems in virtual worlds.

How can we integrate into a post-singularity world? By gradual uploading followed by enhancement if we are still around then, and by reconstructive uploading followed by enhancement if we are not.

 

References

Block (1981). "Psychologism and behaviorism." Philosophical Review 90:5-43.

Chalmers (1995). "Minds, machines, and mathematics." Psyche 2:11-20.

Chalmers (1996). The Conscious Mind. Oxford University Press.

Dreyfus (1972). What Computers Can't Do. Harper & Row.

Lucas (1961). "Minds, machines, and Gödel." Philosophy 36:112-27.

Penrose (1994). Shadows of the Mind. Oxford University Press.

Sandberg & Bostrom (2008). "Whole brain emulation: A roadmap." Technical report 2008-3, Future of Humanity Institute, Oxford University.

Searle (1980). "Minds, brains, and programs." Behavioral and Brain Sciences 3:417-57.

Yudkowsky (2004). "Coherent Extrapolated Volition."

Comments (202)

Comment author: JGWeissman 29 January 2011 03:11:42AM 7 points [-]

Chalmers' talk at the Singularity Summit in 2009 presents similar content.

Comment author: NancyLebovitz 29 January 2011 02:59:54PM 6 points [-]

Isolation is trickier than it sounds. If AI is created once, then we can assume that humanity is an AI-creating species. What constraints on tech, action, and/or intelligence would be necessary to guarantee that no one makes an AI in what was supposed to be a safe-for-humans region?

Comment author: lukeprog 29 January 2011 04:12:34PM *  14 points [-]

Right. I'm often asked, "Why not just keep the AI in a box, with no internet connection and no motors with which to move itself?"

Eliezer's experiments with AI-boxing suggest the AI would escape anyway, but there is a stronger reply.

If we've created a superintelligence and put it in a box, that means that others on the planet are just about capable of creating a superintelligence, too. What are you going to do? Ensure that every superintelligence everyone creates is properly boxed? I think not.

Before long, the USA or China or whoever is going to think that their superintelligence is properly constrained and loyal, and release it into the wild in an effort at world domination. You can't just keep boxing AIs forever.

Comment deleted 29 January 2011 04:32:22PM *  [-]
Comment author: Perplexed 29 January 2011 04:52:58PM 0 points [-]

"You just can't keep AIs boxed forever"?

Comment author: nazgulnarsil 29 January 2011 12:28:05PM 3 points [-]

"we can still emulate it if it is mechanical."

right, but how many more orders of magnitude of hardware do we need in this case? this depends on what level of abstraction is sufficient. isn't it the case that if intelligence relies on the base level and has no useful higher level abstractions the amount of computation needed would be absurd (assuming the base level is computable at all)?

Comment author: shokwave 29 January 2011 01:57:09PM 8 points [-]

right, but how many more orders of magnitude of hardware do we need in this case?

Probably a few less. This OB post explains how a good deal of the brain's complexity might be mechanical work to increase signal robustness. Cooled supercomputers with failure rates of 1 in 10^20 (or whatever the actual rate is) won't need to simulate the parts of the brain that error-correct or maintain operation during sneezes or bumps on the head.

Comment author: nazgulnarsil 29 January 2011 05:00:29PM 1 point [-]

Good reference, but I mean: how much more do we need if we are forced to simulate at, say, the molecular level rather than simply as a set of signal processors?

Even emulating a single neuron at molecular level is so far beyond us.

Comment author: shokwave 29 January 2011 06:09:48PM 7 points [-]

Well, I don't think we will ever be forced to simulate the brain at a molecular level. That possibility is beyond worst-case; as Chalmers says, it's discordant with the evidence. The brain may not be an algorithmic rule-following signal processor (1), but an individual neuron is a fairly simple analog input/output device.

1: Though I think the evidence from neuroscience quite strongly suggests it is, and if all you've got against it is the feeling of being conscious then you honestly haven't got a leg to stand on.

Comment author: nazgulnarsil 29 January 2011 09:31:13PM 1 point [-]

I'm playing devil's advocate in that I don't think the brain will turn out to be anything more than a complex signal processor.

Neurons do seem fairly simple, but we don't know what's waiting for us when we try to algorithmically model the rest of the brain's structure.

Comment author: shokwave 30 January 2011 06:12:02AM 2 points [-]

we don't know what's waiting for us when we try to algorithmically model the rest of the brain's structure

Very true. It's not going to be anywhere near as hard as the naysayers claim; but it's definitely much harder than we're capable of now.

Comment author: Johnicholas 31 January 2011 12:38:40PM *  3 points [-]

I think this analysis assumes or emphasizes a false distinction between humans and "AI". For example, Searle's Room is an artificial intelligence built partly out of a human. It is easy to imagine intelligences built strictly out of humans, without paperwork. When humans behave like humans, we naturally form supervening entities (groups, tribes, memes).

I tried to rephrase Chalmers' four-point argument without making a distinction between humans acting "naturally" (whatever that means) and "artificial intelligences":

  1. There is some degree of human intelligence and capabilities. In particular, human intelligence and capabilities have always involved manipulating the world indirectly (mediated by other humans or by nonhuman tools). "There is I"

  2. Since intelligence and capabilities are currently helpful in modifying ourselves and our tools, as we apply our intelligence and capabilities to ourselves and our tools, we will grow in intelligence and capabilities. "If there is I, there will be I+"

  3. If this self-applicability continues for many cycles, we will become very smart and capable. "If there is I+, there will be I++".

  4. Therefore, we will become very smart and very capable. "There will be I++."

I'm not trying to dismiss the dangers involved in this process; all I'm saying is that the language used feeds a Skynet "us versus them" mentality that isn't helpful. Admitting that "We have met the enemy and he is us" focuses attention where it ought to be.

A lot of AI-risks dialogue is a blend of: foolish people focusing on Skynet scenarios, foolish rhetoric (whatever the author is thinking) alluding to Skynet scenarios, and straightforward sensible policies that could and should be separated from the bad science fiction.

This is what I mean by straightforward, sensible, non-sf policies: We have always made mistakes when using tools. Software tools allow us to make more mistakes faster, especially "unintended consequences" mistakes. We should put effort into developing more safety techniques guarding against unintended consequences of our software tools.

Comment author: shokwave 31 January 2011 12:56:08PM 2 points [-]

Sci-fi policies can't be good policies?

Comment author: Leonhart 31 January 2011 01:24:07PM 2 points [-]

What mentality other than "us versus them" would be even remotely helpful for dealing with a UFAI?

We have met the enemy and we are paperclips.

Comment author: shokwave 31 January 2011 02:19:25PM *  1 point [-]

"Us versus them" presupposes the existence of them, ie UFAI. Which means we have probably already lost. So really, no mentality would be remotely helpful for dealing with an existing UFAI.

Comment author: Will_Newsome 29 January 2011 03:05:48AM *  3 points [-]

Most of this assumes that values are independent of intelligence, as Hume argued. But if Hume was wrong and Kant was right, then we will be less able to constrain the values of a superintelligent machine, but the more rational the machine is, the better values it will have.

Are there any LW-rationalist-vetted philosophical papers on this theme in modern times? (I'm somewhat skeptical of the idea that there isn't a universal morality (relative to some generalized Occamian prior-like-thing) that even a paperclip maximizer would converge to (if it was given the right decision theoretic (not necessarily moral per se) tools for philosophical reasoning, which is by no means guaranteed, so we should of course still be careful when designing AGIs).)

Comment author: JGWeissman 29 January 2011 03:16:18AM 13 points [-]

How would converging to a "universal morality" help produce paperclips?

Comment author: Perplexed 29 January 2011 03:38:45PM *  3 points [-]

Are there any LW-rationalist-vetted philosophical papers on this theme in modern times?

I'm not sure what is required for a philosophical paper to be deemed "LW-rationalist-vetted", nor am I sure why that is a desirable feature for a paper to have. But I will state that, IMHO, an approach based on "naturalistic ethics", like that of Binmore, is at least as rational as any ethical approach based on some kind of utilitarianism.

I would say that a naturalistic approach to ethics assumes, with Hume, that fundamental values are not universal - they may certainly vary by species, for example, and also by the historical accidents of genetics, birth-culture, etc. However, meta-ethics is rationally based and universal, and can be converged upon by a process of reflective equilibrium.

As to instrumental values - those turn out to be universal in the sense that (in the limit of perfect rationality and low-cost communication) they will be the same for everyone in the ethical community at a given time. However, they will not be universal in the sense that they will be the same for all conceivable communities in the multiverse. Instrumental values will depend on the makeup of the community, because the common community values are derived as a kind of compromise among the idiosyncratic fundamental values of the community members. Instrumental values will also depend upon the community's beliefs - regarding expected consequences of actions, expected utilities of outcomes, and even regarding the expected future composition of the community. And, since the community learns (i.e. changes its beliefs), instrumental values must inevitably change a little with time.

I'm somewhat skeptical of the idea that there isn't a universal morality (relative to some generalized Occamian prior-like-thing) that even a paperclip maximizer would converge to ...

As an intuition pump, I'll claim that Clippy could fit right in to a community of mostly human rationalists, all in agreement on the naturalist meta-ethics. In that community, Clippy would act in accordance with the community's instrumental values (which will include both the manufacture of paperclips and other, more idiosyncratically human values). Clippy will know that more paper clips are produced by the community than Clippy could produce on his own if he were not a community member. And the community welcomes Clippy, because he contributes to the satisfaction of the fundamental values of other community members - through his command of metallurgy and mechanical engineering, for example.

The aspect of naturalistic ethics which many people find distasteful is that the community will contribute to the satisfaction of your fundamental values only to the extent that you contribute to the satisfaction of the fundamental values of other community members. So, the fundamental values of the weak and powerless tend to get less weight in the collective instrumental value system than do the fundamental values of the strong and powerful. Of course, this does not mean that the very young and the elderly get mistreated - it is rational to contribute now to those who have contributed in the past or who will contribute in the future. And many humans will include concern for the weak among their fundamental values - so the community will have to respect those values.

Comment author: Jack 29 January 2011 04:56:51AM 6 points [-]

Since it keeps coming up I think I'll write a top level post on the subject- I'll probably do some research when writing so I'll see what has been written recently. Hopefully I'll publish in the next week or two.

Comment author: wedrifid 29 January 2011 03:35:32PM 4 points [-]

But, but... paperclips. Its morality is 'make more flipping paperclips'! Just that. With the right decision theoretic tools for philosophical reasoning it will make even more paper-clips. If that even qualifies as 'morality' then that is what a paperclip maximiser has.

Comment author: ArisKatsaris 29 January 2011 03:59:49PM *  3 points [-]

Look, I personally don't believe that all or even most moralities will converge, however... imagine something like the following:

Dear paperclipper,

There's a limited amount of matter that's reachable by you in the known universe for any given timespan. Moreover, your efforts to paperclip the universe will be opposed both by humans and by alien civilizations which will perceive them as hostile and dangerous. Even if you're ultimately victorious, which is far from certain, you're better off cooperating with humans peacefully, postponing slightly your plans to make paperclips (which you'd have to postpone anyway in order to create weaponry to defeat humanity), and instead working with humans to create a feasible way to construct a new universe which you will henceforth possess and wherein your desire to create an infinite amount of paperclips will be satisfied without opposition.

Sincerely, humanity.


So, from the intrinsic "I want to create as many paperclips as possible" the truly intelligent AI can reasonably discover the instrumental "I'd like to not be opposed in my creation of such paperclips" to "I'd like to create my paperclips in a way that they won't harm others, so that they won't have a reason to oppose me" to "I'd like to transport myself to an uninhabited universe of my own creation, to make paperclips without any opposition at all".

This is probably wishful thinking, but the situation isn't as simple as what you describe either.

Comment author: DanArmak 29 January 2011 04:17:08PM 6 points [-]

If the paperclipper happens to be the first AI++, and arrives before humanity goes interstellar, then it can probably wipe out all humanity quite quickly without reasoning with it. And if it can do that it definitely will - no point in compromising when you've got the upper hand.

Comment author: wedrifid 29 January 2011 04:34:17PM 5 points [-]

no point in compromising when you've got the upper hand.

Well, at least not when the lower hand is of more use disassembled to build more cosmic-commons-burning spore ships.

Comment author: wedrifid 29 January 2011 04:31:48PM *  2 points [-]

Wanting to maximise paperclips (obviously?) does not preclude cooperation in order to produce paperclips. We haven't redefined 'morality' to include any game theoretic scenarios in which cooperation is reached, have we? (I suppose we could do something along those lines in the theism thread.)

Comment author: TheOtherDave 29 January 2011 04:13:40PM 2 points [-]

Agreed that this is probably wishful thinking.

But, yes, also agreed that a sufficiently intelligent and well-informed paperclipper will work out that diplomacy, including consistent lying about its motives, is a good tactic to use for as long as it doesn't completely overpower its potential enemies.

Comment author: timtyler 29 January 2011 01:28:40PM *  1 point [-]

I'm somewhat skeptical of the idea that there isn't a universal morality (relative to some generalized Occamian prior-like-thing) that even a paperclip maximizer would converge to (if it was given the right decision theoretic (not necessarily moral per se) tools for philosophical reasoning, which is by no means guaranteed, so we should of course still be careful when designing AGIs).

There's goal system zero / God's utility function / Universal Instrumental Values.

Comment author: shokwave 29 January 2011 02:03:28PM 4 points [-]

I'm somewhat skeptical of the idea that there isn't a universal morality that even a paperclip maximizer would converge to

You mean you're somewhat convinced that there is a universal morality (that even a paperclip maximizer would converge to)? That sounds like a much less tenable position. I mean,

There's goal system zero / God's utility function / Universal Instrumental Values.

A statement like this needs some support.

Comment author: timtyler 29 January 2011 04:20:04PM *  4 points [-]

I've linkified the grandparent a bit - for those not familiar with the ideas.

The main idea is that many agents which are serious about attaining their long term goals will first take control of large quantities of spacetime and resources - before they do very much else - to avoid low-utility fates like getting eaten by aliens.

Such goals represent something like an attractor in ethics-space. You could avoid the behaviour associated with the attractor by using discounting, or by adding constraints - at the expense of making the long-term goal less likely to be attained.
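
A toy numerical illustration of the discounting point, with made-up numbers rather than anything from the linked posts: under exponential discounting a payoff that arrives after a long acquisition phase is multiplied by gamma^delay, so a heavily discounting agent prefers the quick, modest win.

    # Toy illustration (assumed numbers): exponential discounting penalizes
    # plans that spend a long time seizing resources before paying off.

    def discounted_value(reward, delay, gamma):
        return reward * gamma ** delay

    # Plan A: spend 100 steps grabbing resources, then collect a big payoff.
    # Plan B: pursue the goal directly for a small, almost immediate payoff.
    for gamma in (0.95, 0.999, 1.0):
        plan_a = discounted_value(reward=1000, delay=100, gamma=gamma)
        plan_b = discounted_value(reward=10, delay=1, gamma=gamma)
        print(gamma, round(plan_a, 2), round(plan_b, 2))
    # gamma = 0.95  -> A ~5.9,   B 9.5    (heavy discounting kills the grab)
    # gamma = 0.999 -> A ~904.8, B ~9.99  (weak discounting restores it)
    # gamma = 1.0   -> A 1000,   B 10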

Comment author: Perplexed 31 January 2011 06:40:26AM 2 points [-]

Thx for this. I found those links and the idea itself fascinating. Does anyone know if Roko or Hollerith developed the idea much further?

One is reminded of the famous quote from 1984, O'Brien to Winston: "Power is not a means. Power is the end." But it certainly makes sense that, as an agent becomes better integrated into a coalition or community, and his day-to-day goals become more weighted toward the terminal values of other people and less weighted toward his own, he might be led to rewrite his own utility function toward Power - instrumental power to achieve any goal makes sense as a synthetic terminal value.

After all, most of our instinctual terminal values - sexual pleasure, food, good health, social status, the joy of victory and the agony of defeat - were originally instrumental values from the standpoint of their 'author': natural selection.

Comment author: timtyler 31 January 2011 09:30:12PM *  3 points [-]

Does anyone know if Roko or Hollerith developed the idea much further?

Roko combined the concept with the (rather less sensible) idea of promoting those instrumental values into terminal values - and was met with a chorus of "Unfriendly AI".

Hollerith produced several pages on the topic.

Probably the best-known continuation is via Omohundro.

"Universal Instrumental Values" is much the same idea as "Basic AI drives" dressed up a little differently:

Comment author: Perplexed 31 January 2011 09:45:36PM 1 point [-]

"Universal Instrumental Values" is much the same idea as "Basic AI drives" dressed up a little differently

You are right. I hadn't made that connection. Now I have a little more respect for Omohundro's work.

Comment author: timtyler 31 January 2011 10:40:03PM *  0 points [-]

I was a little bit concerned about your initial Omohundro reaction.

Omohundro's material is mostly fine and interesting. It's a bit of a shame that there isn't more maths - but it is a difficult area where it is tricky to prove things. Plus, IMO, he has the occasional zany idea that takes your brain to interesting places it didn't dream of before.

I maintain some Omohundro links here.

Comment author: jacob_cannell 31 January 2011 09:47:46PM 0 points [-]

As a side point, you could also re-read "Basic AI drives" as "Basic Replicator Drives" - it's systemic evolution.

Comment author: jacob_cannell 31 January 2011 09:53:03PM *  0 points [-]

Interesting, hadn't seen Hollerith's posts before. I came to a similar conclusion about AIXI's behavior as exemplifying a final attractor in intelligent systems with long planning horizons.

If the horizon is long enough (infinite), the single behavioral attractor is maximizing computational power and applying it towards extensive universal simulation/prediction.

This relates to simulism and the SA, as any superintelligences/gods can thus be expected to create many simulated universes, regardless of their final goal evaluation criteria.

In fact, perhaps the final goal criteria apply to creating new universes with the desired properties.

Comment author: shokwave 29 January 2011 06:15:50PM 2 points [-]

These sound instrumental; you take control of the universe in order to achieve your terminal goals. That seems slightly different from what Newsome was talking about, which was more a converging of terminal goals on one superterminal goal.

Comment author: timtyler 29 January 2011 06:20:54PM *  1 point [-]

Thus one of the proposed titles: "Universal Instrumental Values".

Newsome didn't distinguish between instrumental and terminal values.

Comment author: Vladimir_Nesov 29 January 2011 02:39:59PM 1 point [-]

You mean you're somewhat convinced that there is a universal morality (that even a paperclip maximizer would converge to)? That sounds like a much less tenable position.

Those were Newsome's words.

Comment author: shokwave 29 January 2011 06:09:28PM 1 point [-]

Ah. I misunderstood the quoting.

Comment author: Vladimir_Nesov 29 January 2011 01:39:32PM *  -1 points [-]

Boo!

(To make a point as well-argued as the one it replies to.)

Edit: Now that the above comment was edited to include citations, my joke stopped being funny and got downvoted.

Comment author: jacob_cannell 29 January 2011 05:09:55AM *  0 points [-]

Any universal morality has to have long term fitness - ie it must somehow win at the end of time.

Otherwise, aliens may have a more universal morality.

EDIT: why the downvote?

Comment author: endoself 29 January 2011 05:33:22AM *  1 point [-]

This does not require as much optimization as it sounds. As Wei Dai points out, computing power is proportional to the square of the amount of mass obtained as long as that mass can be physically collected together, so a civilization collecting mass probably gets more observers than one spreading out and colonizing mass, depending on the specifics of cosmology. This kind of civilization is much easier to control centrally, so a wide range of values have the potential to dominate, depending on which ones happen to come into being.

Comment author: jacob_cannell 29 January 2011 11:28:49PM 2 points [-]

I'm not sure where he got the math that available energy is proportional to the square of the mass. Wouldn't this come from the mass-energy equivalence and thus be mc^2?

Wei Dai's conjecture about black holes being useful as improved entropy dumps is interesting. Black holes or similar dense entities also maximize speed potential and interconnect efficiency, but they are poor as information storage.

It's also possible that by the time a civilization reaches this point of development, it figures out how to do something more interesting such as create new physical universes. John Smart has some interesting speculation on that and how singularity civilizations may eventually compete/cooperate.

I still have issues wrapping my head around the time dilation.

Comment author: endoself 30 January 2011 05:07:30PM 3 points [-]

Energy is proportional to mass. Computing ability is proportional to (max entropy - current entropy), and max entropy is proportional to the square of mass. That was the whole point of his argument.
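
For reference, the quadratic scaling comes from the Bekenstein-Hawking entropy bound - a standard result of black hole thermodynamics, not something derived in this thread. The maximum entropy of mass M confined to a region is that of a black hole of the same mass, whose horizon area grows as M^2:

    S_{\max} = \frac{k_B c^3 A}{4 G \hbar}, \qquad
    A = 4\pi r_s^2 = \frac{16\pi G^2 M^2}{c^4}, \qquad\text{so}\qquad
    S_{\max} = \frac{4\pi k_B G}{\hbar c}\, M^2 ,

whereas energy scales only linearly in mass (E = mc^2).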

Comment author: LucasSloan 29 January 2011 09:55:12PM 1 point [-]

Is this an argument based on the idea that there is some way for all of math to look such that everyone gets as much of what they want as possible?

Comment author: timtyler 29 January 2011 11:39:35AM 2 points [-]

Singularitarian authors will also be pleased that they can now cite a peer-reviewed article by a leading philosopher of mind who takes the Singularity seriously.

Critics will no doubt draw attention to David's previous venture, zombies.

Comment author: shokwave 29 January 2011 02:04:37PM 9 points [-]

Sure, we think he's wrong, but does academia? That the Singularity is supported by more than one side is good news.

Comment author: CarlShulman 29 January 2011 03:43:35PM *  7 points [-]

Dualism is a minority position:

http://philpapers.org/surveys/results.pl

Mind: physicalism or non-physicalism?

Accept or lean toward: physicalism 526 / 931 (56.4%)

Accept or lean toward: non-physicalism 252 / 931 (27%)

Other 153 / 931 (16.4%)

Comment author: lukeprog 29 January 2011 04:14:41PM *  10 points [-]

Philosophers are used to the fact that they have major disagreements with each other. Even if you think zombie arguments fail, as I do, you'll still perk up your ears when somebody as smart as Chalmers is taking the singularity seriously. I don't accept his version of property dualism, but The Conscious Mind was not written by a dummy.

Comment author: CarlShulman 29 January 2011 06:54:17PM *  3 points [-]

I didn't mean to say that Chalmers isn't a highly respected philosopher, but I also think it's true that the impact is somewhat blunted relative to a counterfactual in which his philosophy of mind work was of equal fame and quality, but arguing a different position.

Comment author: torekp 30 January 2011 08:29:48PM 0 points [-]

I disagree; the fact that Chalmers is critical of standard varieties of physicalism will make him more credible on the Singularity. In the former case, he rejects the nerd-core view. That makes him a little harder to write off.

Comment author: Perplexed 29 January 2011 07:27:59PM 7 points [-]

From a philosopher's viewpoint, Chalmers's work on p-zombies is very respectable. It is exactly the kind of thing that good philosophers do, however mystifying it may seem to a layman.

Nevertheless, to more practical people - particularly those of a materialist, reductionist, monist persuasion - it all looks a little silly. I would say that the question of whether p-zombies are possible is about as important to AI researchers as the question of whether there are non-standard models of set theory is to a working mathematician.

That is, not much. It is a very fundamental and technically difficult matter, but, in the final analysis, the resolution of the question matters a whole lot less than you might have originally thought. Chalmers and Searle may well be right about the possibility of p-zombies, but if they are, it is for narrow technical reasons. And if that has the consequence that you can't completely rule out dualism, well ..., so be it. Whether philosophers can or can not rule something out makes very little difference to me. I'm more interested in whether a model is useful than in whether it has a possibility of being true.

Comment author: JoshuaZ 29 January 2011 08:27:55PM 3 points [-]

Nevertheless, to more practical people - particularly those of a materialist, reductionist, monist persuasion - it all looks a little silly. I would say that the question of whether p-zombies are possible is about as important to AI researchers as the question of whether there are non-standard models of set theory is to a working mathematician.

What precisely do you mean by non-standard set theory? If you mean modifying the axioms of ZFC, then a lot of mathematicians pay attention. There are a lot, for example, who try to minimize dependence on the axiom of choice. And whether one accepts choice has substantial implications for topology (see for example this survey). Similarly, there are mathematicians who investigate what happens when you assume the continuum hypothesis or a generalized version or some generalized negation.

If one is talking about large cardinal axioms then note that there are results in a variety of fields including combinatorics that can be shown to be true given some strong large cardinal axioms. (I don't know the details of such results, only their existence).

Finally, if one looks at issues of Foundation or various forms of Anti-Foundation, there's been work (comparatively recently, primarily in the last 30 years) (see this monograph) and versions of anti-foundation have been useful in logic, machine learning, complex systems, and other fields. While most of the early work was done by Peter Aczel, others have done follow-up work.

What axioms of set theory one is using can be important, and thinking about alternative models of set theory can lead to practical results.

Comment author: Perplexed 29 January 2011 09:04:10PM *  4 points [-]

What precisely do you mean by non-standard set theory?

I didn't say "non-standard set theory". I said "non-standard models of set theory".

I originally considered using "non-standard models of arithmetic" as my example of a fundamental, but unimportant question, but rejected it because the question is just too simple. Asking about non-standard models of set theory (models of ZFC, for example) is more comparable to the zombie question precisely because the question itself is less well defined. For example, just what do we mean in talking about a 'model' of ZFC, when ZFC or something similar is exactly the raw material used to construct models in other fields?

What axioms of set theory one is using can be important, and thinking about alternative models of set theory can lead to practical results.

Oh, I agree that some (many?) mathematicians will read Aczel (I didn't realize the book was available online. Thx) and Barwise on AFA, and that even amateurs like me sometimes read Nelson, Steele, or Woodin. Just as AI researchers sometimes read Chalmers.

My point is that the zombie question may be interesting to an AI researcher, just as inaccessible cardinals or non-well-founded sets are interesting to an applied mathematician. But they are not particularly useful to most of the people who find them interesting. Most of the applications that Barwise suggests for Aczel's work can be modeled with just a little more effort in standard ZF or ZFC. And I just can't imagine that an AI researcher will learn anything from the p-zombie debate which will tell him which features or mechanisms his AI must have so as to avoid the curse of zombiedom.

Comment author: Vladimir_Nesov 29 January 2011 10:19:02PM 1 point [-]

For example, just what do we mean in talking about a 'model' of ZFC, when ZFC or something similar is exactly the raw material used to construct models in other fields?

Learning to distinguish different levels of formalism by training to follow mathematical arguments from formal set theory can help you lots in disentangling conceptual hurdles in decision theory (in its capacity as foundational study of goal-aware AI). It's not a historical accident I included these kinds of math in my reading list on FAI.

Comment author: Perplexed 29 January 2011 11:32:39PM 0 points [-]

Hmmm. JoshuaZ made a similar point. Even though the subject matter and the math itself may not be directly applicable to the problems we are interested in, the study of that subject matter can be useful by providing exercise in careful and rigorous thinking, analogies, conceptual structures, and 'tricks' that may well be applicable to the problems we are interested in.

I can agree with that. At least regarding the topics in mathematical logic we have been discussing. I am less convinced of the usefulness of studying the philosophy of mind. That branch of philosophy still strikes me as just a bunch of guys stumbling around in the dark.

Comment author: Vladimir_Nesov 29 January 2011 11:51:02PM *  0 points [-]

I am less convinced of the usefulness of studying the philosophy of mind. That branch of philosophy still strikes me as just a bunch of guys stumbling around in the dark.

And I agree. The way Eliezer refers to p-zombie arguments is to draw attention to a particular error in reasoning, an important error one should learn to correct.

Comment author: JoshuaZ 29 January 2011 09:51:45PM *  1 point [-]

Asking about non-standard models of ZFC is deeply connected to asking about ZFC with other axioms added. This is connected to the Löwenheim–Skolem theorem and related results. Note for example that if there is some large cardinal axiom L and statement S such that ZFC + L can model ZFC + S, and L is independent of ZFC, then ZFC + S is consistent if ZFC is.
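
Spelled out (a standard relative-consistency argument, stated informally here), the chain being invoked is

    \mathrm{Con}(\mathrm{ZFC}) \;\Longrightarrow\; \mathrm{Con}(\mathrm{ZFC}+L) \;\Longrightarrow\; \mathrm{Con}(\mathrm{ZFC}+S).

The first implication uses one half of independence: since ZFC does not refute L, adding L cannot introduce an inconsistency that was not already there. The second uses the fact that ZFC + L proves the existence of a model of ZFC + S, and a consistent theory extending enough arithmetic cannot prove the existence of a model of an inconsistent theory.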

For example, just what do we mean in talking about a 'model' of ZFC, when ZFC or something similar is exactly the raw material used to construct models in other fields?

We can make this precise by talking about any given set theory as your ground and then discussing the models in it. This is connected to Paul Cohen's work in forcing but I don't know anything about it in any detail. The upshot though is that we can talk about models in helpful ways.

But they are not particularly useful to most of the people who find them interesting. Most of the applications that Barwise suggests for Aczel's work can be modeled with just a little more effort in standard ZF or ZFC. And I just can't imagine that an AI researcher will learn anything from the p-zombie debate which will tell him which features or mechanisms his AI must have so as to avoid the curse of zombiedom.

Not much disagreement there, but I think you might underestimate the helpfulness of thinking about different base axioms rather than talking about things in ZFC. In any event, the objection is not to your characterization of thinking about p-zombie but rather the analogy. The central point you are making seems correct to me.

Comment author: timtyler 29 January 2011 08:03:08PM *  3 points [-]

Nevertheless, to more practical people - particularly those of a materialist, reductionist, monist persuasion, it all looks a little silly.

Frankly, I haven't even bothered looking very much at this material. My attitude is more in line with the philosophy of the Turing test. If it looks like a duck and quacks like a duck...

Hofstadter has a good "zombie takedown" in "I Am a Strange Loop", Chapter 22: "A Tango with Zombies and Dualism".

Comment author: RobbBB 13 January 2013 05:29:44PM *  2 points [-]

No, I don't think so. The possibility of p-zombies is very important for FAI, because if zombies are possible it seems likely that an FAI could never tell sentient beings apart from non-sentient ones. And if our values all center around promoting positive experiential states for sentient beings, and we are indifferent to the 'welfare' of insentient ones, then a failure to resolve the Hard Problem places a serious constraint on our ability to create a being that can accurately identify the things we value in practice, or on our own ability to determine which AIs or 'uploaded minds' are loci of value (i.e., are sentient).

Comment author: DSimon 29 January 2011 07:54:06PM 0 points [-]

I think tim's point was that Chalmers' work on p-zombies resulted in some untenable conclusions.

Comment author: XiXiDu 29 January 2011 02:18:13PM 0 points [-]

Critics will no doubt draw attention to David's previous venture, zombies.

More here.

Comment author: deepthoughtlife 02 February 2011 07:36:46AM 0 points [-]

There are a few major problems with any claim that the singularity is certain. First, we might be too stupid to create a human-level AI. Second, it might not be possible, for some reason of which we are currently unaware, to create a human-level AI. Third, importantly, we could be too smart.

How would that last one work? Maybe we can push technology to the limits ourselves, and no AI can be smart enough to push it further. We don't even begin to have enough knowledge to know if this is likely. In other words, maybe it will all be perfectly comprehensible to us as of now, and therefore not a singularity at all.

Is it worth considering? Of course. Is it worth pursuing? Probably (we need to wait for hindsight to know better than that), particularly since it will matter a great deal if and when it occurs. We simply can't assume that it will.

Johnicholas made a good comment, I think, on this point. What we have done (and are doing) is very reminiscent of what Chalmers claims will lead to the singularity. I would go so far as to say that we are a singularity of sorts, beyond which the face of the world could never be the same. Our last century especially, as we went through what would, by analogy, be the leap from the iron age to the beginning of the renaissance, or even further. Cars, relativity, quantum mechanics, planes, radar, microwaves, two world wars, nukes, the collapse of the colonial system, interstates, computers, a massive cold war, countless conflicts and atrocities, entry to and study of space, the internet - and that is just a brief survey, off the top of my head. We've had so many changes that I'm not sure superhuman AI would be all that difficult to accept, so long as it was super morally speaking as well - which is, of course, not a given.

Any true AI that could not, with 100% accuracy, be called friendly should not exist.