matt1 comments on Rationality Quotes: April 2011 - Less Wrong

6 Post author: benelliott 04 April 2011 09:55AM

Comment author: matt1 05 April 2011 06:31:38PM *  -2 points [-]

Of course, my original comment had nothing to do with god. It had to do with "souls", for lack of a better term as that was the term that was used in the original discussion (suggest reading the original post if you want to know more---basically, as I understand the intent it simply referred to some hypothetical quality that is associated with consciousness that lies outside the realm of what is simulable on a Turing machine). If you think that humans are nothing but Turing machines, why is it morally wrong to kill a person but not morally wrong to turn off a computer? Please give a real answer: either provide an answer that admits that humans cannot be simulated by Turing machines, or else give your answer using only concepts relevant to Turing machines (don't talk about consciousness, qualia, hopes, whatever, unless you can precisely quantify those concepts in the language of Turing machines). And in the second case, your answer should allow me to determine where the moral balance between humans and computers lies. Would it be morally bad to turn off a primitive AI, for example, with intelligence at the level of a mouse?

Comment author: [deleted] 05 April 2011 07:18:41PM 68 points [-]

If you think that humans are nothing but Turing machines, why is it morally wrong to kill a person but not morally wrong to turn off a computer?

Your question has the form:

If A is nothing but B, then why is it X to do Y to A but not to do Y to C which is also nothing but B?

The following question also has this form:

If apple pie is nothing but atoms, why is it safe to eat apple pie but not to eat napalm which is also nothing but atoms?

And here's the general answer to that question: the molecules which make up apple pie are safe to eat, and the molecules which make up napalm are unsafe to eat. This is possible because these are not the same molecules.

Now let's turn to your own question and give a general answer to it: it is morally wrong to shut off the program which makes up a human, but not morally wrong to shut off the programs which are found in an actual computer today. This is possible because these are not the same programs.

At this point I'm sure you will want to ask: what is so special about the program which makes up a human, that it would be morally wrong to shut off the program? And I have no answer for that. Similarly, I couldn't answer you if you asked me why the molecules of apple pie are safe to eat and those of napalm are not.

As it happens, chemistry and biology have probably advanced to the point at which the question about apple pie can be answered. However, the study of mind/brain is still in its infancy, and as far as I know, we have not advanced to the equivalent point. But this doesn't mean that there isn't an answer.

Comment author: NickiH 05 April 2011 08:10:20PM 16 points [-]

what is so special about the program which makes up a human, that it would be morally wrong to shut off the program?

We haven't figured out how to turn it back on again. Once we do, maybe it will become morally ok to turn people off.

Comment author: NancyLebovitz 06 April 2011 11:34:22AM 5 points [-]

Because people are really annoying, but we need to be able to live with each other.

We need strong inhibitions against killing each other-- there are exceptions (self-defense, war), but it's a big win if we can pretty much trust each other not to be deadly.

We'd be a lot more cautious about turning off computers if they could turn us off in response.

None of this is to deny that turning off a computer is temporary and turning off a human isn't. Note that people are more inhibited about destroying computers (though much less so than about killing people) than they are about turning computers off.

Comment author: Laoch 05 April 2011 11:11:28PM 4 points [-]

Doesn't general anesthetic count? I thought that was the turning off of the brain. I was completely "out" when I had it administered to me.

Comment author: Kevin723 09 April 2011 05:01:14PM 4 points [-]

If I believed that when I turned off my computer it would need to be monitored by a specialist or it might never come back on again, I would be hesitant to turn it off as well.

Comment author: gwern 09 April 2011 06:09:02PM 2 points [-]

And indeed, mainframes & supercomputers are famous for never shutting down or doing so on timespans measured in decades and with intense supervision on the rare occasion that they do.

Comment author: Desrtopa 05 April 2011 11:17:56PM *  4 points [-]

It certainly doesn't put a halt to brain activity. You might not be aware of anything that's going on while you're under, or remember anything afterwards (although some people do), but that doesn't mean that your brain isn't doing anything. If you put someone under general anesthetic while monitoring them with an electroencephalogram, you'd register plenty of activity.

Comment author: Laoch 06 April 2011 08:24:54AM 1 point [-]

Ah yes, I didn't think of that. Even while I'm conscious my brain is doing things I'm/it's not aware of.

Comment author: JohannesDahlstrom 07 April 2011 05:58:43PM 5 points [-]

Some deep hypothermia patients, however, have been successfully revived from a prolonged state of practically no brain activity whatsoever.

Comment author: Kevin723 09 April 2011 04:59:36PM 0 points [-]

as is your computer when it's turned off

Comment author: David_Gerard 05 April 2011 11:14:56PM 0 points [-]

And people don't worry about that because it's a state people are used to the idea of coming back from, which fits the expressed theory.

Comment author: KrisC 06 April 2011 06:43:48AM 3 points [-]

what is so special about the program which makes up a human, that it would be morally wrong to shut off the program?

Is it sufficient to say that humans are able to consider the question? That humans possess an ability to abstract patterns from experience so as to predict upcoming events, and that exercise of this ability leads to a concept of self as a future agent?

Is it necessary that this model of identity incorporate relationships with peers? I think so but am not sure. Perhaps it is only necessary that the ability to abstract be recursive.

Comment author: sark 05 April 2011 09:44:29PM 4 points [-]

Hmm, I don't happen to find your argument very convincing. I mean, what it does is to pay attention to some aspect of the original mistaken statement, then find another instance sharing that aspect which is transparently ridiculous.

But is this sufficient? You can model the statement "apples and oranges are good fruits" in predicate logic as "for all x, Apple(x) or Orange(x) implies Good(x)" or in propositional logic as "A and O" or even just "Z". But it should really depend on what aspect of the original statement you want to get at. You want a model which captures precisely those aspects you want to work with.
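
The point about model granularity can be sketched concretely (a toy illustration, using only the predicates named above):

```python
# Toy sketch: the same statement modeled at two levels of detail.
# Which aspects a model preserves determines which inferences it supports.

# Propositional level: the whole claim is one opaque symbol.
Z = True  # "apples and oranges are good fruits", taken as a unit

# Predicate level: the internal structure becomes available.
def is_apple(x): return x == "apple"
def is_orange(x): return x == "orange"
def good(x): return is_apple(x) or is_orange(x)  # Apple(x) or Orange(x) -> Good(x)

# The predicate model licenses inferences the propositional one cannot:
assert good("apple")       # derivable here
assert not good("napalm")  # also derivable; from Z alone, neither is
```

Neither model is "the" right one; each captures exactly the aspects it was built to capture.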

So your various variables actually confused the hell outta me there. I was trying to match them up with the original statement and your reductio example. All the while not really understanding which was relevant to the confusion. It wasn't a pleasant experience :(

It seems to me much simpler to simply answer: "Turing machine-ness has no bearing on moral worth". This I think gets straight to the heart of the matter, and isolates clearly the confusion in the original statement.

Or further guess at the source of the confusion, the person was trying to think along the lines of: "Turing machines, hmm, they look like machines to me, so all Turing machines are just machines, like a sewing machine, or my watch. Hmm, so humans are Turing machines, but by my previous reasoning this implies humans are machines. And hmm, furthermore, machines don't have moral worth... So humans don't have moral worth! OH NOES!!!"

Your argument seems like one of those long math proofs which I can follow step by step but cannot grasp its overall structure or strategy. Needless to say, such proofs aren't usually very intuitively convincing.

(but I could be generalizing from one example here)

Comment author: matt1 05 April 2011 10:06:51PM *  -1 points [-]

No, I was not trying to think along those lines. I must say, I worried in advance that discussing philosophy with people here would be fruitless, but I was lured over by a link, and it seems worse than I feared. In case it isn't clear, I'm perfectly aware what a Turing machine is; incidentally, while I'm not a computer scientist, I am a professional mathematical physicist with a strong interest in computation, so I'm not sitting around saying "OH NOES" while being ignorant of the terms I'm using. I'm trying to highlight one aspect of an issue that appears in many cases: if consciousness (meaning whatever we mean when we say that humans have consciousness) is possible for Turing machines, what are the implications if we do any of the obvious things? (replaying, turning off, etc...) I haven't yet seen any reasonable answer, other than 1) this is too hard for us to work out, but someday perhaps we will understand it (the original answer, and I think a good one in its acknowledgment of ignorance, always a valid answer and a good guide that someone might have thought about things) and 2) some pointless and wrong mocking (your answer, and I think a bad one). edit to add: forgot, of course, to put my current guess as to most likely answer, 3) that consciousness isn't possible for Turing machines.

Comment author: pjeby 06 April 2011 12:04:48AM *  8 points [-]

if consciousness (meaning whatever we mean when we say that humans have consciousness) is possible for Turing machines,

This is the part where you're going astray, actually. We have no reason to think that human beings are NOT Turing-computable. In other words, human beings almost certainly are Turing machines.

Therefore, consciousness -- whatever we mean when we say that -- is indeed possible for Turing machines.

To refute this proposition, you'd need to present evidence of a human being performing an operation that can't be done by a Turing machine.

Understanding this will help "dissolve" or "un-ask" your question, by removing the incorrect premise (that humans are not Turing machines) that leads you to ask your question.

That is, if you already know that humans are a subset of Turing machines, then it makes no sense to ask what morally justifies treating them differently than the superset, or to try to use this question as a way to justify taking them out of the larger set.

IOW, (the set of humans) is a subset of (the set of turing machines implementing consciousness), which in turn is a proper subset of (the set of turing machines). Obviously, there's a moral issue where the first two subsets are concerned, but not for (the set of turing machines not implementing consciousness).
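
The subset structure being described can be made explicit with a toy sketch (the element names are invented placeholders, not claims about any real system):

```python
# Hypothetical labels standing in for members of each set.
turing_machines = {"alice", "bob", "chess_engine", "thermostat_sim"}
conscious_tms = {"alice", "bob"}  # assumption: humans implement consciousness
humans = {"alice", "bob"}

# (set of humans) is a subset of (conscious TMs), a proper subset of (all TMs)
assert humans <= conscious_tms <= turing_machines

def moral_issue(m):
    # The moral question tracks membership in the 'conscious' subset,
    # not membership in the superset of all Turing machines.
    return m in conscious_tms

assert moral_issue("alice")
assert not moral_issue("chess_engine")
```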

In addition, there may be some issues as to when and how you're doing the turning off, whether they'll be turned back on, whether consent is involved, etc... but the larger set of "turing machines" is obviously not relevant.

I hope that you actually wanted an answer to your question; if so, this is it.

(In the event you wish to argue for another answer being likely, you'll need to start with some hard evidence that human behavior is NOT Turing-computable... and that is a tough road to climb. Essentially, you're going to end up in zombie country.)

Comment author: ArisKatsaris 06 April 2011 12:48:55AM 0 points [-]

To refute this proposition, you'd need to present evidence of a human being performing an operation that can't be done by a Turing machine.

That's quite easy: I can lift a rock, a Turing machine can't. A Turing machine can only manipulate symbols on a strip of tape, it can't do anything else that's physical.

Your claim that consciousness (whatever we mean when we say that) is possible for Turing machines, rests on the assumption that consciousness is about computation alone, not about computation+some unidentified physical reaction that's absent to pure Turing machines resting in a box on a table.

That consciousness is about computation alone may indeed end up true, but it's as yet unproven.

Comment author: AlephNeil 06 April 2011 07:26:04PM 7 points [-]

That's quite easy: I can lift a rock, a Turing machine can't.

That sounds like a parody of bad anti-computationalist arguments. To see what's wrong with it, consider the response: "Actually you can't lift a rock either! All you can do is send signals down your spinal column."

That consciousness is about computation alone may indeed end up true, but it's as yet unproven.

What sort of evidence would persuade you one way or the other?

Comment author: Vladimir_Nesov 06 April 2011 09:18:12PM 2 points [-]

Read the first part of ch.2 of "Good and Real".

Comment author: Perplexed 07 April 2011 03:25:35PM 2 points [-]

Could you clarify why you think that this reading assignment illuminates the question being discussed? I just reread it. For the most part, it is an argument against dualism. It argues that consciousness is (almost certainly) reducible to a physical process.

But this doesn't have anything to do with what ArisKatsaris wrote. He was questioning whether consciousness can be reduced to a purely computational process (without "some unidentified physical reaction that's absent to pure Turing machines".)

Consider the following argument sketch:

  1. Consciousness can be reduced to a physical process.
  2. Any physical process can be abstracted as a computation.
  3. Any computation can be modeled as a Turing Machine computation.
  4. Therefore, consciousness can be produced on a TM.

Each step above is at least somewhat problematic. Matt1 seemed to be arguing against step 1, and Drescher does respond to that. But ArisKatsaris seemed to be arguing against step 2. My choice would be to expand the definition of 'computation' slightly to include the interactive, asynchronous, and analog, so that I accept step 2 but deny step 3. Over the past decade, Wegner and Goldin have published many papers arguing that computation != TM.

It may well be that you can only get consciousness if you have a non-TM computation (mind) embedded in a system of sensors and actuators (body) which itself interacts with and is embedded within a (simulated?) real-time environment. That is, when you abstract the real-time interaction away, leaving only a TM computation, you have abstracted away an essential ingredient of consciousness.

Comment author: Vladimir_Nesov 07 April 2011 04:17:39PM *  1 point [-]

For the most part, it is an argument against dualism. It argues that consciousness is (almost certainly) reducible to a physical process.

It actually sketches what consciousness is and how it works, from which you can see how we could implement something like that as an abstract algorithm.

The value of that description is not so much in reaching a certain conclusion, but in reaching a sense of what exactly are we talking about and consequently why the question of whether "we can implement consciousness as an abstract algorithm" is uninteresting, since at that point you know more about the phenomenon than the words forming the question can access (similarly to how the question of whether crocodile is a reptile is uninteresting, once you know everything you need about crocodiles).

The problem here, I think, is that "consciousness" doesn't get unpacked, and so most of the argument is on the level of connotations. The value of understanding the actual details behind the word, even if just a little bit, is in breaking this predicament.

Comment author: AlephNeil 07 April 2011 04:41:12PM *  0 points [-]

leaving only a TM computation, you have abstracted away an essential ingredient of consciousness.

I think I can see a rube/blegg situation here.

A TM computation perfectly modelling a human brain (let's say) but without any real-time interaction, and a GLUT, represent the two ways in which we can have one of 'intelligent input-output' and 'functional organization isomorphic to that of an intelligent person' without the other.

What people think they mean by 'consciousness' - a kind of 'inner light' which is either present or not - doesn't (straightforwardly) correspond to anything that objectively exists. When we hunt around for objective properties that correlate with places where we think the 'inner light' is shining, we find that there's more than one candidate. Both 'intelligent input-output' and the 'intelligent functional organization' pick out exactly those beings we believe to be conscious - our fellow humans foremost among them. But in the marginal cases where we have one but not the other, I don't think there is a 'further fact' about whether 'real consciousness' is present.

However, we do face the 'further question' of how much moral value to assign in the marginal cases - should we feel guilty about switching off a simulation that no-one is looking at? Should we value a GLUT as an 'end in itself' rather than simply a means to our ends? (The latter question isn't so important given that GLUTs can't exist in practice.)

I wonder if our intuition that the physical facts underdetermine the answers to the moral questions is in some way responsible for the intuition of a mysterious non-physical 'extra fact' of whether so-and-so is conscious. Perhaps not, but there's definitely a connection.

Comment author: Perplexed 07 April 2011 05:34:00PM *  1 point [-]

... we do face the 'further question' of how much moral value to assign ...

Yes, and I did not even attempt to address that 'further question' because it seems to me that that question is at least an order of magnitude more confused than the relatively simple question about consciousness.

But, if I were to attempt to address it, I would begin with the lesson from Econ 101 that dissolves the question "What is the value of item X?". The dissolution begins by requesting the clarifications "Value to whom?" and "Valuable in what context?" So, armed with this analogy, I would ask some questions:

  1. Moral value to whom? Moral value in what context?
  2. If I came to believe that the people around me were p-zombies, would that opinion change my moral obligations toward them? If you shared my belief, would that change your answer to the previous question?
  3. Believed to be conscious by whom? Believed to be conscious in what context? Is it possible that a program object could be conscious in some simulated universe, using some kind of simulated time, but would not be conscious in the real universe in real time?

Comment author: Vladimir_Nesov 07 April 2011 04:56:34PM 1 point [-]

What people think they mean by 'consciousness' - a kind of 'inner light' which is either present or not - doesn't (straightforwardly) correspond to anything that objectively exists.

It does, to some extent. There is a simple description that moves the discussion further. Namely, consciousness is a sensory modality that observes its own operation, and as a result it also observes itself observing its own operation, and so on; as well as observing external input, observing itself observing external input, and so on; and observing itself determining external output, etc.
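
As a loose structural toy (not a model of real consciousness), the recursive observation described here might look like:

```python
# Toy sketch: an observation stream that includes records of its own
# acts of observing, cut off at a fixed depth for the illustration.
def observe(event, depth=0, max_depth=3, log=None):
    log = [] if log is None else log
    log.append((depth, event))
    if depth < max_depth:
        # observing is itself an event the system observes
        observe(("observation of", event), depth + 1, max_depth, log)
    return log

trace = observe("external input")
assert len(trace) == 4  # one record at each depth 0..3
```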

Comment author: Gray 06 April 2011 04:09:12PM 2 points [-]

That's quite easy: I can lift a rock, a Turing machine can't. A Turing machine can only manipulate symbols on a strip of tape, it can't do anything else that's physical.

I think you're trivializing the issue. A Turing machine is an abstraction, it isn't a real thing. The claim that a human being is a Turing machine means that, in the abstract, a certain aspect of human beings can be modeled as a Turing machine. Conceptually, it might be the case, for instance, that the universe itself can be modeled as a Turing machine, in which case it is true that a Turing machine can lift a rock.

Comment author: pjeby 06 April 2011 12:55:09AM *  0 points [-]

I can lift a rock, a Turing machine can't. A Turing machine can only manipulate symbols on a strip of tape, it can't do anything else that's physical.

So... you support euthanasia for quadriplegics, then, or anyone else who can't pick up a rock? Or people who are so crippled they can only communicate by reading and writing braille on a tape, and rely on other human beings to feed them and take care of them?

Your claim that consciousness (whatever we mean when we say that) is possible for Turing machines, rests on the assumption that consciousness is about computation alone, not about computation+some unidentified physical reaction that's absent to pure Turing machines resting in a box on a table.

This "unidentified physical reaction" would also need to not be turing-computable to have any relevance. Otherwise, you're just putting forth another zombie-world argument.

At this point, we have no empirical reason to think that this unidentified mysterious something has any existence at all, outside of a mere intuitive feeling that it "must" be so.

And so, all we have are thought experiments that rest on using slippery word definitions to hide where the questions are being begged, presented as intellectual justification for these vague intuitions... like arguments for why the world must be flat or the sun must go around the earth, because it so strongly looks and feels that way.

(IOW, people try to prove that their intuitions or opinions must have some sort of physical form, because those intuitions "feel real". The error arises from concluding that the physical manifestation must therefore exist "out there" in the world, rather than in their own brains.)

Comment author: ArisKatsaris 06 April 2011 01:12:22AM *  0 points [-]

This "unidentified physical reaction" would also need to not be turing-computable to have any relevance. Otherwise, you're just putting forth another zombie-world argument.

A zombie-world seems extremely improbable to have evolved naturally, (evolved creatures coincidentally speaking about their consciousness without actually being conscious), but I don't see why a zombie-world couldn't be simulated by a programmer who studied how to compute the effects of consciousness, without actually needing to have the phenomenon of consciousness itself.

The same way you don't need to have an actual solar system inside your computer, in order to compute the orbits of the planets -- but it'd be very unlikely to have accidentally computed them correctly if you hadn't studied the actual solar system.
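
The solar-system point is easy to make concrete: the sketch below computes a one-year circular orbit without containing any planet. Values are normalized for illustration (GM = 4π² in AU³/yr², so orbital speed at 1 AU is 2π AU/yr); it's a sketch, not a real ephemeris.

```python
import math

GM = 4 * math.pi ** 2        # heliocentric gravitational parameter, AU^3/yr^2
x, y = 1.0, 0.0              # start 1 AU from the star
vx, vy = 0.0, 2 * math.pi    # circular-orbit speed at 1 AU
dt = 0.0005                  # time step, in years

for _ in range(int(1.0 / dt)):               # integrate one year
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3  # Newtonian gravity
    vx += ax * dt; vy += ay * dt             # semi-implicit Euler step
    x += vx * dt; y += vy * dt

# After one period the computed planet is back near its starting point,
# yet nothing resembling a solar system exists inside the machine.
```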

At this point, we have no empirical reason to think that this unidentified mysterious something has any existence at all, outside of a mere intuitive feeling that it "must" be so.

Do you have any empirical reason to think that consciousness is about computation alone? To claim Occam's razor on this is far from obvious, as the only examples of consciousness (or talking about consciousness) currently concern a certain species of evolved primate with a complex brain, and some trillions of neurons, all of which have chemical and electrical effects; they aren't just doing computations in an abstract mathematical universe sans context.

Unless you assume the whole universe is pure mathematics, so there's no difference between the simulation of a thing and the thing itself. Which means there's no difference between the mathematical model of a thing and the thing itself. Which means the map is the territory. Which means Tegmark IV.

And Tegmark IV is likewise just a possibility, not a proven thing.

Comment author: pjeby 06 April 2011 01:39:53AM 1 point [-]

A zombie-world seems extremely improbable to have evolved naturally, (evolved creatures coincidentally speaking about their consciousness without actually being conscious), but I don't see why a zombie-world couldn't be simulated by a programmer who studied how to compute the effects of consciousness, without actually needing to have the phenomenon of consciousness itself.

This is a "does the tree make a sound if there's no-one there to hear it?" argument.

That is, it assumes that there is a difference between "effects of consciousness" and "consciousness itself" -- in the same way that a connection is implied between "hearing" and "sound".

That is, the argument hinges on the definition of the word whose definition is being questioned, and is an excellent example of intuitions feeling real.

Comment author: ArisKatsaris 06 April 2011 01:51:02AM 1 point [-]

That is, it assumes that there is a difference between "effects of consciousness" and "consciousness itself" -- in the same way that a connection is implied between "hearing" and "sound".

Not quite. What I'm saying is there might be a difference between the computation of a thing and the thing itself. It's basically an argument against the inevitability of Tegmark IV.

A Turing machine can certainly compute everything there is to know about lifting rocks and their effects -- but it still can't lift a rock. Likewise a Turing machine could perhaps compute everything there was to know about consciousness and its effects -- but perhaps it still couldn't actually produce one.

Or at least I've not been convinced that it's a logical impossibility for it to be otherwise; nor that I should consider it my preferred possibility that consciousness is solely computation, nothing else.

Wouldn't the same reasoning mean that all physical processes have to be solely computation? So it's not just "a Turing machine can produce consciousness", but "a Turing machine can produce a new physical universe", and therefore "Yeah, Turing Machines can lift real rocks, though it's real rocks in a subordinate real universe, not in ours".

Comment author: AlephNeil 06 April 2011 08:38:40PM *  0 points [-]

Here's what I think. It's just a "mysterious answer to a mysterious question" but it's the best I can come up with.

From the perspective of a simulated person, they are conscious. A 'perspective' is defined by a mapping of certain properties of the simulated person to abstract, non-uniquely determined 'mental properties'.

Perspectives and mental properties do not exist (that's the whole point - they're subjective!) It's a category mistake to ask: does this thing have a perspective? Things don't "have" perspectives the way they have position or mass. All we can ask is: "From this perspective (which might even be the perspective of a thermostat), how does the world look?"

The difference between a person in a simulation and a 'real person' is that defining the perspective of a real person is slightly 'easier', slightly 'more natural'. But if the simulated and real versions are 'functionally isomorphic' then any perspective we assign to one can be mapped onto the other in a canonical way. (And having pointed these two facts out, we thereby exhaust everything there is to be said about whether simulated people are 'really conscious'.)

ETA: I'm actually really interested to know what the downvoter thinks. I mean, I know these ideas are absurd but I can't see any other way to piece it together. To clarify: what I'm trying to do is take the everyday concept of "what it's likeness" as far as it will go without either (a) committing myself to a bunch of arbitrary extra facts (such as 'the exact moment when a person first becomes conscious' and 'facts of the matter' about whether ants/lizards/mice/etc are conscious) or (b) ditching it in favour of a wholly 'third person' Dennettian notion of consciousness. (If the criticism is simply that I ought to ditch it in favour of Dennett-style consciousness then I have no reply (ultimately I agree!) but you're kind-of missing the point of the exercise.)

Comment author: matt1 06 April 2011 01:04:16AM 0 points [-]

Thanks. My point exactly.

Comment author: matt1 06 April 2011 12:59:04AM 0 points [-]

You wrote: "This is the part where you're going astray, actually. We have no reason to think that human beings are NOT Turing-computable. In other words, human beings almost certainly are Turing machines."

At this stage, you've just assumed the conclusion. You've just assumed what you want to prove.

"Therefore, consciousness -- whatever we mean when we say that -- is indeed possible for Turing machines."

Having assumed that A is true, it is easy to prove that A is true. You haven't given an argument.

"To refute this proposition, you'd need to present evidence of a human being performing an operation that can't be done by a Turing machine."

It's not my job to refute the proposition. Currently, as far as I can tell, the question is open. If I did refute it, then my (and several other people's) conjecture would be proven. But if I don't refute it, that doesn't mean your proposition is true, it just means that it hasn't yet been proven false. Those are quite different things, you know.

Comment author: nshepperd 06 April 2011 02:38:19AM 6 points [-]

Well, how about this: physics as we know it can be approximated arbitrarily closely by a computable algorithm (and possibly computed directly as well, although I'm less sure about that. Certainly all calculations we can do involving manipulation of symbols are computable). Physics as we know it also seems to be correct to extremely precise degrees anywhere apart from inside a black hole.

Brains are physical things. Now when we consider that thermal noise should have more of an influence than the slight inaccuracy in any computation, what are the chances a brain does anything non-computable that could have any relevance to consciousness? I don't expect to see black holes inside brains, at least.
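
The "approximated arbitrarily closely" claim can be illustrated on an exactly solvable system: halving a discrete integrator's step size drives its error toward zero, here for x' = -x with x(0) = 1 (true solution e^(-t)). A toy sketch, not a claim about brain simulation specifically:

```python
import math

def euler_error(steps, t=1.0):
    dt, x = t / steps, 1.0
    for _ in range(steps):
        x -= x * dt          # one explicit Euler step of x' = -x
    return abs(x - math.exp(-t))

errors = [euler_error(2 ** k) for k in range(4, 10)]   # 16 ... 512 steps
assert all(a > b for a, b in zip(errors, errors[1:]))  # error shrinks each halving
```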

In any case, your original question was about the moral worth of turing machines, was it not? We can't use "turing machines can't be conscious" as an excuse not to worry about those moral questions, because we aren't sure whether turing machines can be conscious. "It doesn't feel like they should be" isn't really a strong enough argument to justify doing something that would result in, for example, the torture of conscious entities if we were incorrect.

So here's my actual answer to your question: as a rule of thumb, act as if any simulation of "sufficient fidelity" is as real as you or I (well, multiplied by your probability that such a simulation would be conscious, maybe 0.5, for expected utilities). This means no killing, no torture, etc.
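
The rule of thumb above is just an expected-value computation; the figures below are placeholders, not claims about real moral weights:

```python
def expected_moral_cost(cost_if_conscious, p_conscious):
    # Weight the moral cost of the act by the probability that the
    # simulated entity is conscious (hypothetical numbers, for illustration).
    return p_conscious * cost_if_conscious

# With p = 0.5 for a simulation of "sufficient fidelity", half the cost of
# the corresponding act against a person is already on the ledger.
assert expected_moral_cost(100.0, 0.5) == 50.0
```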

'Course, this shouldn't be a practical problem for a while yet, and we may have learned more by the time we're creating simulations of "sufficient fidelity".

Comment author: pjeby 06 April 2011 01:32:04AM 0 points [-]

at this stage, you've just assumed the conclusion. you've just assumed what you want to prove.

No - what I'm pointing out is that the question "what are the ethical implications for turing machines" is the same question as "what are the ethical implications for human beings" in that case.

It's not my job to refute the proposition. Currently, as far as I can tell, the question is open.

Not on Less Wrong, it isn't. But I think I may have misunderstood your situation as being one of somebody coming to Less Wrong to learn about rationality of the "Extreme Bayesian" variety; if you just dropped in here to debate the consciousness question, you probably won't find the experience much fun. ;-)

If I did refute it, then my (and several other people's) conjecture would be proven. But if I don't refute it, that doesn't mean your proposition is true, it just means that it hasn't yet been proven false. Those are quite different things, you know.

Less Wrong has different -- and far stricter -- rules of evidence than just about any other venue for such a discussion.

In particular, to meaningfully partake in this discussion, the minimum requirement is to understand the Mind Projection Fallacy at an intuitive level, or else you'll just be arguing about your own intuitions... and everybody will just tune you out.

Without that understanding, you're in exactly the same place as a creationist wandering into an evolutionary biology forum, without understanding what "theory" and "evidence" mean, and expecting everyone to disprove creationism without making you read any introductory material on the subject.

In this case, the introductory material is the Sequences -- especially the ones that debunk supernaturalism, zombies, definitional arguments, and the mind projection fallacy.

When you've absorbed those concepts, you'll understand why the things you're saying are open questions are not even real questions to begin with, let alone propositions to be proved or disproved! (They're actually on a par with creationists' notions of "missing links" -- a confusion about language and categories, rather than an argument about reality.)

I only replied to you because I though perhaps you had read the Sequences (or some portion thereof) and had overlooked their application in this context (something many people do for a while until it clicks that, oh yeah, rationality applies to everything).

So, at this point I'll bow out, as there is little to be gained by discussing something when we can't even be sure we agree on the proper usage of words.

Comment author: Kyre 06 April 2011 06:18:23AM 4 points [-]

Can you expand on why you expect human moral intuition to give reasonably clear answers when applied to situations involving conscious machines?

Comment author: jschulter 08 April 2011 10:52:26PM *  3 points [-]

Another option:

  • it's morally acceptable to terminate a conscious program if it wants to be terminated

  • it's morally questionable (wrong, but to a lesser degree) to terminate a conscious program against its will if it is also possible to resume execution

  • it is horribly wrong to turn off a conscious program against its will if it cannot be resumed (murder fits this description currently)

  • performing other operations on the program that it desires would likely be morally acceptable, unless the changes are socially unacceptable

  • performing other operations on the program against its will is morally unacceptable to a variable degree (brainwashing fits in this category)

These seem rather intuitive to me, and for the most part I just extrapolated from what it is moral to do to a human. Conscious program refers here to one running on any system, including wetware, such that these apply to humans as well. I should note that I am in favor of euthanasia in many cases, in case that part causes confusion.

Comment author: matt1 05 April 2011 10:19:38PM 1 point [-]

btw, I'm fully aware that I'm not asking original questions or having any truly new thoughts about this problem. I just hoped maybe someone would try to answer these old questions given that they had such confidence in their beliefs.

Comment author: Nominull 06 April 2011 12:23:14AM 1 point [-]

If you think 1 is the correct answer, you should be aware that this website is for people who do not wait patiently for a someday where we might have an understanding. One of the key teachings of this website is to reach out and grab an understanding with your own two hands. And you might add a 4 to that list, "death threats", which does not strike me as the play either.

Comment author: matt1 06 April 2011 01:02:17AM *  4 points [-]

You should be aware that in many cases, the sensible way to proceed is to be aware of the limits of your knowledge. Since the website preaches rationality, it's worth not assigning probabilities of 0% or 100% to things which you really don't know to be true or false. (btw, I didn't say 1 is the right answer; I think it's reasonable, but I think the answer is 3.)

And sometimes you do have to wait for an answer. For a lesson from math, consider that Fermat had flat-out no hope of proving his "last theorem", and it required a couple hundred years of apparently unrelated developments to get there... one could easily give a few hundred examples of that sort of thing in any hard science with a long enough history.

Comment author: Nominull 06 April 2011 03:31:01AM 6 points [-]

Uh I believe you will find that Fermat in fact had a truly marvelous proof of his last theorem? The only thing he was waiting on was the invention of a wider margin.

Comment author: TheOtherDave 06 April 2011 02:04:33PM 8 points [-]

Little-known non-fact: there were wider margins available at the time, but it was not considered socially acceptable to use them for accurate proofs, or more generally for true statements at all; they were merely wide margins for error.

Comment author: [deleted] 06 April 2011 03:45:07AM 0 points [-]

I wonder how much the fame of Fermat's Last Theorem is due to the fact that, (a) he claimed to have found a proof, and (b) nobody was able to prove it. Had he merely stated it as a conjecture without claiming that he had proven it, would anywhere near the same effort have been put into proving it?

Comment author: JoshuaZ 12 April 2011 08:24:51PM 0 points [-]

Had he merely stated it as a conjecture without claiming that he had proven it, would anywhere near the same effort have been put into proving it?

Almost certainly not. A lot of the historical interest came precisely because he claimed to have a proof. In fact, there were a fair number of occasions where he claimed to have a proof and a decent chunk of number theory in the 1700s and early 1800s was finding proofs for the statements that Fermat had said he had a proof for. It was called "Fermat's Last Theorem" because it was the last one standing of all his claims.

Comment author: matt1 05 April 2011 08:35:49PM *  5 points [-]

This is a fair answer. I disagree with it, but it is fair in the sense that it admits ignorance. The two distinct points of view are that (mine) there is something about human consciousness that cannot be explained within the language of Turing machines and (yours) there is something about human consciousness that we are not currently able to explain in terms of Turing machines. Both people at least admit that consciousness has no explanation currently, and absent future discoveries I don't think there is a sure way to tell which one is right.

I find it hard to fully develop a theory of morality consistent with your point of view. For example, would it be wrong to (given a computer simulation of a human mind) run that simulation through a given painful experience over and over again? Let us assume that the painful experience has happened once...I just ask whether it would be wrong to rerun that experience. After all, it is just repeating the same deterministic actions on the computer, so nothing seems to be wrong about this. Or, for example, if I make a backup copy of such a program, and then allow that backup to run for a short period of time under slightly different stimuli, at which point does that copy acquire an existence of its own, that would make it wrong to delete that copy in favor of the original? I could give many other similar questions, and my point is not that your point of view denies a morality, but rather that I find it hard to develop a full theory of morality that is internally consistent and that matches your assumptions (not that developing a full theory of morality under my assumptions is that much easier).

Among professional scientists and mathematicians, I have encountered both viewpoints: those who hold it obvious to anyone with even the simplest knowledge that Turing machines cannot be conscious, and those who hold that the opposite is true. Mathematicians seem to lean a little more toward the first viewpoint than other disciplines, but it is a mistake to think that a professional, world-class research-level knowledge of physics, neuroscience, mathematics, or computer science necessarily inclines one towards the soulless viewpoint.

Comment author: scav 06 April 2011 12:40:00PM 6 points [-]

I find it hard to fully develop a theory of morality consistent with your point of view.

I am sceptical of your having a rigorous theory of morality. If you do have one, I am sceptical that it would be undone by accepting the proposition that human consciousness is computable.

I don't have one either, but I also don't have any reason to believe in the human meat-computer performing non-computable operations. I actually believe in God more than I believe in that :)

Comment author: Emile 07 April 2011 06:43:57AM 5 points [-]

I find it hard to fully develop a theory of morality consistent with your point of view. For example, would it be wrong to (given a computer simulation of a human mind) run that simulation through a given painful experience over and over again? [...]

I agree that such moral questions are difficult - but I don't see how the difficulty of such questions could constitute evidence about whether a program can "be conscious" or "have a soul" (whatever those mean) or be morally relevant (which has the advantage of being less abstract a concept).

You can ask those same questions without mentioning Turing Machines: what if we have a device capable of making a perfect copy of any physical object, down to each individual quark? Is it morally wrong to kill such a copy of a human? Does the answer to that question have any relevance to the question of whether building such a device is physically possible?

To me, it sounds a bit like saying that since our protocol for seating people around a table are meaningless in zero gravity, then people cannot possibly live in zero gravity.

Comment author: matt1 05 April 2011 10:14:11PM *  1 point [-]

btw, I'm fully aware that I'm not asking original questions or having any truly new thoughts about this problem. I just hoped maybe someone would try to answer these old questions given that they had such confidence in their beliefs.

Comment author: pjeby 06 April 2011 04:02:20PM 10 points [-]

I just hoped maybe someone would try to answer these old questions given that they had such confidence in their beliefs.

This website has an entire two-year course of daily readings that precisely identifies which parts are open questions, and which ones are resolved, as well as how to understand why certain of your questions aren't even coherent questions in the first place.

This is why you're in the same position as a creationist who hasn't studied any biology - you need to actually study this, and I don't mean, "skim through looking for stuff to argue with", either.

Because otherwise, you're just going to sit there mocking the answers you get, and asking silly questions like why are there still apes if we evolved from apes... before you move on to arguments about why you shouldn't have to study anything, and that if you can't get a simple answer about evolution then it must be wrong.

However, just as in the evolutionary case, just as in the earth-being-flat case, just as in the sun-going-round-the-world case, the default human intuitions about consciousness and identity are just plain wrong...

And every one of the subjects and questions you're bringing up, has premises rooted in those false intuitions. Until you learn where those intuitions come from, why our particular neural architecture and evolutionary psychology generates them, and how utterly unfounded in physical terms they are, you'll continue to think about consciousness and identity "magically", without even noticing that you're doing it.

This is why, in the world at large, these questions are considered by so many to be open questions -- because to actually grasp the answers requires that you be able to fully reject certain categories of intuition and bias that are hard-wired into human brains.

(And which, incidentally, have a large overlap with the categories of intuition that make other supernatural notions so intuitively appealing to most human beings.)

Comment author: novalis 06 April 2011 12:34:08AM 0 points [-]

What's wrong with Dennett's explanation of consciousness?

Comment author: matt1 06 April 2011 12:55:21AM 1 point [-]

Sorry, not familiar with that. Can it be summarized?

Comment author: RobinZ 06 April 2011 12:48:47PM 0 points [-]

There is a Wikipedia page, for what it's worth.

Comment author: novalis 06 April 2011 01:45:33AM *  0 points [-]
Comment author: Alicorn 05 April 2011 07:37:52PM 3 points [-]

I love this comment. Have a cookie.

Comment author: cousin_it 05 April 2011 07:41:38PM 3 points [-]

Agreed. Constant, have another one on me. Alicorn, it's ironic that the first time I saw this reply pattern was in Yvain's comment to one of your posts.

Comment author: Clippy 05 April 2011 07:43:55PM 1 point [-]

Why not napalm?

Comment author: gwern 09 April 2011 07:21:41PM 3 points [-]

It's greasy and will stain your clothes.

Comment author: HonoreDB 06 April 2011 06:45:13AM 4 points [-]

I like Constant's reply, but it's also worth emphasizing that we can't solve scientific problems by interrogating our moral intuitions. The categories we instinctively sort things into are not perfectly aligned with reality.

Suppose we'd evolved in an environment with sophisticated 2011-era artificially intelligent Turing-computable robots--ones that could communicate their needs to humans, remember and reward those who cooperated, and attack those who betrayed them. I think it's likely we'd evolve to instinctively think of them as made of different stuff than anything we could possibly make ourselves, because that would be true for millions of years. We'd evolve to feel moral obligations toward them, to a point, because that would be evolutionarily advantageous, to a point. Once we developed philosophy, we might take this moral feeling as evidence that they're not Turing-computable--after all, we don't have any moral obligations to a mere mass of tape.

Comment author: DanielVarga 06 April 2011 10:09:59AM 2 points [-]

Hi Matt, thanks for dropping by. Here is an older comment of mine that tries to directly address what I consider the hardest of your questions: How to distinguish from the outside between two computational processes, one conscious, the other not. I'll copy it here for convenience. Most of the replies to you here can be safely considered Less Wrong consensus opinion, but I am definitely not claiming that about my reply.

I start my answer with a Minsky quote:

"Consciousness is overrated. What we call consciousness now is a very imperfect summary in one part of the brain of what the rest is doing." - Marvin Minsky

I believe with Minsky that consciousness is a very anthropocentric concept, inheriting much of the complexity of its originators. I actually have no problem with an anthropocentric approach to consciousness, so I like the following intuitive "definition": X is conscious if it is not silly to ask "what is it like to be X?". The subtle source of anthropocentrism here, of course, is that it is humans who do the asking. As materialists, we just can't formalize this intuitive definition without mapping specific human brain functions to processes of X. In short, we inherently need human neuroscience. So it is not too surprising that we will not find a nice, clean decision procedure to distinguish between two computational processes, one conscious the other not.

Most probably you are not happy with this anthropocentric approach. Then you will have to distill some clean, mathematically tractable concept from the messy concept of consciousness. If you agree with Hofstadter and Minsky, then you will probably reach something related to self-reflection. This may or may not work, but I believe that you will lose the spirit of the original concept during such a formalization. Your decision procedure will probably give unexpected results for many things: various simple, very unintelligent computer programs, hive minds, and maybe even rooms full of people.

This ends my old comment, and I will just add a footnote related to ethical implications. With HonoreDB, I can in principle imagine a world with cooperating and competing agents, some conscious, others not, but otherwise having similar negotiating power. I believe that the ethical norms emerging in this imagined world would not even mention consciousness. If you want to build an ethical system for humans, you can "arbitrarily" decide that protecting consciousness is a terminal value. Why not? But if you want to build a non-anthropocentric ethical system, you will see that the question of consciousness is orthogonal to its issues.

Comment author: kurokikaze 11 April 2011 11:42:02AM 1 point [-]

There's one more aspect to that. You are "morally ok" to turn off only your own computer. Messing with other people's stuff is "morally bad". And I don't think you can "own" a self-aware machine any more than you can "own" a human being.

Comment author: TheOtherDave 11 April 2011 01:08:46PM 3 points [-]

So, as long as we're going down this road: it seems to follow from this that if someone installs, without my permission, a self-aware algorithm on my computer, the computer is no longer mine... it is, rather, an uninvited intruder in my home, consuming my electricity and doing data transfer across my network connection.

So I've just had my computer stolen, and I'm having my electricity and bandwidth stolen on an ongoing basis. And maybe it's playing Jonathan Coulton really loudly on its speakers or otherwise being obnoxious.

But I can't kick it out without unplugging it, and unplugging it is "morally bad." So, OK... is it "morally OK" to put it on a battery backup and wheel it to the curb, then let events take their natural course? I'm still out a computer that way, but at least I get my network back. (Or is it "morally bad" to take away the computer's network access, also?)

More generally, what recourse do I have? Is it "morally OK" for me to move to a different house and shut off the utilities? Am I obligated, on your view, to support this computer to the day I die?

Comment author: Normal_Anomaly 12 April 2011 01:11:20AM 2 points [-]

I consider this scenario analogous to one in which somebody steals your computer and also leaves a baby in a basket on your doormat.

Comment author: TheOtherDave 12 April 2011 02:20:11AM 2 points [-]

Except we don't actually believe that most babies have to be supported by their parents in perpetuity... at some point, we consider that the parents have discharged their responsibility and if the no-longer-baby is still incapable of arranging to be fed regularly, it becomes someone else's problem. (Perhaps its own, perhaps a welfare system of some sort, etc.) Failing to continue to support my 30-year-old son isn't necessarily seen as a moral failing.

Comment author: Alicorn 12 April 2011 02:46:16AM 1 point [-]

Barring disability.

Comment author: TheOtherDave 12 April 2011 03:01:39AM 1 point [-]

(nods) Hence "most"/"necessarily." Though I'll admit, my moral intuitions in those cases are muddled... I'm really not sure what I want to say about them.

Comment author: Normal_Anomaly 12 April 2011 02:06:11PM 0 points [-]

Perhaps the computer will eventually become mature enough to support verself, at which point it has no more claim on your resources. Otherwise, ve's a disabled child and the ethics of that situation applies.

Comment author: kurokikaze 11 April 2011 01:53:58PM 0 points [-]

Well, he will be an intruder (in my opinion). Like, an "unwanted child" kind of intruder. It consumes your time and money, and you can't just throw it away.

Comment author: TheOtherDave 11 April 2011 02:11:16PM 3 points [-]

Sounds like it pretty much sucks to be me in that scenario.

Comment author: David_Gerard 06 April 2011 11:46:35AM 0 points [-]

Of course, my original comment had nothing to do with god.

No indeed. However, the similarity struck me: both assume that a supernatural explanation is required for morality to hold.