trist comments on L-zombies! (L-zombies?) - Less Wrong

Post author: Benja · 07 February 2014 06:30PM · 21 points


Comment author: trist 07 February 2014 09:19:32PM *  10 points [-]

Are cryopreserved humans l-zombies?

keeping in mind that if they were an l-zombie, they would still say "I have conscious experiences, so clearly I can't be an l-zombie"?

As well they should. For l-zombies to do anything they need to be run, whereupon they stop being l-zombies.

Comment author: Benja 07 February 2014 10:30:15PM 2 points [-]

For l-zombies to do anything they need to be run, whereupon they stop being l-zombies.

Omega doesn't necessarily need to run a conscious copy of Eliezer to be pretty sure that Eliezer would pay up in the counterfactual mugging; it could use other information about Eliezer, like Eliezer's comments on LW, the way that I just did. It should be possible to achieve pretty high confidence that way about what Eliezer-being-asked-about-a-counterfactual-mugging would do, even if that version of Eliezer should happen to be an l-zombie.

Comment author: ThisSpaceAvailable 08 February 2014 07:39:23AM 3 points [-]

But you see Eliezer's comments because a conscious copy of Eliezer has been run. If I'm figuring out what output a program "would" give "if" it were run, in what sense am I not running it? Suppose I have a program MaybeZombie, and I run a Turing Test with it as the Testee and you as the Tester. Every time you send a question to MaybeZombie, I figure out what MaybeZombie would say if it were run, and send that response back to you. Can I get MaybeZombie to pass a Turing Test, without ever running it?

Comment author: Benja 10 February 2014 11:05:04PM 2 points [-]

But you see Eliezer's comments because a conscious copy of Eliezer has been run.

A conscious copy of Eliezer that thought about what Eliezer would do when faced with that situation, not a conscious copy of Eliezer actually faced with that situation -- the latter Eliezer is still an l-zombie, if we live in a world with l-zombies.

Comment author: alexey 08 February 2014 06:59:37PM 1 point [-]

If I'm figuring out what output a program "would" give "if" it were run, in what sense am I not running it?

In the sense of not producing the effects on the outside world that actually running it would produce. E.g., given this program:

int goodbye_world() {
    launch_nuclear_missiles();
    return 0;
}

I can conclude that running it would launch missiles (assuming a suitable implementation of the launch_nuclear_missiles function) and output 0, without actually launching the missiles.

Comment author: FeepingCreature 08 February 2014 08:01:33PM 5 points [-]

Within the domain that the program has run (your imagination) missiles have been launched.

Comment author: ThisSpaceAvailable 09 February 2014 06:08:18AM 2 points [-]

Benja defines an l-zombie as "a Turing machine which, if anybody ever ran it..." A Turing Machine can't launch nuclear missiles. A nuclear missile launcher can be hooked up to a Turing Machine, and launch nuclear missiles on the condition that the Turing Machine reaches some state, but the Turing Machine isn't launching the missiles; the nuclear missile launcher is.

Comment author: alexey 10 February 2014 01:13:30PM 0 points [-]

But I can still do static analysis of a Turing machine without running it. E.g., I can determine in finite time that a T.M. would never terminate on a given input.

Comment author: ThisSpaceAvailable 16 February 2014 07:02:24AM *  1 point [-]

But my point is that at some point, a "static analysis" becomes functionally equivalent to running it. If I do a "static analysis" to find out what the state of the Turing machine will be at each step, I will get exactly the same result (a sequence of states) that I would have gotten if I had run it for "real", and I will have to engage in computation that is, in some sense, equivalent to the computation that the program asks for.

Suppose I write a program that is short and simple enough that you can go through it and figure out in your head exactly what the computer will do at each line of code. In what sense has your mind not run the program, but a computer that executes the program has?

Imagine the following dialog:

Alice: "So, you've installed a javascript interpreter on your machine?"

Bob: "Nope."

Alice: "But I clicked on this javascript program, and I got exactly what I was supposed to get."

Bob: "Oh, that's because I've associated javascript source code files with a program that looks at javascript code, determines what the output would be if the program had been run, and outputs the result."

Alice: "So... you've installed a javascript interpreter."

Bob: "No. I told you, it doesn't run the program, it just computes what the result of the program would be."

Alice: "But that's what a javascript interpreter is. It's a program that looks at source code, determines what the proper output is, and gives that output."

Bob: "Yes, but an interpreter does that by running the program. My program does it by doing a static analysis."

Alice: "So, what is the difference? For instance, if I write a program that adds two plus two, what is the difference?"

Bob: "An interpreter would calculate what 2+2 is. My program calculates what 2+2 would be, if my computer had calculated the sum. But it doesn't actually calculate the sum. It just does a static analysis of a program that would have calculated the sum."

I don't see how, outside of a rarefied philosophical context, Bob wouldn't be found to be stark raving mad. It seems to me that you're arguing for p-zombies (at least, behavioral zombies): one could build a machine that, given any input, tells you what the output would be if it were a conscious being. Such a machine would be indistinguishable from an actually conscious being, without actually being conscious.

Comment author: alexey 16 February 2014 07:11:17AM *  2 points [-]

But my point is that at some point, a "static analysis" becomes functionally equivalent to running it. If I do a "static analysis" to find out what the state of the Turing machine will be at each step, I will get exactly the same result (a sequence of states) that I would have gotten if I had run it for "real", and I will have to engage in computation that is, in some sense, equivalent to the computation that the program asks for.

Crucial words here are "at some point". And Benja's original comment (as I understand it) says precisely that Omega doesn't need to get to that point in order to find out with high confidence what Eliezer's reaction to counterfactual mugging would be.

Comment author: alexey 10 February 2014 01:33:03PM 0 points [-]

Suppose I've seen records of some inputs and outputs of a program: 1->2, 5->10, 100->200. In every case I am aware of, it was given a number as input and output the doubled number. I don't have the program's source, nor the ability to access the computer it's actually running on. I form a hypothesis: if this program received input 10000, it would output 20000. Am I running the program?

In this case: doubling program <-> Eliezer, inputs <-> comments and threads he is answering, outputs <-> his replies.

Comment author: Lumifer 10 February 2014 07:00:46PM 0 points [-]

Am I running the program?

No, you've built your model of the program and you're running your own model.

Comment author: Coscott 07 February 2014 09:48:40PM *  1 point [-]

I do not think your complaint is valid.

I can ask questions like what is the googolth digit of pi, without calculating it.

Similarly, you can ask questions whether a Turing Machine mind would believe it has conscious experience without actually running it.

Comment author: satt 08 February 2014 05:48:22PM 2 points [-]

I can certainly ask, "were this Turing machine mind to be run, would it believe it were conscious?" But that doesn't give me licence to assert that "because this Turing machine would be conscious if it were run, it is conscious even though it has not run, is not running, and never will run".

A handheld calculator that's never switched on will never tell us the sum of 6 and 9, even if we're dead certain that there's nothing wrong with the calculator.

Comment author: Coscott 08 February 2014 05:59:14PM 1 point [-]

I am not trying to say it would be conscious without being run. (Although I believe it would)

I am trying to say that the computation as an abstract function has an output which is the sentence "I believe I am conscious."

Comment author: satt 08 February 2014 06:07:20PM 3 points [-]

I am not trying to say it would be conscious without being run. [...] I am trying to say that the computation as an abstract function has an output which is the sentence "I believe I am conscious."

Now I think I agree with you. (Because I think you're using "output" here in the counterfactual sense.)

But now these claims are too weak to invalidate what trist is saying. If we all agree that l-zombies, being the analogue of the calculator that's never switched on, never actually say anything (just as the calculator never actually calculates anything), then someone who's speaking can't be an l-zombie (just as a calculator that's telling us 6 + 9 = 15 must be switched on).

Comment author: Coscott 08 February 2014 06:13:36PM 1 point [-]

Okay, I guess I interpreted trist incorrectly.

I agree that in order for the L-zombie to do anything in this world, it must be run. (Although I am very open to the possibility that I am wrong about that and prediction without simulation is possible)

Comment author: somervta 07 February 2014 10:07:29PM 0 points [-]

Well, yes, it would if you ran it :D

Comment author: Coscott 07 February 2014 10:15:01PM 4 points [-]

Pi has a googolth digit even if we don't run the calculation. A decision procedure has an output even if we do not run it. We just do not know what it is. I do not see the problem.

Comment author: ThisSpaceAvailable 08 February 2014 07:34:08AM 2 points [-]

No, a decision procedure doesn't have an output if you don't run it. There is something that would be the output if you ran it. But if you ran it, it would not be an l-zombie.

Let's give this program a name. Call it MaybeZombie. Benja is saying "If MaybeZombie is an l-zombie, then MaybeZombie would say 'I have conscious experiences, so clearly I can't be an l-zombie' ". Benja did not say "If MaybeZombie is an l-zombie, then if MaybeZombie were run, MaybeZombie would say 'I have conscious experiences, so clearly I can't be an l-zombie' ".

There is no case in which a program can think "I have conscious experiences, so clearly I can't be an l-zombie" and be wrong. You're trying to argue that based on a mixed counterfactual. You're saying "I'm sitting here in Universe A, and I'm imagining Universe B where there is this program MaybeZombie that isn't run, and the me-in-Universe-A is imagining the me-in-Universe-B imagining a Universe C in which MaybeZombie is run. And now the me-in-Universe-A is observing that the me-in-Universe-B would conclude that MaybeZombie-in-Universe-C would say 'I have conscious experiences, so clearly I can't be an l-zombie'."

You're evaluating "Does MaybeZombie say 'I have conscious experiences, so clearly I can't be an l-zombie'?" in Universe C, but evaluating "Is MaybeZombie conscious?" in Universe B. You're concluding that MaybeZombie is "wrong" by mixing two different levels of counterfactuals. The analogy to pi is not appropriate, because the properties of pi don't change depending on whether we calculate it. The properties of MaybeZombie do depend on whether MaybeZombie is run.

It is perfectly valid for any mind to say "I have conscious experiences, so clearly I can't be an l-zombie". The statement "I can't be an l-zombie" clearly means "I can't be an l-zombie in this universe".

Comment author: wedrifid 08 February 2014 11:09:24AM 2 points [-]

No, a decision procedure doesn't have an output if you don't run it. There is something that would be the output if you ran it.

I'm not sure that is a particularly useful way to carve reality. At best it means that we need another word for the thing that Coscott is referring to as 'output' that we can use instead of the word output. The thing Coscott is talking about is a much more useful thing when analysing decision procedures than the thing you have defined 'output' to mean.

Comment author: IlyaShpitser 08 February 2014 11:49:58AM *  3 points [-]

That's just a potential outcome, pretty standard stuff:

http://www.stat.cmu.edu/~fienberg/Rubin/Rubin-JASA-05.pdf

"What would happen if hypothetically X were done" is one of the most common targets in statistical inference. That's a huge chunk of what Fisher/Neyman had done (originally in the context of agriculture: "what if we had given this fertilizer to this plot of land?") This is almost a hundred years ago.

Comment author: Coscott 08 February 2014 07:49:56AM 1 point [-]

I do not understand how the properties of MaybeZombie depend on whether or not MaybeZombie is run.

Comment author: Baughn 08 February 2014 04:28:07PM 3 points [-]

Because consciousness isn't a property of MaybeZombie, it's a property of the process of running it?

Comment author: Coscott 08 February 2014 06:07:36PM 1 point [-]

No, a decision procedure doesn't have an output if you don't run it.

This made me think that he was talking about the property of the output, so my misunderstanding was relative to that interpretation.

I personally think that consciousness is a property of MaybeZombie, and that L-zombies do not make sense, but the position I was trying to defend in this thread was that we can talk about the theoretical output of a function without actually running that function. (We might not be able to say very much, since perhaps we can't know the output without running it.)

Comment author: ThisSpaceAvailable 09 February 2014 06:27:59AM 0 points [-]

We are getting in a situation similar to the Ontological Argument for God, in which an argument gets bogged down in equivocation. The question becomes: what is a valid predicate of MaybeZombie? One could argue that there is a distinction to be made between such predicates as "the program has a Kolmogorov complexity of less than 3^^^3 bits" on the one hand, versus such predicates as "the program has been run" on the other. The former is an inherent property, while the latter is extrinsic to the program, and in some sense is not a property of the program itself. And yet, grammatically at least, "has been run" is the predicate of "the program" in the sentence "the program has been run". If "has said 'I must not be a zombie' " is not a valid predicate of MaybeZombie, then talking about whether MaybeZombie has said 'I must not be a zombie' is invalid. If one can meaningfully talk about whether MaybeZombie has said 'I must not be a zombie', then "has said 'I must not be a zombie' " is a valid predicate of MaybeZombie. Since this predicate is obviously false if MaybeZombie isn't run, and could be true if MaybeZombie is run, then this is a property of MaybeZombie that depends on whether MaybeZombie is run.