
# CronoDAS comments on Zombies Redacted - Less Wrong

33 points · 02 July 2016 08:16PM




Comment author: 02 July 2016 09:04:32PM · 4 points

Can you make "something" with the same input-output behavior as a human, and have that thing not be conscious? It doesn't have to be atom-by-atom identical.

Comment author: 02 July 2016 09:08:53PM · 8 points

Sure. Measure a human's input and output. Play back the recording. Or did you mean across all possible cases? In the latter case see http://lesswrong.com/lw/pa/gazp_vs_glut/

Comment author: 03 July 2016 12:33:17AM · 3 points

Yeah, I meant in all possible cases. Start with a Brain In A Vat. Scan that brain and implement a GLUT in Platospace, then hook up the Brain-In-A-Vat and the GLUT to identical robots, and you'll have one robot that's conscious and one that isn't, right?
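To make the thought experiment concrete, here is a toy sketch (entirely hypothetical, and absurdly scaled down; a real GLUT covering a human's possible input histories would be astronomically large). The "brain" computes its output from the input history; the GLUT merely looks up an answer precomputed from that same brain. Over the covered domain, the two are input-output identical.

```python
# Toy illustration: a "brain" that computes, vs. a GLUT that only looks up.
from itertools import product

def brain(history):
    """Stand-in for the conscious process: computes a reply from the input history."""
    return sum(history) % 2  # trivial placeholder computation

# Precompute a GLUT covering every possible binary input history up to some length.
MAX_LEN = 3
glut = {
    hist: brain(hist)
    for n in range(MAX_LEN + 1)
    for hist in product((0, 1), repeat=n)
}

def glut_robot(history):
    """Replays the precomputed answer; performs no computation of its own."""
    return glut[tuple(history)]

# Identical input-output behavior over the covered domain:
assert all(glut_robot(h) == brain(h) for h in glut)
```

The point of the exercise: nothing in `glut_robot` resembles the computation inside `brain`, yet no behavioral test over the covered inputs can tell them apart.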

Comment author: 05 July 2016 02:03:21PM · 1 point

Did you read the GAZP vs GLUT article? In the GLUT setup, the conscious entity is the conscious human (or, actually, something more like a googolplex of conscious humans) that produced the GLUT. The robot replaying the GLUT is no more conscious than a phone transmitting an answer from one conscious human to another, which is basically what it is doing: replaying the answer a previous, conscious human gave for the same input.

Comment author: 05 July 2016 05:55:20PM · 0 points

I don't think the origin of the GLUT matters at all. It could have sprung up out of pure randomness. The point is that it exists, and appears to be conscious by every outward measure, but isn't.

Comment author: 05 July 2016 09:34:37PM · 4 points

It definitely does matter.

If you build a human-like robot remotely controlled by a living human (or by a brain-in-a-vat) and interact with the robot, it will appear to be conscious but isn't; and yet it wouldn't be a zombie in any way, because what actually produces the responses about being conscious is the human (or the brain), not the robot.

If the GLUT was produced by a conscious human (or a conscious human simulation), then it's akin to a telepresence robot, only slightly more remote (just as a telepresence robot is only slightly more remote than a phone).

And if it "sprung up out of pure randomness"... if you are ready to accept that level of improbability, you can accept anything: for example, the hypothesis that no human actually wrote what I'm replying to, and that it is just the product of cosmic rays hitting my computer in exactly the pattern needed for such a text to appear in my browser. Or that Shakespeare's works were actually written by monkeys typing at random. If you start accepting such ridiculous levels of improbability, something even lower than one chance in a googolplex, you are accepting everything and anything, making all attempts to reason or discuss pointless.

Comment author: 09 July 2016 06:23:00AM · 0 points

The question is whether the GLUT is conscious. I don't believe that it is.

Perhaps it was created by a conscious process. But that process is gone now. I don't believe that torturing the GLUT is wrong, for example, because the conscious entity has already been tortured. Nothing I do to the GLUT can causally interact with the conscious process that created it.

This is why I say the origin of the GLUT doesn't matter. I'm not saying that I believe GLUTs are actually likely to exist, let alone appear from randomness. But the origin of a thing shouldn't matter to the question of whether or not it is conscious.

If we can observe every part of the GLUT but know nothing about its origin, we should still be able to determine whether it's conscious. The question shouldn't depend on its past history, but only on its current state.

I believe it might be possible for a non-conscious entity to create a GLUT, or at least to fake consciousness: a simple machine learning algorithm that imitates human speech or text, say, or AIXI with its unlimited computing power, which does nothing other than brute force. I wouldn't feel bad about deleting an artificial neural network, or destroying an AIXI.

The question that bothers me is: what about a bigger, more human-like neural network? Or a more approximate, less brute-force version of AIXI? When does an intelligence algorithm gain moral weight? This question bothers me a lot, and I think it's what people are trying to get at when they talk about GLUTs.

Comment author: 18 July 2016 04:21:21PM · 1 point

So, the question being asked here appears to be, "Can a GLUT be considered conscious?" I claim that this question is actually a stand-in for multiple different questions, each of which I will address individually.

1) Do the processes that underlie the GLUT's behavior (input/output) cause it to possess subjective awareness?

Without a good understanding of what exactly "subjective awareness" is and how it arises, this question is extremely difficult to answer. At a glance, however, it seems intuitively plausible (indeed, probable) that whatever processes underlie "subjective awareness", they need to be more complex than simply looking things up in an (admittedly enormous) database. So, I'm going to answer this one with a tentative "no".

2) Does the GLUT's existence imply the presence of consciousness (subjective awareness) elsewhere in the universe?

To answer this question, let's consider the size of a GLUT that contains all possible inputs and outputs for a conscious being. Now consider the set of all possible GLUTs of that size. Of those possible GLUTs, only a vanishingly minuscule fraction encode anything even remotely resembling the behavior of a conscious being. The probability of such a GLUT being produced by accident is virtually 0. (I think the actual probability should be on the order of 2^-K, where K is the Kolmogorov complexity of the brain of the being in question, but I could be wrong.)

As such, it's more or less impossible for the GLUT to have been produced by chance; it's indescribably more likely that there exists some other conscious process in the universe from which the GLUT's specifications were taken. In other words, if you ever encounter a GLUT that seems to behave like a conscious being, you can deduce with probability ~1 that consciousness exists somewhere in that universe. Thus, the answer to this question is "yes" with probability ~1.
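The counting argument above can be put into toy numbers (all figures below are hypothetical, chosen only to show the shape of the calculation). A GLUT mapping each of N possible inputs to one of B outputs takes N·log2(B) bits to specify, while at most about 2^K of the B^N possible GLUTs encode behavior compressible to K bits; so the fraction of GLUTs that a random process could hit is at most roughly 2^(K − N·log2(B)).

```python
from math import log2

# Purely illustrative magnitudes, not measurements.
N = 10**9   # distinct inputs the GLUT must cover
B = 256     # possible outputs per input
K = 10**8   # bits needed to specify the brain (assumed compressibility)

total_bits = N * log2(B)        # bits to pin down one arbitrary GLUT
log2_fraction = K - total_bits  # log2 of the chance-origin fraction (upper bound)

print(log2_fraction < 0)  # True: chance origin is overwhelmingly unlikely
```

With these numbers the exponent is about minus eight billion, which is the sense in which "probability ~1" is claimed for the existence of some conscious source.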

3) Assuming that the GLUT was produced by chance and that the conscious being whose behavior it emulates does not and will not ever physically exist, can it still be claimed that the GLUT's existence implies the presence of consciousness somewhere?

This is the most ill-defined question of the lot, but hopefully I at least managed to render it into something comprehensible (if not easily answered!). To answer it, first we have to understand that while a GLUT may not be conscious itself, it certainly encodes a conscious process, i.e. you could theoretically specify a conscious process embedded in a physical medium (say, a brain, or maybe a computer) that, when run with a certain input, will produce the exact output that the GLUT produces given that input. (This is not a trivial statement, by the way; the set of GLUTs that fulfill this condition is tiny relative to the space of possible GLUTs.)

However, suppose we don't have that process available to us, only the GLUT itself. Then the question above is simply asking, "In what sense can the process encoded by the GLUT be said to 'exist'?" This is still a hard question, but it has one major advantage over the old phrasing: we can draw a direct parallel between this question and the debate over mathematical realism. In other words: if you accept mathematical realism, you should also be fine with accepting that the conscious process encoded by the GLUT exists in a Platonic sense, and if you reject it, you should likewise reject the existence of said process. Now, like most debates in philosophy, this one is unsettled--but at least now you know that your answer to the original question regarding GLUTs concretely depends on your answer to another question--namely, "Do you accept mathematical realism?", rather than nebulously floating out there in the void. (Note that since I consider myself a mathematical realist, I would answer "yes" to both questions. Your answer may differ.)

4) Under standard human values (e.g. the murder of a conscious being is generally considered immoral, etc.), should the destruction of a GLUT be considered immoral?

In my opinion, this question is actually fairly simple to answer. Recall that a GLUT, while not being conscious itself, encodes a conscious process. This means (among other things) that we could theoretically use the information contained in the look-up table to construct that conscious being, even if that being never existed beforehand. Since destroying the GLUT would remove our ability to construct said being, we can clearly classify it as an immoral act (though whether it should be considered as immoral as the murder of a preexisting conscious being is still up for debate).

It seems to me that the four questions listed above suffice to describe all of the disguised queries the original question ("Can a GLUT be considered conscious?") stood for. Assuming I answered each of them in a sufficiently thorough manner, the original question should be resolved as well--and ideally, there shouldn't even be the feeling that there's a question left. Of course, that's if I did this thing correctly.

So, did I miss anything?

Comment author: 04 July 2016 10:14:40AM · 0 points

Hmm... is that true?
The only difference is that they were conscious at different times.
Also, creating a GLUT out of a person is an extremely immoral thing to do.