
Blueberry comments on Nonsentient Optimizers - Less Wrong

Post author: Eliezer_Yudkowsky 27 December 2008 02:32AM


Comment author: Blueberry 03 December 2009 03:45:17AM 1 point

I'm not understanding this (please direct me to the appropriate posts if I'm missing something obvious), but doesn't this contradict the Turing Test? Any FAI is an entity that we can have a conversation with, which by the Turing Test makes it conscious.

So if we're rejecting the Turing Test, why not just believe in zombies? The essence of the intuition captured by the Turing Test is that there is no "ghost in the machine", that everything we need to know about consciousness is captured in an interactive conversation. So if we're rejecting that, how do we avoid the conclusion that there is some extra "soul" that makes entities conscious, which we could just delete... which leads to zombies?

Comment author: Nick_Tarleton 03 December 2009 04:10:31AM * 3 points

> So if we're rejecting the Turing Test, why not just believe in zombies?

Zombies — of the sort that should be rejected — are structurally identical, not just behaviorally.

(Note that, whether or not zombies can hold conversations, it seems clear that behavior underdetermines the content of consciousness — that I could have different thoughts and different subjective experiences, but behave the same in all or almost all situations.)

(EDIT: Above said "content of conversations" not "content of consciousness" and made no sense.)

Comment author: Blueberry 03 December 2009 11:42:27PM 1 point

> Zombies — of the sort that should be rejected — are structurally identical, not just behaviorally.

I understand that usually when we talk about zombies, we mean entities structurally identical to human beings. But it's a question of what level we put the "black box" at. For instance, if we replace every neuron of a human being with a behaviorally identical piece of silicon, we don't get a zombie. If we instead replace larger functional units within the brain with "black boxes" that return the same output given the same input, say, replacing the amygdala with an "emotion chip", do we necessarily get something conscious? Is this a zombie of the sort that should be rejected?
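
To make the replacement scenario concrete, here is a toy Python sketch (the component names and return values are hypothetical, purely for illustration): a substitute that preserves the input/output mapping exactly, so nothing downstream can tell the difference.

    class BiologicalAmygdala:
        def react(self, stimulus):
            # Stand-in for whatever the real tissue computes.
            return "fear" if stimulus == "snake" else "calm"

    class EmotionChip:
        def react(self, stimulus):
            # By construction, the same output for every input.
            return "fear" if stimulus == "snake" else "calm"

    def downstream_behavior(component, stimulus):
        # The rest of the system sees only outputs, so swapping
        # implementations cannot change its behavior.
        return component.react(stimulus)

    for unit in (BiologicalAmygdala(), EmotionChip()):
        assert downstream_behavior(unit, "snake") == "fear"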

A non-conscious entity that can pass the Turing Test, if one existed, wouldn't even be behaviorally identical to any given human, so it wouldn't exactly be a zombie. But it does seem to violate a sort of Generalized Anti-Zombie Principle. I'm having trouble with this idea. Are there articles about this elsewhere on this site? EY's wholesale rejection of the Turing Test seems very odd to me.

> (Note that, whether or not zombies can hold conversations, it seems clear that behavior underdetermines the content of consciousness — that I could have different thoughts and different subjective experiences, but behave the same in all or almost all situations.)

This doesn't seem at all clear. In all situations? Suppose we have two entities, A and Z, and we run them through a battery of tests, resetting them to base states after each one. Every imaginable test we run, whatever we say to them, whatever strange situation we put them in, they respond in identical ways. I can see that their code might differ in a few lines. But for one to be conscious and the other not? To say that consciousness can literally have zero effect on behavior seems very close to admitting the existence of zombies.
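
Concretely, the test battery I have in mind looks something like this (a toy Python sketch; the agents and their responses are hypothetical, and the point is only that such a check certifies behavior while saying nothing about what, if anything, either agent experiences):

    class AgentA:
        def respond(self, prompt):
            # One implementation.
            return prompt.upper()

    class AgentZ:
        def respond(self, prompt):
            # Different internals, identical observable behavior.
            return "".join(ch.upper() for ch in prompt)

    def behaviorally_identical(make_a, make_z, tests):
        # make_a / make_z construct fresh instances: the "reset to base state".
        for test in tests:
            a, z = make_a(), make_z()
            if a.respond(test) != z.respond(test):
                return False
        return True

    print(behaviorally_identical(AgentA, AgentZ, ["hello", "are you conscious?"]))
    # Prints True, yet nothing here inspects either agent's inner life.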

Maybe I should post in the Open Thread, or start a new post on this.

Comment author: pengvado 05 December 2009 01:15:14PM * 2 points

> Are there articles about this elsewhere on this site?

The Generalized Anti-Zombie Principle vs The Giant Lookup Table
Summary of the assertions made therein: If a given black box passes the Turing Test, that is very good evidence that there were conscious humans somewhere in the causal chain that led to the responses you're judging. However, that consciousness is not necessarily in the box.
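
A minimal sketch of the GLUT idea in Python (the table entries here are hypothetical, purely for illustration): at runtime the box only performs a dictionary lookup, and whatever understanding its replies display was contributed upstream by the humans who authored the entries.

    # The "box" is just a lookup table from conversation history to reply.
    HUMAN_AUTHORED_REPLIES = {
        ("Hello.",): "Hi there. How are you?",
        ("Hello.", "Hi there. How are you?", "Fine. Are you conscious?"):
            "That's a hard question. What would count as evidence?",
    }

    def glut_respond(conversation_so_far):
        # No computation resembling thought happens at runtime; any apparent
        # understanding was put into the table by its (conscious) authors.
        return HUMAN_AUTHORED_REPLIES.get(tuple(conversation_so_far),
                                          "I don't follow.")

    print(glut_respond(["Hello."]))  # "Hi there. How are you?"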

Comment author: Blueberry 06 December 2009 08:25:58AM 0 points

Exactly what I was looking for! Thank you so much.