thomblake comments on Two straw men fighting - Less Wrong

2 Post author: JanetK 09 August 2010 08:53AM


Comment author: thomblake 09 August 2010 03:18:50PM 1 point [-]

No, but surely some chunks of similarly-transparent code would appear in an algorithm for making decisions. And since I can read that code and know what it outputs without executing it, surely a superintelligence could read more complex code and know what it outputs without executing it. So it is patently false that in principle the AI will not be able to know the output of the algorithm without executing it.
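For instance, a trivially transparent decision rule might look like the following (a hypothetical Python sketch; the function and threshold are illustrative, not drawn from any actual AI):

```python
def accept_offer(offer, reserve_price=100):
    """A trivially transparent decision rule: accept any offer
    at or above the reserve price."""
    return offer >= reserve_price

# By inspection alone, without running the function, a reader knows:
# accept_offer(150) is True, because 150 >= 100,
# and accept_offer(50) is False, because 50 < 100.
```

The argument is that a sufficiently powerful reasoner would stand to more complex code as we stand to this snippet.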

Comment author: Unknowns 09 August 2010 03:27:51PM 1 point [-]

Any chunk of transparent code won't be the code for making an intelligent decision. And the decision algorithm as a whole won't be transparent to an intelligence of the same level, but perhaps only to something still more intelligent.

Comment author: thomblake 09 August 2010 03:41:40PM 0 points [-]

Any chunk of transparent code won't be the code for making an intelligent decision.

Do you have a proof of this statement? If so, I will accept that it is not in principle possible for an AI to predict what its decision algorithm will return without executing it.

Of course, logical proof isn't entirely necessary when you're dealing with Bayesians, so I'd also like to see any evidence that you have that favors this statement, even if it doesn't add up to a proof.

Comment author: Unknowns 09 August 2010 03:54:53PM *  0 points [-]

It's not possible to prove the statement because we have no mathematical definition of intelligence.

Eliezer claims that it is possible to create a superintelligent AI which is not conscious. I disagree with this because it is basically saying that zombies are possible. True, he would say that he only believes that human zombies are impossible, not that zombie intelligences in general are impossible. But in that case he has no idea whatsoever what consciousness corresponds to in the physical world, and in fact has no reason not to accept dualism.

My position is more consistent: all zombies are impossible, and any intelligent being will be conscious. So it will also have the subjective experience of making decisions. But it is essential to this experience that you don't know what you're going to do before you do it; when you experience knowing what you're going to do, you experience deciding to do it.

Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions. And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.

You may still disagree, but please note that this is entirely consistent with everything you and wedrifid have argued, so his claim that I have been refuted is invalid.

Comment author: Randaly 09 August 2010 04:52:35PM 1 point [-]

As I recall, Eliezer's definition of consciousness is borrowed from GEB: essentially, it's when the mind examines itself. That has very real physical consequences, so the idea of a non-conscious AGI doesn't support the idea of zombies, which require consciousness to have no physical effects.

Comment author: Unknowns 09 August 2010 04:57:54PM 0 points [-]

Any AGI would be able to examine itself, so if that is the definition of consciousness, every intelligence would be conscious. But Eliezer denies the latter, so he also implicitly denies that definition of consciousness.

Comment author: JoshuaZ 09 August 2010 05:02:20PM 0 points [-]

Any AGI would be able to examine itself, so if that is the definition of consciousness, every intelligence would be conscious. But Eliezer denies the latter, so he also implicitly denies that definition of consciousness.

I'm not sure I am parsing correctly what you've written. It may rest with your use of the word "intelligence": how are you defining that term?

Comment author: Unknowns 09 August 2010 05:03:31PM *  0 points [-]

You could replace it with "AI." Any AI can examine itself, so any AI will be conscious, if consciousness is, or results from, self-examination. I agree with this, but Eliezer does not.

Comment author: LucasSloan 10 August 2010 08:49:16AM *  0 points [-]

we have no mathematical definition of intelligence.

Yes we do: the ability to apply optimization pressure across a wide variety of environments. The platonic ideal of this is AIXI.

Comment author: torekp 10 August 2010 01:43:46AM 0 points [-]

Eliezer claims that it is possible to create a superintelligent AI which is not conscious.

Can you please provide a link?

Comment author: Eliezer_Yudkowsky 10 August 2010 03:04:25AM 0 points [-]

Comment author: torekp 22 August 2010 03:35:56PM 0 points [-]

Thank you. I agree with Eliezer for reasons touched on in my comments to simplicio's Consciousness of simulations & uploads thread.

Comment author: thomblake 09 August 2010 04:07:24PM 0 points [-]

My position is more consistent: all zombies are impossible, and any intelligent being will be conscious. So it will also have the subjective experience of making decisions. But it is essential to this experience that you don't know what you're going to do before you do it; when you experience knowing what you're going to do, you experience deciding to do it.

Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions. And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.

I don't have any problem granting that "any intelligent being will be conscious", nor that "It will have the subjective experience of making decisions", though that might just be because I don't have a formal specification of either of those - we might still be talking past each other there.

But it is essential to this experience that you don't know what you're going to do before you do it

I don't grant this. Can you elaborate?

when you experience knowing what you're going to do, you experience deciding to do it.

I'm not sure that's true, or in what sense it's true. I know that if someone offered me a million dollars for my shoes, I would happily sell them my shoes. Coming to that realization didn't give me the subjective feeling of deciding to sell something to someone, as compared to my recollection of past transactions.

Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions.

Okay, that follows from the previous claim.

And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.

If I were moved to accept your previous claim, I would now be skeptical of the claim that "a block of code will not cause it to feel the sensation of deciding". Especially since we've already shown that some blocks of code would be capable of predicting some decision algorithms.

that block of code must be incapable of predicting its decision algorithm.

This follows, but I draw the inference in the opposite direction, as noted above.
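To make the "some blocks of code can predict some decision algorithms" point concrete, here is a hypothetical Python sketch (the function names and the toy decision algorithm are mine, purely for illustration): a predictor that determines what a sufficiently transparent function returns by inspecting its source, without ever executing it.

```python
import ast

def predict_return_value(source):
    """Predict what a function returns by inspecting its AST,
    without ever calling it. Only handles the trivially
    transparent case where the function returns a literal."""
    func = ast.parse(source).body[0]
    last = func.body[-1]
    if isinstance(last, ast.Return) and isinstance(last.value, ast.Constant):
        return last.value.value
    raise ValueError("decision algorithm is not transparent enough")

decision_algorithm = '''
def decide():
    return "sell the shoes"
'''

prediction = predict_return_value(decision_algorithm)
print(prediction)  # the predicted value, obtained without decide() ever running
```

Of course, this only works for a decision algorithm far simpler than the predictor; the open question in the thread is whether the gap can ever close when the predictor and the decision algorithm are the same system.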

Comment author: Unknowns 09 August 2010 04:19:59PM 0 points [-]

I would distinguish between "choosing" and "deciding". When we say "I have some decisions to make," we also mean to say that we don't know yet what we're going to do.

On the other hand, it is sometimes possible for you to have several options open to you, and you already know which one you will "choose". Your example of the shoes and the million dollars is one such case; you could choose not to take the million dollars, but you would not, and you know this in advance.

Given this distinction, if you have a decision to make, as soon as you know what you will or would do, you will experience making a decision. For example, presumably there is some amount of money ($5? $20? $50? $100? $300?) that could be offered for your shoes such that you are unclear whether you should take the offer. As soon as you know what you would do, you will feel yourself "deciding" that "if I was offered this amount, I would take it." It isn't a decision to do something concretely, but it is still a decision.