thomblake comments on Two straw men fighting - Less Wrong

Post author: JanetK 09 August 2010 08:53AM




Comment author: thomblake 09 August 2010 04:07:24PM 0 points

My position is more consistent: all zombies are impossible, and any intelligent being will be conscious. So it will also have the subjective experience of making decisions. But it is essential to this experience that you don't know what you're going to do before you do it; when you experience knowing what you're going to do, you experience deciding to do it.

Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions. And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.

I don't have any problem granting that "any intelligent being will be conscious", nor that "It will have the subjective experience of making decisions", though that might just be because I don't have a formal specification of either of those - we might still be talking past each other there.

But it is essential to this experience that you don't know what you're going to do before you do it

I don't grant this. Can you elaborate?

when you experience knowing what you're going to do, you experience deciding to do it.

I'm not sure that's true, or in what sense it's true. I know that if someone offered me a million dollars for my shoes, I would happily sell them. Coming to that realization didn't feel like the subjective experience of deciding to sell something, at least not as compared with my recollection of past transactions.

Therefore any AI that runs code capable of predicting its decisions, will at that very time subjectively experience making those decisions.

Okay, that follows from the previous claim.

And on the other hand, given that a block of code will not cause it to feel the sensation of deciding, that block of code must be incapable of predicting its decision algorithm.

If I were moved to accept your previous claim, I would now be skeptical of the claim that "a block of code will not cause it to feel the sensation of deciding" - especially since we've already established that some blocks of code are capable of predicting some decision algorithms.

that block of code must be incapable of predicting its decision algorithm.

This follows, but I draw the inference in the opposite direction, as noted above.

Comment author: Unknowns 09 August 2010 04:19:59PM 0 points

I would distinguish between "choosing" and "deciding". When we say "I have some decisions to make," we mean, in part, that we don't yet know what we're going to do.

On the other hand, it is sometimes possible for you to have several options open to you, and you already know which one you will "choose". Your example of the shoes and the million dollars is one such case; you could choose not to take the million dollars, but you would not, and you know this in advance.

Given this distinction, if you have a decision to make, then as soon as you know what you will or would do, you will experience making a decision. For example, presumably there is some amount of money ($5? $20? $50? $100? $300?) that could be offered for your shoes such that you are unsure whether you should take the offer. As soon as you know what you would do, you will feel yourself "deciding" that "if I were offered this amount, I would take it." It isn't a decision to do something concrete, but it is still a decision.