Randaly comments on Two straw men fighting - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It's not possible to prove the statement because we have no mathematical definition of intelligence.
Eliezer claims that it is possible to create a superintelligent AI which is not conscious. I disagree, because this amounts to saying that zombies are possible. True, he would say that he only believes human zombies are impossible, not that zombie intelligences in general are impossible. But in that case he has no idea whatsoever what consciousness corresponds to in the physical world, and in fact has no reason not to accept dualism.
My position is more consistent: all zombies are impossible, and any intelligent being will be conscious. So it will also have the subjective experience of making decisions. But it is essential to this experience that you don't know what you're going to do before you do it; when you experience knowing what you're going to do, you experience deciding to do it.
Therefore any AI that runs code capable of predicting its decisions will, at that very time, subjectively experience making those decisions. Conversely, if running a block of code does not cause the AI to feel the sensation of deciding, then that block of code must be incapable of predicting the output of its decision algorithm.
You may still disagree, but please note that this is entirely consistent with everything you and wedrifid have argued, so his claim that I have been refuted is invalid.
As I recall, Eliezer's definition of consciousness is borrowed from GEB: essentially, it is the mind examining itself. That has very real physical consequences, so the idea of a non-conscious AGI doesn't support the idea of zombies, which require consciousness to have no physical effects.
Any AGI would be able to examine itself, so if that is the definition of consciousness, every intelligence would be conscious. But Eliezer denies the latter, so he also implicitly denies that definition of consciousness.
I'm not sure I am parsing correctly what you've written. It may rest with your use of the word "intelligence"- how are you defining that term?
You could replace it with "AI." Any AI can examine itself, so any AI will be conscious, if consciousness is, or results from, self-examination. I agree with this, but Eliezer does not.