It's too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle's
Perhaps I'm confused, but isn't Searle the guy who came up with that stupid Chinese Room thing? I don't see at all how that's remotely parallel to LW philosophy, or why it would be a bad thing to be ideologically opposed to his approach to AI. (He seems to think it's impossible to have AI, after all, and argues from the bottom line for that position.)
The whole thing: it's the Chinese Room all over again, an intuition pump that begs the very question it purports to answer. (Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way the word "understanding" is fudged in the Chinese Room argument, but it's basically the same move.)
I suppose you could say that there's a grudging partial agreement with your point number two: that "the brain causes qualia". The rest of what you listed, however, is drivel, as is easy to see if you substitute some other term besides "qualia", e.g.:
It doesn't matter what term you plug into this in place of "qualia" or "free will". It could be "love" or "charity" or "interest in death metal", and it's still not saying anything more profound than, "I don't think machines are as good as real people, so there!"
Or more precisely: "When I think of people with X it makes me feel something special that I don't feel when I think of machines with X, therefore there must be some special quality that separates people from machines, making machine X 'just a simulation'." This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.
Specifically, the thing that drives these arguments is our inbuilt machinery for classifying things as mind-having or not-mind-having, for purposes of prediction-making. But the feeling we get that a thing is mind-having or not-mind-having is based on what was useful evolutionarily, not on what the actual truth is. Searlian (Surly?) arguments are thus in exactly the same camp as any other faith-based argument: elevating one's feelings to Truth, irrespective of the evidence against them.
Huh? This isn't an argument for the existence of qualia -- it's an attempt to figure out whether you believe in qualia or not. So I take it you disagree with step one, that qualia exist? Do you think you are a philosophical zombie?
I do think essentially the same argument goes through for free will, so I don't find your reductio at all convincing. There's no reason, however, to believe that "love" or "charity" is a basic fact of physics, since it's fairly obvious how to reduce these. Do you think you can reduce qualia?
I don't understand why you think this is a claim about my feelings.