I doubt we can reductively analyse "thinking"
If we can't even get a start on that, how did we get a start on building AI?
I'm not sure I follow you. Why would you need to analyse "thinking" in order to "get a start on building AI"? Presumably it's enough to systematize the various computational algorithms that lead to the behavioural/functional outputs associated with intelligent thought. Whether it's really thought, or mere computation, that occurs inside the black box is presumably not any concern of computer scientists!
What exactly do you mean by "mean"?
I couldn't help one who lacked the concept. But assuming that you possess the concept, and just need some help in situating it in relation to your other concepts, perhaps the following might help...
Our thoughts (and, derivatively, our assertions) have subject-matters. They are about things. We might make claims about these things, e.g. claiming that certain properties go together (or not). When I write, "Grass is green", I mean that grass is green. I conjure in my mind's eye a mental image of blades of grass, and their colour, in the image, is green. So, I think to myself, the world is like that.
Could a zombie do all this? They would go "through the motions", so to speak, but they wouldn't actually see any mental image of green grass in their mind's eye, so they could not really intend that their words convey that the world is "like that". Insofar as there are no "lights on inside", it would seem that they don't really intend anything; they do not have minds.
If you can understand the above two paragraphs, then it seems that you have a conception of meaning as a distinctively mental relation (e.g. that holds between thoughts and worldly objects or states of affairs), not reducible to any of the purely physical/functional states that are shared by our zombie twins.
So if we taboo "thinking" and "computing", what is it that brains are not doing?
You can probably give a functionalist analysis of computation. I doubt we can reductively analyse "thinking" (at least if you taboo away all related mentalistic terms), so this strikes me as a bedrock case (again, like "qualia") where tabooing away the term (and its cognates) simply leaves you unable to talk about the phenomenon in question.
I'm trying to figure out what work "meaning" is doing. Eliezer says brains are "thinking" meaningless gibberish. You dispute this by saying,
... mere brains never really mean anything, any more than squiggles of ink do; any meaning we attribute to them is purely derivative from the meaning of appropriately-related thoughts ...
But what are brains thinking, if not thoughts?
And then
But the fact that the squiggles are about consciousness (or indeed anything at all) depends crucially upon the epiphenomenal aspects of our minds, in addition.
This implies that "about"-ness and "meaning" have roughly the same set of properties. But I don't understand why anyone believes anything about "meaning" (in this sense). If it doesn't appear in the causal diagram, how could we tell that we're not living in a totally meaningless universe? Let's play the Monday-Tuesday game: on Monday, our thoughts are meaningful; on Tuesday, they're not. What's different?
But what are brains thinking, if not thoughts?
Right, according to epiphenomenalists, brains aren't thinking (they may be computing, but syntax is not semantics).
If it doesn't appear in the causal diagram, how could we tell that we're not living in a totally meaningless universe?
Our thoughts are (like qualia) what we are most directly acquainted with. If we didn't have them, there would be no "we" to "tell" anything. We only need causal connections to put us in contact with the world beyond our minds.
Where in the causal diagram does "meaning" go?
Meaning doesn't seem to be a thing in the way that atoms and qualia are, so I'm doubtful that the causal criterion properly applies to it (similarly for normative properties).
(Note that it would seem rather self-defeating to claim that 'meaning' is meaningless.)
Philosophers have a tendency to name pretty much every position that you can hold by accepting/refusing various "key" propositions. Epiphenomenalism tends to be reached by people frantically trying to hold on to their treasured beliefs about the way the mind works. Then they realise they can consistently be epiphenomenalists and they feel okay because it has a name or something.
Basically, it's a consistent position (well, Eliezer seems to think it's meaningless!), and so you want to go to some effort to show that it's actually wrong. Plus it's a good exercise to think about why it's wrong.
In my experience, most philosophers are actually pretty motivated to avoid the stigma of "epiphenomenalism", and try instead to lay claim to some more obscure-but-naturalist-friendly label for their view (like "non-reductive physicalism", "anomalous monism", etc.)
FWIW, my old post 'Zombie Rationality' explores what I think the epiphenomenalist should say about the worry that "the upper-tier brain must be thinking meaningless gibberish when the upper-tier lips [talk about consciousness]".
One point to flag is that from an epiphenomenalist's perspective, mere brains never really mean anything, any more than squiggles of ink do; any meaning we attribute to them is purely derivative from the meaning of appropriately-related thoughts (which, on this view, essentially involve qualia).
Another thing to flag is that epiphenomenalism needn't imply that our thoughts are causally irrelevant, but merely that their experiential component is. It'd be a mistake to identify oneself with just one's qualia (a view Eliezer seems to attribute to the epiphenomenalist). It's true that our qualia don't write philosophy papers about consciousness. But we, embodied conscious persons, do write such papers. Of course, the causal explanation of the squiggles depends only on our physical parts. But the fact that the squiggles are about consciousness (or indeed anything at all) depends crucially upon the epiphenomenal aspects of our minds, in addition.
I assume the people arguing that "consciousness is caused by neurons" mean something similar to "the forest is caused by trees" and Eliezer is simply straw-manning/misinterpreting it.
Nope. Epiphenomenalism is motivated by the thought that you could (conceivably, in a world with different laws from ours) have the same bundles of neurons without any consciousness. You couldn't conceivably have the same bundles of trees not be a forest.
Did this ever happen? (If so, updating the OP with links would be very helpful.)
It sounds like "thinking" and "qualia" are getting the special privilege of being irreducible, even though there have been plenty of attempts to reduce them, and these attempts have had at least some success. Why can't I pick any concept and declare it a bedrock case? Is my cat fuzzy? Well, you could talk about how she is covered with soft fur, but it's possible to imagine something fuzzy and not covered with fur, or something covered with fur but not fuzzy. Because it's possible to imagine these things, clearly fuzziness must be non-physical. It's maybe harder to imagine a non-fuzzy cat than to imagine a non-thinking person, but that's just because fuzziness doesn't have the same aura of the mysterious that thinking and experiencing do.
Erm, this is just poor reasoning. The conclusion that follows from your premises is that the properties of fuzziness and being-covered-in-fur are distinct, but that doesn't yet make fuzziness non-physical, since there are obviously other physical properties besides being-covered-in-fur that it might reduce to. The simple proof: you can't hold ALL the other physical facts fixed and yet change the fuzziness facts. Any world physically identical to ours is a world in which your cat is still fuzzy. (There are no fuzz-zombies.) This is an obvious conceptual truth.
So, in short, the reason why you can't just "pick any concept and declare it a bedrock case" is that competent conceptual analysis would soon expose it to be a mistake.