The overall message of the post sounds reasonable to me, but doesn't it apply to Pearl's causality as well? If the world is built from computation rather than causal arrows, how do you get to causal arrows?
If the world is built from computation rather than causal arrows, how do you get to causal arrows?
I had to do a bit of searching, but it seems that Eliezer (or at least Eliezer_2008) considers causal arrows to be more fundamental than computations:
And, if we have "therefore" back, if we have "cause" and "effect" back—and science would be somewhat forlorn without them—then we can hope to retrieve the concept of "computation". We are not forced to grind up reality into disconnected configurations; there can be glue between them. We can require the amplitude relations between connected volumes of configuration space, to carry out some kind of timeless computation, before we decide that it contains the timeless Now of a conscious mind.
So here's my understanding of Eliezer_2008's guess of how all the reductions would work out: mind reduces to computation which reduces to causal arrows which reduces to some sort of similarity relationship between configurations, and the universe fundamentally is a (timeless) set of configurations and their amplitudes.
Interestingly, Pearl himself doesn't seem nearly as ambitious about how far to push the reduction of "causality" and explains that his theory
takes the physical notions of “mechanisms”, “variables”, “measurements” and “interventions” as the basic primitives.
which bears almost no resemblance to Eliezer's idea of reducing causality to similarity.
I still don't understand what Barbour's theory actually says, or whether it says anything at all. It seems to be one of Eliezer's more bizarre endorsements.
Does this explain it for you, or are you looking for something more detailed?
Barbour is speculating that if we solve the Wheeler-DeWitt equation, we'll get a single probability distribution over the configuration space of the universe, and all of our experiences can be explained using this distribution alone. Specifically, we don't need a probability distribution for each instant of time, like in standard QM.
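To put the contrast schematically (a rough sketch in standard notation, not anything specific to Barbour's own presentation):

```latex
% Standard QM: the Schroedinger equation yields a wavefunction, and hence a
% probability distribution, for every instant t.
\[ i\hbar\,\frac{\partial}{\partial t}\Psi(x,t) = \hat{H}\,\Psi(x,t) \]

% Wheeler-DeWitt: a constraint equation with no time parameter at all;
% \Psi is a single static amplitude over the configuration space of the universe.
\[ \hat{H}\,\Psi[\text{configuration}] = 0 \]
```

The absence of the time derivative in the second equation is the formal counterpart of "we don't need a probability distribution for each instant of time."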
Yeah, that approach seems overcomplicated to me. We shouldn't ask whether a chunk of matter or information "contains" a conscious mind; we should ask how much it contributes to the experiences of a conscious mind. The most obvious answer is that the contribution depends on how easy it is to compute the mind given that chunk of matter or information, or vice versa. Of course I'm handwaving a lot here, but having actual causal arrows in the territory doesn't seem to be required; you just need laws of physics that are simple to compute.
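To put the handwaving in symbols, one possible gloss (purely illustrative; nothing above depends on this particular choice) is conditional description length:

```latex
% Illustrative gloss only: read "how easy it is to compute the mind given
% that chunk" as conditional Kolmogorov complexity K(mind | chunk), the
% length of the shortest program that outputs a description of the mind
% when given the chunk as input.
\[ \text{contribution}(\text{chunk} \rightarrow \text{mind}) \;\propto\; 2^{-K(\text{mind}\,\mid\,\text{chunk})} \]
```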
If there was a repository of philosophical work along those lines - not concerned with defending basic ideas like anti-zombieism, but with accepting those basic ideas and moving on to challenge more difficult quests of naturalism and cognitive reductionism - then that, I might well be interested in reading. But I don't know who, besides a few heroes, would be able to compile such a repository - who else would see a modal logic as an obvious bounce-off-the-mystery.
One of the facts of modern philosophy is that zombieism has not been resolved in a satisfactory manner. You can't simply claim that one idea is the most accurate one and run with it, because then you're using an intermediate argument of dubious provenance. You could avoid the question altogether, particularly in AI design: If two programs have identical outputs across the entire range of meaningful input, then it cannot be the case that one is self-aware and the other is not.
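As a toy illustration of "identical outputs across the entire range of meaningful input" (my own example, nothing more than two interchangeable implementations):

```python
# Two programs whose observable behaviour is identical over their whole
# input range, even though their internals differ. Nothing in the
# input/output mapping can distinguish them.

def parity_by_bit(n: int) -> str:
    """Report parity by inspecting the lowest bit."""
    return "odd" if n & 1 else "even"

def parity_by_division(n: int) -> str:
    """Report parity by modular arithmetic."""
    return "odd" if n % 2 != 0 else "even"

# Extensional equivalence over a (finite slice of the) input range:
assert all(parity_by_bit(n) == parity_by_division(n) for n in range(-10_000, 10_000))
```

The two functions differ internally but agree on every input; the claim above is that self-awareness cannot be a property that distinguishes programs related in this way.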
It certainly has been resolved. At least to the degree that anyone with a lick of sense can look at the pro-zombie arguments and say 'that is blatantly unphysical nonsense.' We can talk about consciousness. The atoms that make up my fingers can interact with the atoms that make up my nervous system, which can in turn interact with the atoms that make up my brain, and so on, and this unbroken causal chain makes me talk about feeling conscious inside, which I do. We may not know how it works computationally, but we do know that it's something in the brain that's doing it. Or, at least, something is making us talk about consciousness. It is possible, I suppose, that the thing that makes us conscious is different from the thing that makes us talk about consciousness -- but there's certainly no evidence for it, and it's a damned silly idea in any case. So, as far as the naive 'zombies physically identical to humans' idea goes, if you don't consider that idea shot down, decapitated, plucked, gutted, and served for Christmas dinner, then that tells you more about the flaws in your criteria for drawing a conclusion than anything else.
There are some related questions worth exploring - for instance, once we figure out how consciousness works on a mechanical level, we can answer the question of whether it's possible to build a piece of software that reasonably impersonates a human being without having subjective experience. That's an interesting question. But the classical philosophical zombies are, frankly, stupid.
It is possible, I suppose, that the thing that makes us conscious is different from the thing that makes us talk about consciousness -- but there's certainly no evidence for it, and it's a damned silly idea in any case.
True, but it seems to me almost trivially so: explaining why we talk about consciousness makes a theory positing that we "are conscious" otiose. What other evidence is there? What other evidence could there be? The profession of belief in mysterious "raw experience" merely expresses a cognitive bias, the acceptance of which should be a deep embarrassment to exponents who call themselves rationalist.
The term "self-awareness," however, is quite misleading. I can have awareness of my inner states--some knowledge about what I'm thinking--without having mysterious raw experience. "Self-awareness" here is used by raw-experience believers to mean something special: knowledge of "what it is like to be me." (Thomas Nagel.) The ambiguous usage of self-awareness obfuscates the problem, making belief in "raw experience" seem reasonable, when it's really a believed (and beloved) superstition.
Now, given an explanation of how subjective experience occurs, determine whether a given physical entity has subjective experience. What would be different in your observations if I did not have subjective experiences?
Give me that explanation, and I'll tell you. It's clearly some kind of computational / information process, but it's not clear exactly what's going on there. It has to serve a survival purpose, or else we wouldn't have it. We'll probably be able to conduct experiments and find out down the line, but it's tough right now. I also suspect that subjective experience isn't a sharp cutoff. It's probably a gradient of depth of insight that extends to organisms with simpler nervous systems and extends, at least in principle, past humans. But that's speculation on my part.
It isn't an information process; it's a chemical process - because information can't trigger a neuron.
I see no reason why subjective experience needs to have had a survival purpose in the past; isn't it also possible that self-awareness was a contra-survival byproduct of some other function, which was pro-survival in the distant past? I don't think that sentience is the appendix of the mind, but "because we have it" isn't in the list of evidence against that hypothesis.
Suppose that we figured out the encoding of the sensory and motor nerves, such that we could interpret and duplicate them: then we put a human brain in a box, wired it to false nerves, and provided it with an internally consistent set of sensory inputs that reacted to the motor outputs. I see no reason why that brain would have less subjective experience in that state than normal. (If you do, then disagree with me on this point, and it becomes open to verification.)
Take the other example: a computer which can pass the Turing test is wired into a human body, taking the sensory nerve inputs as its inputs and the motor nerve outputs as its outputs, and other humans cannot tell, without inspecting inside the skull, that it is an artificial computer. Depending on your position on zombieism, this entity may or may not have subjective experience.
Now, take the zombie computer and hook it up to the false nerve inputs. If it didn't have subjective experience in a real body, then it doesn't have it now; if so, why does a human brain have subjective experience, given that it takes the same inputs and provides the same outputs? If it did have subjective experience in a real body, but doesn't have it now, why the change, since nothing within the entity being tested is different? If it still has subjective experience, then at least one computer simulation of a human interacting with a computer simulation of a world has subjective experience; why would that not be the case for all such simulations?
Today's post, Against Modal Logics, was originally published on 27 August 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Dreams of AI Design, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.