Qualia are physical phenomena. I dearly wish that this statement were uncontroversial. However, if you don't agree with it, then you can reject the simulation argument on far simpler grounds: if experiencing qualia requires a "nonphysical" "soul" or whatnot (I don't know how to make sense out of either of those words), then there is no reason to suppose that any man-made simulator is imbued with a soul and therefore no reason to suppose that it would be conscious. However, provided that you agree that qualia are physical phenomena, then to suppose that they are any kind of exception to the principle I've just stated is simply bizarre magical thinking. A simulator which reasons perfectly about a human being, even including correctly determining what qualia a human would experience, does not necessarily experience those qualia, any more than a simulator that reasons perfectly about high gravity necessarily produces high gravity.
Let's replace qualia with some other phenomenon closely associated with the mind but less confusing. How about this: a poem. A really good poem, the sort of poem that we have not seen from anyone but the greatest human poets working at t...
A general principle: if you find that a certain premise is just so obvious that you can't reduce it any further, and yet other smart people, exposed to the same background sources, don't agree that it's obvious (or think that it's false)... that's a signal that you haven't yet reduced the premise well enough to rely on it in practical matters.
Qualia are very confusing for people to talk and think about, and so using your intuitions about them as a knock-down argument for any other conclusion is probably ill-advised.
Qualia are physical phenomena.
Yes, qualia are physical. But what does physical mean??
Physical means 'interacting with us in the simulation'.
To us, the simulated Jupiters are not physical -- they do not exert a real gravitational force -- because we are not there with them in the simulation. However, if you add a moon to your simulation, and simulate its motion towards the spheres, the simulated moon would experience the real, physical gravity of the spheres.
For a moment, my intuition argued that it isn't 'real' gravity because the steps of the algorithm are so arbitrary -- there are so many ways to model the motion of the moon towards the spheres, so why should any one chosen way be privileged as 'real'? But then, think of it from the point of view of the moon. However the moon's position is encoded, it must move toward the spheres, because this is hard-coded into the algorithm. From the point of view of the moon (and the spheres, incidentally), this path and this interaction are entirely immutable. This is what 'real' and 'physical' feel like.
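To make that concrete, here is a minimal sketch of such an update rule in Python; the masses, positions, and units are all made up for illustration. However the moon's state is encoded, every pass through the rule pulls it toward the spheres.

```python
# Toy update rule: the moon is pulled toward the spheres on every step.
# All numbers are illustrative; gm stands in for G times a sphere's mass.
def step(moon_pos, moon_vel, spheres, dt=1.0, gm=1.0):
    ax = ay = 0.0
    for sx, sy in spheres:
        dx, dy = sx - moon_pos[0], sy - moon_pos[1]
        r3 = (dx * dx + dy * dy) ** 1.5
        ax += gm * dx / r3  # inverse-square acceleration toward each sphere,
        ay += gm * dy / r3  # applied unconditionally: the moon cannot opt out
    vx, vy = moon_vel[0] + ax * dt, moon_vel[1] + ay * dt
    return (moon_pos[0] + vx * dt, moon_pos[1] + vy * dt), (vx, vy)

pos, vel = (10.0, 0.0), (0.0, 0.0)
for _ in range(3):
    pos, vel = step(pos, vel, spheres=[(0.0, 1.0), (0.0, -1.0)])
```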
Mostly, discussions of this subject always feel to me like an exercise in redirecting attention, like doing stage magic.
Some things are computations, like calculating the product of 1234 and 5678. Computations are substrate-independent.
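For instance (a sketch, in Python for concreteness), the product of 1234 and 5678 is the same whether it comes from the hardware multiplier or from repeated addition; the implementation differs, the computation does not:

```python
def multiply_builtin(a, b):
    return a * b  # delegates to the CPU's multiplier

def multiply_by_addition(a, b):
    total = 0
    for _ in range(b):  # the same computation, realized as repeated addition
        total += a
    return total

assert multiply_builtin(1234, 5678) == multiply_by_addition(1234, 5678) == 7006652
```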
I am willing to grant that what the mass of Jupiter does when I'm attracted to it is not a mere computation. (I gather that people like Tegmark would disagree, but I don't even really understand what they mean by their disagreement.)
I certainly agree that if what my brain does when I experience something is not a mere computation, then the idea of instantiating that computation on a different substrate is incoherent and certainly does not reproduce what my brain does. (I don't care whether we call that thing "consciousness" or "qualia" or "pinochle.")
From that point on, it just seems that people build elaborate rhetorical structures to shift people's intuitions to the "my brain is doing something more like calculating the product of 1234 and 5678" or the "my brain is doing something more like exerting gravitational attraction on the moons of Jupiter" side.
Personally, I'm on the "more like calcu...
This proves that we cannot be in a simulation by... assuming we are not in a simulation.
Even granting you all of your premises, everything we know about brains and qualia we know by observing them in this universe. If this universe is in fact a simulation, then what we know about brains and qualia is false. At the very most, your argument shows that we cannot create a simulation. It does not prove that we cannot be in a simulation, because we have no idea what the physics of the real world would be like.
I'm also rather unconvinced as to the truth of your premises. Even if qualia are a phenomenon of the physical brain, that doesn't mean you can't generate a near-identical phenomenon on a different substrate. In general, John Searle has some serious problems when it comes to trying to answer essentially empirical questions with a priori reasoning.
I didn't claim we cannot be in a simulation.
Then the title, "We are not living in a simulation" was rather poorly chosen.
Deductive logic allows me to reliably predict that a banjo will fall if I drop it, even if I have never before observed a falling banjo, because I start with the empirically-acquired prior that, in general, dropped objects fall.
Observation gives you, "on Earth, dropped objects fall." Deduction lets you apply that to a specific hypothetical. You don't have observation backing up the theory you advance in this article. You need, "Only biological brains can have qualia." You have, "Biological brains have qualia." Big difference.
Ultimately, it seems you're trying to prove a qualified universal negative - "Nothing can have qualia, except biological brains (or things in many respects similar)." It is unbelievably difficult to prove such empirical claims. You'd need to try really hard to make something else have qualia, and then if you failed, the most you could conclude is, "It seems unlikely that it is possible for non-biological brains to have qualia." This is what I mean when I disparage Searle; many of his claims require mountains of evidence, yet he thinks he's resolved them from his armchair.
If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.
But in the simulation, you WOULD have an objection to purple, and you would call purple "hot", right? Or is this some haywire simulation where the simulated people act normally except they're completely baffled as to why they're doing any of it? Either what you're saying is incredibly stupid, or I don't understand it. Wait, does that mean I'm in a simulation?
Run a cable from the computer to the spark plug array, and have the program fire the spark plugs in the same sequence in which it predicts synapses would fire in a biological human brain. As these firings occurred, the array would experience human-like qualia. The same qualia would not result if the simulator merely computed which plugs ought to fire without actually firing them.
This would imply that qualia are epiphenomenal. If so, then when people talk about their qualia they are reporting them accurately without the epiphenomenal qualia causing the accurate report. Where does that improbability come from?
The idea is that if you were simulated on that computer and someone asked you to describe your qualia, you could do it perfectly - despite having no qualia! This is a bit magical.
Your position within our universe is giving you a bias toward one side of a mostly symmetrical situation.
Let's throw out the terms "real" and "simulated" universe and call them the "parent" and "child" universe.
Gravity in the child universe doesn't affect the parent universe, true; creating a simulation of a black hole doesn't suck the simulating computer into the event horizon. But gravity in the parent universe doesn't affect the child universe either - if I turn my computer upside-down while playing SimCity, it doesn't make my Sims scream and start falling into the sky as their city collapses around them. So instead of saying "simulated gravity isn't real because it can't affect the real universe", we say "both the parent and child universes have gravity that only acts within their own universe, rather than affecting the other."
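A toy sketch of that point in code (numbers made up): the child universe's gravity is just a value inside the program, and no host-machine state enters the update rule, so flipping the host computer changes nothing below.

```python
# Gravity of the child universe: a constant inside the program. Nothing
# about the host machine (its orientation, its own gravity) feeds into it.
CHILD_GRAVITY = (0.0, -9.8)  # acts only on simulated objects

def fall_step(pos, vel, dt=0.1):
    """One step of free fall in the child universe."""
    vx = vel[0] + CHILD_GRAVITY[0] * dt
    vy = vel[1] + CHILD_GRAVITY[1] * dt
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)
```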
Likewise, when you say that you can't point to the location of a gravitational force within the simulation so it must be "nowhere" - balderdash. The gravitational force that's holding Sim #13335 to the ground in my SimCity game is happening on Oak Street, right between the park and the co...
The reason we think intelligence is substrate-independent is that the properties we're interested in (the ones we define to constitute "intelligence") do not make reference to any substrate. Can a simulation of a brain design an aeroplane? Yes. Can a simulation of a brain prove Pythagoras' theorem? Yes. Can a simulation of a brain plan strategically in the presence of uncertainty? Yes. These are the properties we mean when we say "intelligence". Under a different definition of "intelligence" that stipulates "composed of neurons" or "looks grey and mushy", intelligence is not substrate-independent. It's just a word game.
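One way to render that in code (a hypothetical sketch; the method names are mine, not anyone's actual API): "intelligence" as a bundle of capabilities, none of which mentions a substrate.

```python
from typing import Protocol

class Intelligence(Protocol):
    """Capabilities only; nothing here says what the implementer is made of."""
    def design_airplane(self, requirements: dict) -> object: ...
    def prove_theorem(self, statement: str) -> str: ...
    def plan_under_uncertainty(self, goals: list, beliefs: dict) -> list: ...

# A rival definition that checked, say, made_of_neurons(x) would make
# intelligence substrate-dependent by construction -- that is the word game.
```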
If I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that "embodying the same computation" is somehow a privileged concept in this regard -- that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed -- what is your justification for believing this?
The two apples in the head of your strawman have the same cardinality as the two hemispheres of your brain, but what ...
Community: clarifications like this are vital, and to be encouraged. Please don't downvote them.
A human brain is a computer. The brain of a living human differs from that of a dead one in that it's running a program. If the universe is as it seems, running a program on a computer causes qualia.
If the simulation hypothesis is true, human brains are still programs; they're just running on different computers.
Unless you have some reason qualia would be more likely to occur when the program is run on one of those computers than the other, you have no evidence about the simulation hypothesis.
I'm confused about exactly what qualia are, but I feel reasonably sure that they are related to information processing in somewhat the same way that high gravity is related to things moving through space. Substrate independence of qualia immediately follows from this point of view without any need to assert that qualia are not physical phenomena.
The original post takes the trouble to define "simulation" but not "qualia". The argument would make much more sense to me if it offered a definition of "qualia" precise enough to determine whether a simulated being does or does not have qualia, since that's the crux of the argument. I'm not aware of a commonly accepted definition that is that precise.
As it stands, I had to make sure as I was reading it to keep in mind that I didn't know what the author meant by "qualia", and after discarding all the statements using that undefined term the remainder didn't make much sense.
If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.
If a simulation allowed life to evolve within it, and was not just an attempt to replicate something which already exists, would...
I will do a more in-depth reading when I have time, but from a quick skim:
If you're basing your argument against a simulation universe on qualia, what do you say to those of us who reject qualia?
a computer program could make those judgements (sic) without actually experiencing any of those qualia
Just as an FYI, this is the place where your intuition is blindsiding you. Intuitively, you "know" that a computer isn't experiencing anything... and that's what your entire argument rests on.
However, this "knowing" is just an assumption, and it's assuming the very thing that is the question: does it make sense to speak of a computer experiencing something?
And there is no reason apart from that intuition/assumption, to treat this as a different question from, "does it make sense to speak of a brain experiencing something?".
IOW, substitute "brain" for every use of "computer" or "simulation", and make the same assertions. "The brain is just calculating what feelings and qualia it should have, not really experiencing them. After all, it is just a physical system of chemicals and electrical impulses. Clearly, it is foolish to think that it could thereby experience anything."
By making brains special, you're privileging the qualia hypothesis based on an intuitive assumption.
the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator.... [to simulate humans] the simulator must physically incorporate a human brain.
It seems like the definition of "physical" used in this article is "existing within physics" (a perfectly reasonable definition). By this definition, phenomena such as qualia, reasoning, and computation are all "physical" and are referred to as such in the article itself.
Brains are physical, and local physics seems Tu...
Maybe I've missed something in your original article or your comments, but I don't understand why you think a person in a perfect physics simulation of the universe would feel differently enough about the qualia he or she experiences to notice a difference. Qualia are probably a physical phenomenon, yes -- but if that physical phenomenon is simulated in exact detail, how can a simulated person tell the difference? Feelings about qualia are themselves qualia, and those qualia are also simulated by the physics simulator. Imagine for a moment, that some super...
Qualia or not, it seems a straightforward consequence of Tegmark's mathematical universe hypothesis that we derive significant proportions of our measure from simulators.
that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed -- what is your justification for believing this?
I think the quantum physics sequences basically argue this in a really roundabout way.
...The answer is that I know my qualia are right because they make sense. Qualia are not pure "outputs": they feed back on the rest of the world. If I step outside on a scorching summer day, then I feel hot, and this unpleasant quale causes me to go back inside, and I am able to understand and articulate this cause and effect. If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don'
reasoning about a physical phenomenon is not the same as causing a physical phenomenon. You cannot create new territory by sketching a map of it, no matter how much detail you include in your map.
But reasoning about reasoning does cause reasoning.
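A tiny illustration of the reply's point (the inner program is made up): a program that merely "reasons about" another computation by interpreting its source thereby performs that computation for real.

```python
inner_source = "result = sum(range(10))"

namespace = {}
exec(inner_source, namespace)  # the outer program models the inner one...
print(namespace["result"])     # ...and the inner sum (45) genuinely happens
```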
The aim of this post is to challenge Nick Bostrom's simulation argument by attacking the premise of substrate-independence. Quoting Bostrom in full, this premise is explained as follows:
"A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well."
I contend that this premise, in even its weakest formulation, is utterly, unsalvageably false.
Since Bostrom never precisely defines what a "simulator" is, I will apply the following working definition: a simulator is a physical device which assists a human (or posthuman) observer with deriving information about the states and behavior of a hypothetical physical system. A simulator is "perfect" if it can respond to any query about the state of any point or volume of simulated spacetime with an answer that is correct according to some formal mathematical model of the laws of physics, with both the query and the response encoded in a language that is easily comprehensible to the simulator's [post]human operator. We can now formulate the substrate independence hypothesis as follows: any perfect simulator of a conscious being experiences the same qualia as that being.
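Read as an interface, the working definition might look something like this sketch (the type and field names are illustrative assumptions of mine, not the post's or Bostrom's):

```python
from dataclasses import dataclass

@dataclass
class Query:
    region: tuple    # a point or volume of simulated spacetime, e.g. (x, y, z, t)
    observable: str  # what to report about it, e.g. "mass_density"

class PerfectSimulator:
    """Answers any Query correctly per some formal model of the laws of
    physics, encoded so a [post]human operator can easily read it."""
    def answer(self, query: Query) -> str:
        raise NotImplementedError
```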
Let us make a couple of observations about these definitions. First: if the motivation for our hypothetical post-Singularity civilization to simulate our universe is to study it, then any perfect simulator should provide them with everything necessary toward that end. Second: the substrate independence hypothesis as I have defined it is much weaker than any version which Bostrom proposes, for any device which perfectly simulates a human must necessarily be able to answer queries about the state of the human's brain, such as which synapses are firing at what time, as well as any other structural question right down to the Planck level.
Much of the ground I am about to cover has been trodden in the past by John Searle. I will explain later in this post where it is that I differ with him.
Let's consider a "hello universe" example of a perfect simulator. Suppose an essentially Newtonian universe in which matter is homogeneous at all sufficiently small scales; i.e., there are either no quanta, or quanta simply behave like billiard balls. Gravity obeys the familiar inverse-square law. The only objects in this universe are two large spheres orbiting each other. Since the two-body problem has an easy closed-form solution, it is hypothetically straightforward to program a Turing machine to act as a perfect simulator of this universe, and furthermore an ordinary present-day PC can be an adequate stand-in for a Turing machine so long as we don't ask it to make its answers precise to more decimal places than fit in memory. It would pose no difficulty to actually implement this simulator.
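For concreteness, here is a minimal sketch of such a simulator in Python, further assuming the special case of two equal-mass spheres on a circular mutual orbit (the eccentric case is also closed-form, but requires solving Kepler's equation); all numbers are illustrative:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_positions(mass, separation, t):
    """Answer a query: the (x, y) position of each sphere at time t.

    Each sphere circles the common barycenter at radius separation/2; the
    angular velocity comes from equating the inverse-square attraction
    with the centripetal force, so every answer is exact in closed form.
    """
    r = separation / 2.0
    omega = math.sqrt(G * mass / (separation ** 2 * r))
    theta = omega * t
    a = (r * math.cos(theta), r * math.sin(theta))
    return a, (-a[0], -a[1])

# e.g., two Jupiter-mass spheres a million kilometers apart, one year in:
print(sphere_positions(1.898e27, 1.0e9, 3.156e7))
```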
If you ran this simulator with Jupiter-sized spheres, it would reason perfectly about the gravitational effects of those spheres. Yet, the computer would not actually produce any more gravity than it would while powered off. You would not be sucked toward your CPU and have your body smeared evenly across its surface. In order for that to happen, the simulator would have to mimic the simulated system in physical form, not merely computational rules. That is, it would have to actually have two enormous spheres inside of it. Such a machine could still be a "simulator" in the sense that I've defined the term — but in colloquial usage, we would stop calling this a simulator and instead call it the real thing.
This observation is an instance of a general principle that ought to be very, very obvious: reasoning about a physical phenomenon is not the same as causing a physical phenomenon. You cannot create new territory by sketching a map of it, no matter how much detail you include in your map.
Qualia are physical phenomena. I dearly wish that this statement were uncontroversial. However, if you don't agree with it, then you can reject the simulation argument on far simpler grounds: if experiencing qualia requires a "nonphysical" "soul" or whatnot (I don't know how to make sense out of either of those words), then there is no reason to suppose that any man-made simulator is imbued with a soul and therefore no reason to suppose that it would be conscious. However, provided that you agree that qualia are physical phenomena, then to suppose that they are any kind of exception to the principle I've just stated is simply bizarre magical thinking. A simulator which reasons perfectly about a human being, even including correctly determining what qualia a human would experience, does not necessarily experience those qualia, any more than a simulator that reasons perfectly about high gravity necessarily produces high gravity.
Hence, the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator. A machine which walks the way a human walks must have the form of a human leg. A machine which grips the way a human grips must have the form of a human hand. And a machine which experiences the way a human experiences must have the form of a human brain.
For an example of my claim, let us suppose, as Bostrom does, that a simulation which correctly models brain activity down to the level of individual synaptic discharges is sufficient to model all the essential features of human consciousness. What does that tell us about what would be required in order to build an artificial human? Here is one design that would work: first, write a computer program, running on (sufficiently fast) conventional hardware, which correctly simulates synaptic activity in a human brain. Then, assemble millions of tiny spark plugs, one per dendrite, into the physical configuration of a human brain. Run a cable from the computer to the spark plug array, and have the program fire the spark plugs in the same sequence in which it predicts synapses would fire in a biological human brain. As these firings occurred, the array would experience human-like qualia. The same qualia would not result if the simulator merely computed which plugs ought to fire without actually firing them.
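A hedged sketch of the driver loop this design calls for (the event format and the fire_plug callback are hypothetical placeholders, not a real spark-plug API):

```python
import time

def drive_plug_array(synapse_events, fire_plug):
    """Replay predicted synaptic discharges on a physical spark-plug array.

    synapse_events: iterable of (time_s, plug_id) pairs produced by the
    brain simulation; fire_plug: callback that discharges one plug.
    """
    start = time.monotonic()
    for t, plug_id in sorted(synapse_events):
        delay = t - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)  # wait until the predicted discharge time...
        fire_plug(plug_id)     # ...then physically fire: on the post's view,
                               # the firing itself, not the computation of it,
                               # is what carries the qualia
```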
Alternatively, what if granularity right down to the Planck level turned out to be necessary? In that case, the only way to build an artificial brain would be to actually build, particle-for-particle, a brain — since due to speed-of-light limitations, no other design could possibly model everything it needed to model in real time.
I think the actual requisite granularity is probably somewhere in between. The spark plug design seems too crude to work, while Planck-level correspondence is certainly overkill: if qualia depended on Planck-level detail, then the tiniest fluctuation in our surrounding environment, such as a .01 degree change in room temperature, would have a profound impact on our mental state, which it plainly does not.
Now, from here on is where I depart from Searle if I have not already. Consider the following questions:
Here is the answer key:
The problem with Searle is his lack of any clear answer to "What do you mean?". Most technically-minded people, myself included, think of 6–8 as all meaning something similar to 4. Personally, I think of them as meaning something even weaker than 4, and have no objection to describing, e.g., Google, or even a Bayesian spam filter, as "intelligent". Searle seems to want them to mean the same as 5, or maybe some conjunction of 4 and 5. But in counterintuitive edge cases like the Chinese Room, they don't mean anything at all until you assign definitions to them.
I am not certain whether or not Searle would agree with my belief that it is possible for a Turing machine to correctly answer questions about what qualia a human is experiencing, given a complete physical description of that human. If he takes the negative position on this, then this is a serious disagreement that goes beyond semantics, but I cannot tell that he has ever committed himself to either stance.
Now, there remains a possible argument that might seem to save the simulation hypothesis even in the absence of substrate-independence. "Okay," you say, "you've persuaded me that a human-simulator built of silicon chips would not experience the same qualia as the human it simulates. But you can't tell me that it doesn't experience any qualia. For all you or I know, a lump of coal experiences qualia of some sort. So, let's say you're in fact living in a simulation implemented in silicon. You're experiencing qualia, but those qualia are all wrong compared to what you as a carbon-based bag of meat ought to be experiencing. How would you know anything is wrong? How, other than by life experience, do you know what the right qualia for a bag of meat actually are?"
The answer is that I know my qualia are right because they make sense. Qualia are not pure "outputs": they feed back on the rest of the world. If I step outside on a scorching summer day, then I feel hot, and this unpleasant quale causes me to go back inside, and I am able to understand and articulate this cause and effect. If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.
So, I think I have now established that to any extent we can be said to be living in a simulation, the simulator must physically incorporate a human brain. I have not precluded the possibility of a simulation in the vein of "The Matrix", with a brain-in-a-vat being fed artificial sensory inputs. I think this kind of simulation is indeed possible in principle. However, nothing claimed in Bostrom's simulation argument would suggest that it is at all likely.
ETA: A question that I've put to Sideways can be similarly put to many other commenters on this thread. "Similar in number", i.e., two apples, two oranges, etc., is, like "embodying the same computation", an abstract concept which can be realized by a wide variety of physical media. Yet, if I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that "embodying the same computation" is somehow a privileged concept in this regard -- that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed -- what is your justification for believing this?