The aim of this post is to challenge Nick Bostrom's simulation argument by attacking the premise of substrate-independence. Quoting Bostrom in full, this premise is explained as follows:

A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.

Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.

The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) -- just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.

Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).

I contend that this premise, in even its weakest formulation, is utterly, unsalvageably false.

Since Bostrom never precisely defines what a "simulator" is, I will apply the following working definition: a simulator is a physical device which assists a human (or posthuman) observer in deriving information about the states and behavior of a hypothetical physical system. A simulator is "perfect" if it can respond to any query about the state of any point or volume of simulated spacetime with an answer that is correct according to some formal mathematical model of the laws of physics, with both the query and the response encoded in a language that is easily comprehensible to the simulator's [post]human operator. We can now formulate the substrate-independence hypothesis as follows: any perfect simulator of a conscious being experiences the same qualia as that being.
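To fix ideas, here is the working definition rendered as a programming interface. This sketch is mine alone -- the names and signature are illustrative inventions, not anything drawn from Bostrom:

```python
# A minimal sketch of the working definition above; the interface and
# names are my own illustrative framing, not Bostrom's.
from typing import Protocol

class PerfectSimulator(Protocol):
    def query(self, region: str, time: float) -> str:
        """Return the state of the given point or volume of simulated
        spacetime at the given simulated time, correct according to some
        formal mathematical model of the laws of physics, with query and
        response encoded in a language easily comprehensible to the
        simulator's [post]human operator."""
        ...
```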

Let us make a couple of observations about these definitions. First: if the motivation for our hypothetical post-Singularity civilization to simulate our universe is to study it, then any perfect simulator should provide them with everything necessary toward that end. Second: the substrate-independence hypothesis as I have defined it is much weaker than any version which Bostrom proposes, for any device which perfectly simulates a human must necessarily be able to answer queries about the state of the human's brain, such as which synapses are firing at what time, as well as any other structural question right down to the Planck level.

Much of the ground I am about to cover has been trodden in the past by John Searle. I will explain later in this post where I differ with him.

Let's consider a "hello universe" example of a perfect simulator. Suppose an essentially Newtonian universe in which matter is homogeneous at all sufficiently small scales; i.e., there are either no quanta, or quanta simply behave like billiard balls. Gravity obeys the familiar inverse-square law. The only objects in this universe are two large spheres orbiting each other. Since the two-body problem has an easy closed-form solution, it is hypothetically straightforward to program a Turing machine to act as a perfect simulator of this universe, and furthermore an ordinary present-day PC can be an adequate stand-in for a Turing machine so long as we don't ask it to make its answers precise to more decimal places than fit in memory. It would pose no difficulty to actually implement this simulator.

If you ran this simulator with Jupiter-sized spheres, it would reason perfectly about the gravitational effects of those spheres. Yet the computer would not actually produce any more gravity than it would while powered off. You would not be sucked toward your CPU and have your body smeared evenly across its surface. In order for that to happen, the simulator would have to mimic the simulated system in physical form, not merely in computational rules. That is, it would have to actually have two enormous spheres inside of it. Such a machine could still be a "simulator" in the sense that I've defined the term -- but in colloquial usage, we would stop calling this a simulator and instead call it the real thing.

This observation is an instance of a general principle that ought to be very, very obvious: reasoning about a physical phenomenon is not the same as causing a physical phenomenon. You cannot create new territory by sketching a map of it, no matter how much detail you include in your map.

Qualia are physical phenomena. I dearly wish that this statement were uncontroversial. However, if you don't agree with it, then you can reject the simulation argument on far simpler grounds: if experiencing qualia requires a "nonphysical" "soul" or whatnot (I don't know how to make sense out of either of those words), then there is no reason to suppose that any man-made simulator is imbued with a soul and therefore no reason to suppose that it would be conscious. However, provided that you agree that qualia are physical phenomena, then to suppose that they are any kind of exception to the principle I've just stated is simply bizarre magical thinking. A simulator which reasons perfectly about a human being, even including correctly determining what qualia a human would experience, does not necessarily experience those qualia, any more than a simulator that reasons perfectly about high gravity necessarily produces high gravity.

Hence, the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator. A machine which walks the way a human walks must have the form of a human leg. A machine which grips the way a human grips must have the form of a human hand. And a machine which experiences the way a human experiences must have the form of a human brain.

For an example of my claim, let us suppose, as Bostrom does, that a simulation which correctly models brain activity down to the level of individual synaptic discharges is sufficient to model all the essential features of human consciousness. What does that tell us about what would be required in order to build an artificial human? Here is one design that would work: first, write a computer program, running on (sufficiently fast) conventional hardware, which correctly simulates synaptic activity in a human brain. Then, assemble millions of tiny spark plugs, one per dendrite, into the physical configuration of a human brain. Run a cable from the computer to the spark plug array, and have the program fire the spark plugs in the same sequence in which it predicts synaptic discharges would occur in a biological human brain. As these firings occurred, the array would experience human-like qualia. The same qualia would not result if the simulator merely computed which plugs ought to fire without actually firing them.
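To make the division of labor in this design explicit, here is a sketch of the driver loop. Every name in it is a hypothetical placeholder: step_synapses stands in for the not-yet-built synaptic-level simulation, and fire for the spark plug hardware interface:

```python
# Hypothetical driver for the spark-plug design described above. The
# names (step_synapses, fire, dt) are illustrative placeholders, not
# any real API.
def run_artificial_brain(simulation, plug_array, dt):
    while True:
        # Compute which synapses the model predicts would discharge
        # during the next time slice of simulated brain activity.
        firing_now = simulation.step_synapses(dt)
        # On my account, this physical step is what produces the qualia;
        # merely computing firing_now without executing it would not.
        for synapse_id in firing_now:
            plug_array.fire(synapse_id)
```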

Alternatively, what if granularity right down to the Planck level turned out to be necessary? In that case, the only way to build an artificial brain would be to actually build, particle for particle, a brain -- since due to speed-of-light limitations, no other design could possibly model everything it needed to model in real time.
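For a sense of the scale that Planck-level granularity would demand, here is a back-of-envelope count of the Planck volumes in a single brain, using my own rough figures:

```python
# Rough arithmetic (approximate figures of my own choosing) on what
# Planck-level granularity would require tracking.
brain_volume = 1.4e-3               # m^3, approximate adult human brain
planck_length = 1.6e-35             # m
planck_volume = planck_length ** 3  # ~4e-105 m^3
print(brain_volume / planck_volume) # ~3e101 regions to update in real time
```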

I think the actual requisite granularity is probably somewhere in between. The spark plug design seems too crude to work, while Planck-level correspondence is certainly overkill, because otherwise the tiniest fluctuation in our surrounding environment, such as a 0.01 degree change in room temperature, would have a profound impact on our mental state.

Now, from here on is where I depart from Searle if I have not already. Consider the following questions:

  1. If a tree falls in the forest and nobody hears it, does it make an acoustic vibration?
  2. If a tree falls in the forest and nobody hears it, does it make an auditory sensation?
  3. If a tree falls in the forest and nobody hears it, does it make a sound?
  4. Can the Chinese Room pass a Turing test administered in Chinese?
  5. Does the Chinese Room experience the same qualia that a Chinese-speaking human would experience when replying to a letter written in Chinese?
  6. Does the Chinese Room understand Chinese?
  7. Is the Chinese Room intelligent?
  8. Does the Chinese Room think?

Here is the answer key:

  1. Yes.
  2. No.
  3. What do you mean?
  4. Yes.
  5. No.
  6. What do you mean?
  7. What do you mean?
  8. What do you mean?

The problem with Searle is his lack of any clear answer to "What do you mean?". Most technically-minded people, myself included, think of 6–8 as all meaning something similar to 4. Personally, I think of them as meaning something even weaker than 4, and have no objection to describing, e.g., Google, or even a Bayesian spam filter, as "intelligent". Searle seems to want them to mean the same as 5, or maybe some conjunction of 4 and 5. But in counterintuitive edge cases like the Chinese Room, they don't mean anything at all until you assign definitions to them.

I am not certain whether Searle would agree with my belief that it is possible for a Turing machine to correctly answer questions about what qualia a human is experiencing, given a complete physical description of that human. If he takes the negative position on this, then we have a serious disagreement that goes beyond semantics, but I cannot tell whether he has ever committed himself to either stance.

Now, there remains a possible argument that might seem to save the simulation hypothesis even in the absence of substrate-independence. "Okay," you say, "you've persuaded me that a human-simulator built of silicon chips would not experience the same qualia as the human it simulates. But you can't tell me that it doesn't experience any qualia. For all you or I know, a lump of coal experiences qualia of some sort. So, let's say you're in fact living in a simulation implemented in silicon. You're experiencing qualia, but those qualia are all wrong compared to what you as a carbon-based bag of meat ought to be experiencing. How would you know anything is wrong? How, other than by life experience, do you know what the right qualia for a bag of meat actually are?"

The answer is that I know my qualia are right because they make sense. Qualia are not pure "outputs": they feed back on the rest of the world. If I step outside on a scorching summer day, then I feel hot, and this unpleasant quale causes me to go back inside, and I am able to understand and articulate this cause and effect. If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.

So, I think I have now established that to any extent we can be said to be living in a simulation, the simulator must physically incorporate a human brain. I have not precluded the possibility of a simulation in the vein of "The Matrix", with a brain-in-a-vat being fed artificial sensory inputs. I think this kind of simulation is indeed possible in principle. However, nothing claimed in Bostrom's simulation argument would suggest that it is at all likely.

ETA: A question that I've put to Sideways can be similarly put to many other commenters on this thread. "Similarity in number", i.e., two apples, two oranges, etc., is, like "embodying the same computation", an abstract concept which can be realized by a wide variety of physical media. Yet, if I replaced the two hemispheres of your brain with two apples, you would clearly become quite ill, even though similarity in number has been preserved. If you believe that "embodying the same computation" is somehow a privileged concept in this regard -- that if I replaced your brain with something else embodying the same computation, you would feel yourself to be unharmed -- what is your justification for believing this?

Comments (218)
[anonymous] · 13y · 390

Qualia are physical phenomena. I dearly wish that this statement were uncontroversial. However, if you don't agree with it, then you can reject the simulation argument on far simpler grounds: if experiencing qualia requires a "nonphysical" "soul" or whatnot (I don't know how to make sense out of either of those words), then there is no reason to suppose that any man-made simulator is imbued with a soul and therefore no reason to suppose that it would be conscious. However, provided that you agree that qualia are physical phenomena, then to suppose that they are any kind of exception to the principle I've just stated is simply bizarre magical thinking. A simulator which reasons perfectly about a human being, even including correctly determining what qualia a human would experience, does not necessarily experience those qualia, any more than a simulator that reasons perfectly about high gravity necessarily produces high gravity.

Let's replace qualia with some other phenomenon closely associated with the mind but less confusing. How about this: a poem. A really good poem, the sort of poem that we have not seen from anyone but the greatest human poets working at t...

-1 · dfranke · 13y
The poem doesn't exist -- or, depending on what word games you want to play with "exist", it exists before it's written. Before anyone can experience the poem, you need to put it into a medium: pencil, ink, smoke, whatever. Their experience of it is different depending on what medium you choose. Your thesis seems to be that when we're talking about qualia, rather than poems, that the "information" in it is all that matters. In response I refer you to the "ETA" at the bottom of my post.
4 · [anonymous] · 13y
The poem is the type, the specific instance of the poem is the token. Types do, in a sense, exist before the first token appears, but this hardly renders instances of poems different from, say, apples, or brains. Everything has a type. Apples have a type. The point remains: the type "Shakespeare's first sonnet" can be instantiated in ink or in pencil. This has nothing to do with the fact that the type "Shakespeare's first sonnet" exists before it's instantiated - because all types do (in the relevant sense).

Only because these media are distinguishable. I could write the poem down in India ink, or in, say, watercolor carefully done to look exactly like India ink, and as long as the two instances of the poem are indistinguishable, the reader's experience need not be any different.

How can we tell what the written poem looks like to the reader? We can ask the reader! We can ask him, "what does it look like", and on one occasion he might say, "it looks like ink", and on another occasion he might say, "it looks like smoke". But we can do the same with the simulated person reading a simulated ink copy of Shakespeare's first sonnet. Assuming we have some way to contact him, we can ask him, "what does it look like," and he might say, "it looks like ink".
1 · Psychohistorian · 13y
Maybe. But the central experience is the same. Maybe there's a difference between experiencing consciousness as implemented on a real brain versus consciousness as implemented inside a simulator. So long as it is possible to implement consciousness in different media, simulations make sense. If you're really a simulator's subroutine and not a physical brain, you wouldn't feel the difference, because you wouldn't know the feeling of having a real brain.

A general principle: if you find that a certain premise is just so obvious that you can't reduce it any further, and yet other smart people, exposed to the same background sources, don't agree that it's obvious (or think that it's false)... that's a signal that you haven't yet reduced the premise well enough to rely on it in practical matters.

Qualia are very confusing for people to talk and think about, and so using your intuitions about them as a knock-down argument for any other conclusion is probably ill-advised.

Qualia are physical phenomena.

Yes, qualia are physical. But what does physical mean??

Physical means 'interacting with us in the simulation'.

To us, the simulated Jupiters are not physical -- they do not exert a real gravitational force -- because we are not there with them in the simulation. However, if you add a moon to your simulation, and simulate its motion towards the spheres, the simulated moon would experience the real, physical gravity of the spheres.

For a moment, my intuition argued that it isn't 'real' gravity because the steps of the algorithm are so arbitrary -- there are so many ways to model the motion of the moon towards the spheres, so why should any one chosen way be privileged as 'real'? But then, think of it from the point of view of the moon. However the moon's position is encoded, it must move toward the spheres, because this is hard-coded into the algorithm. From the point of view of the moon (and the spheres, incidentally) this path and this interaction is entirely immutable. This is what 'real', and what 'physical', feels like.

-1 · ArisKatsaris · 13y
That would be to grant the assumption that the moon does have a point of view. That's the issue being debated, so we can't prove it by just assuming it.

To "simulate" (i.e. compute everything about) a really really simple Newtonian solar system, all we really need is knowledge of a few numbers (e.g. mass, position) and a few equations. Does writing those numbers and equations down on a paper mean that I've now created a simulated universe that has "its own point of view"? I certainly don't need a computer to simulate that system; one would be able to do the calculations of it in one's head. And the moon wouldn't even need the head doing the calculations; it would be perfectly defined by the equations and the numbers -- which would of course not even need to be written down on a paper.

This is then Tegmark IV: once you grant that a simulation of a thing has by necessity its own point of view, then that simulation doesn't need any physical component; it's sustained by the math alone.
1 · byrnema · 13y
Oops, I didn't mean that the moon should have a point of view. I find it natural to use anthropomorphisms such as these, but don't intend them literally.

Yes, this made me pause. Even while simulating the motion of a moon towards the spheres, there are so many abstract ways to model the moon's position, could they all be equally real? (In which case, each time you simulate something quite concretely, how many abstract things have you unintentionally made real?) But then I decided that even if 'position' and 'motion' are quite abstract, it is real ... though now I have trouble describing why without using a concept like, "from the moon's point of view" or "if the moon observes", which means I was packing something into that. I should think about this more.

Perhaps. I'm not sure. The idea that all mathematical possibilities are real is intriguing (I saw this with the Ultimate Ensemble theory here), but I have a doubt that I will describe here. It seems to be the case, in this universe anyway, that things need to be causally entangled in order to be real. So setting up a simulation in which a moon is a position on a lattice that moves toward another position on a lattice would model 'real' motion, because the motion is the causal result of the lines of code you wrote. However, there are cases when things are not causally entangled, and then they are not real.

Consider the case of mental thoughts. I can imagine something that is not real: A leprechaun throws a ball up in the air and it stays up. Of course, my thoughts are real, and are causally entangled with my neurons. But the two thoughts 'he throws a ball up' and 'it stays up' are not themselves causally entangled. They are just sequential and connected by the word 'and'. I have not created a world where there is no gravity. This is reassuring, since I can also imagine mathematical impossibilities, like a Möbius strip in 2D, or something inconsistent before I'm aware of the inconsistency.

Mostly, discussions of this subject always feel to me like an exercise in redirecting attention, like doing stage magic.

Some things are computations, like calculating the product of 1234 and 5678. Computations are substrate-independent.

I am willing to grant that what the mass of Jupiter does when I'm attracted to it is not a mere computation. (I gather that people like Tegmark would disagree, but I don't even really understand what they mean by their disagreement.)

I certainly agree that if what my brain does when I experience something is not a mere computation, then the idea of instantiating that computation on a different substrate is incoherent and certainly does not reproduce what my brain does. (I don't care whether we call that thing "consciousness" or "qualia" or "pinochle.")

From that point on, it just seems that people build elaborate rhetorical structures to shift people's intuitions to the "my brain is doing something more like calculating the product of 1234 and 5678" or the "my brain is doing something more like exerting gravitational attraction on the moons of Jupiter" side.

Personally, I'm on the "more like calcu...

This proves that we cannot be in a simulation by... assuming we are not in a simulation.

Even granting you all of your premises, everything we know about brains and qualia we know by observing it in this universe. If this universe is in fact a simulation, then what we know about brains and qualia is false. At the very most, your argument shows that we cannot create a simulation. It does not prove that we cannot be in a simulation, because we have no idea what the physics of the real world would be like.

I'm also rather unconvinced as to the truth of your premises. Even if qualia are a phenomenon of the physical brain, that doesn't mean you can't generate a near-identical phenomenon in a different substrate. In general, John Searle has some serious problems when it comes to trying to answer essentially empirical questions with a priori reasoning.

-1 · dfranke · 13y
Like pjeby, you're attacking a claim much stronger than the one I've asserted. I didn't claim we cannot be in a simulation. I claimed that if we are in a simulation, then the simulator must be of a sort that Bostrom's argument provides us no reason to suppose is likely to exist. There's nothing wrong with trying to answer empirical questions with deductive reasoning if your priors are well-grounded. Deductive logic allows me to reliably predict that a banjo will fall if I drop it, even if I have never before observed a falling banjo, because I start with the empirically-acquired prior that, in general, dropped objects fall.

I didn't claim we cannot be in a simulation.

Then the title, "We are not living in a simulation" was rather poorly chosen.

Deductive logic allows me to reliably predict that a banjo will fall if I drop it, even if I have never before observed a falling banjo, because I start with the empirically-acquired prior that, in general, dropped objects fall.

Observation gives you, "on Earth, dropped objects fall." Deduction lets you apply that to a specific hypothetical. You don't have observation backing up the theory you advance in this article. You need, "Only biological brains can have qualia." You have, "Biological brains have qualia." Big difference.

Ultimately, it seems you're trying to prove a qualified universal negative - "Nothing can have qualia, except biological brains (or things in many respects similar)." It is unbelievably difficult to prove such empirical claims. You'd need to try really hard to make something else have qualia, and then if you failed, the most you could conclude is, "It seems unlikely that it is possible for non-biological brains to have qualia." This is what I mean when I disparage Searle; many of his claims require mountains of evidence, yet he thinks he's resolved them from his armchair.

-3 · dfranke · 13y
These things are not identical.
0 · Cyan · 13y
So you would assert that we can be in a simulation, but not living in it...?
-2 · dfranke · 13y
Try reading it as "the probability that we are living in a simulation is negligibly higher than zero".
1 · Cyan · 13y
I tried it. It didn't help. No joke -- I'm completely confused: the referent of "it" is not clear to me. Could be the apparent contradiction, could be the title... Here's what I'm not confused about: (i) your post only argues against Bostrom's simulation argument; (ii) it seems you also want to defend yourself against the charge that your title was poorly chosen (in that it makes a broader claim that has misled your readership); (iii) your defense was too terse to make it into my brain.
3 · ArisKatsaris · 13y
dfranke means, I think, that he considers being in a simulation possible, but not likely.

Statement A) "We are not living in a simulation": P(living in a simulation) < 50%
Statement B) "We cannot be in a simulation": P(living in a simulation) ~= 0%

dfranke believes A, but not B.
5 · dfranke · 13y
No, rather:

A) "We are not living in a simulation" = P(living in a simulation) < ε.
B) "we cannot be living in a simulation" = P(living in a simulation) = 0.

I believe A but not B. Think of it analogously to weak vs. strong atheism. I'm a weak atheist with respect to both simulations and God.
0 · Cyan · 13y
Ah, got it. Thanks.
0 · CuSithBell · 13y
That may be dfranke's intent, but categorically stating something to be the case generally indicates a much higher confidence than 50%. ("If you roll a die, it will come up three or higher.")
0 · Cyan · 13y
Thanks.
-1 · [anonymous] · 13y
That I agree with, though not for reasons brought up here.
7 · Cyan · 13y
Then it's from your title that people might get the impression you're making a stronger claim than you mean to be.

If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.

But in the simulation, you WOULD have an objection to purple, and you would call purple "hot", right? Or is this some haywire simulation where the simulated people act normally except they're completely baffled as to why they're doing any of it? Either what you're saying is incredibly stupid, or I don't understand it. Wait, does that mean I'm in a simulation?

-3 · dfranke · 13y
Yes. A simulation in which people experienced one sort of qualia but behaved as though they were experiencing another would go completely haywire.
2 · DSimon · 13y
This doesn't seem right. If they experience qualia A but react exactly as though they were experiencing qualia B... how's that practically different from just experiencing qualia B? You might be able to tell the difference between the two qualia by somehow arranging to experience both subjective points of view through a telepathy machine or something. However, considering a single individual's viewpoint and actions, if they get "purple" when it's too hot outside and stop being "purple" when they went somewhere cool, then the person's actions are the same as if they were avoiding the qualia "hot" or "sour" or "flarglblargl", and the system doesn't go haywire at all.

Run a cable from the computer to the spark plug array, and have the program fire the spark plugs in the same sequence that it predicts that synapses would occur in a biological human brain. As these firings occurred, the array would experience human-like qualia. The same qualia would not result if the simulator merely computed what plugs ought to fire without actually firing them.

This would imply that qualia are epiphenomenal. If so, and if people who talk about qualia are accurately reporting them without the epiphenomenal qualia causing the accurate reports, where does that improbability come from?

-1 · dfranke · 13y
I don't understand why you think it would imply that. The claims in my second-to-last paragraph clearly imply that they are not epiphenomenal. Where have I contradicted myself?

The idea is that if you were simulated on that computer and someone asked you to describe your qualia, you could do it perfectly - despite having no qualia! This is a bit magical.

4 · JGWeissman · 13y
The simulation does not receive any feedback from the spark plugs, and so, within the simulation, everything is the same whether the spark plugs are there or not. The qualia are (only) in the spark plugs, yet the simulation does the same thing whether the qualia exist or not; i.e., the qualia have no causal effects on the simulation, which is what I mean by saying they are epiphenomenal.
-1 · dfranke · 13y
The spark doesn't have any effect on the simulator, but that doesn't mean that the simulator can't predict in advance what effect that spark would have if it occurred inside a brain and reason accordingly. You seem to be implying that the simulator can't determine what effect the spark (and its resulting qualia) would have before the spark actually occurs. This isn't the case for any other physical phenomenon -- I don't have to let go of a ball in mid-air to predict that it will fall -- so why would you suppose it to be true of qualia?
1 · JGWeissman · 13y
The simulator can make that prediction and apply the results within the simulation even if it is not connected to the spark plugs. No, I am implying that since you can make the prediction, the actual spark isn't important.
0 · dfranke · 13y
Why is this different from the claim that because you can make the prediction of what gravitational field a massive sphere will produce, the actual sphere isn't important?
3 · JGWeissman · 13y
Within the simulation, having an actual sphere is not important; the simulator applies the same prediction to the simulation either way. If you care about effects outside the simulation, then you would need an outside-the-simulation sphere to gravitationally attract objects outside the simulation, in the same way that you would need to report a simulated person's musings about their own qualia (or other reactions to their own qualia) to me outside the simulation for their qualia to affect me in the same way I would be affected by similar musings (or other reactions) of people outside the simulation that I learn about.
0 · dfranke · 13y
I think I can justly paraphrase you as follows: If this paraphrasing is accurate, then I ask you, what does "occurring inside the simulation" mean? What is the physical locus at which the gravity and qualia are happening? I see two reasonable answers to this question: either "at the simulator", or "nowhere". In the former case, I refer you back to my previous reply. In the latter case, you concede that neither the gravity nor the qualia are real.

Your position within our universe is giving you a bias toward one side of a mostly symmetrical situation.

Let's throw out the terms "real" and "simulated" universe and call them the "parent" and "child" universe.

Gravity in the child universe doesn't affect the parent universe, true; creating a simulation of a black hole doesn't suck the simulating computer into the event horizon. But gravity in the parent universe doesn't affect the child universe either - if I turn my computer upside-down while playing SimCity, it doesn't make my Sims scream and start falling into the sky as their city collapses around them. So instead of saying "simulated gravity isn't real because it can't affect the real universe", we say "both the parent and child universes have gravity that only acts within their own universe, rather than affecting the other."

Likewise, when you say that you can't point to the location of a gravitational force within the simulation so it must be "nowhere" - balderdash. The gravitational force that's holding Sim #13335 to the ground in my SimCity game is happening on Oak Street, right between the park and the co...

8 · JGWeissman · 13y
This calls for a port of SimCity to a mobile device with an accelerometer.
2 · [anonymous] · 13y
SimCity has been ported to a mobile device with an accelerometer. No, I don't think it uses it (at least, not in that way).
3 · TheOtherDave · 13y
This is a digression, but... I'm not sure it actually makes sense to claim that what holds Sim #1335 to the ground is a gravitational force, any more than it would make sense to say that what holds an astronaut connected to the outside of their shuttle via magnetic boots is a gravitational force. What it is, exactly, I don't know -- I haven't played SimCity since the early 90s, and have no sense of how it behaves or operates. But I'd be really surprised if it were something that, if I found myself in that universe having my memories, I'd be inclined to call gravitation.
0 · wnoise · 13y
For the Sims, yes, I'd agree. For a more physically based simulation, I would not.
0 · TheOtherDave · 13y
(nods) As I say, it's a digression.
2 · Sniffnoy · 13y
In addition it should probably be pointed out that real things in general don't need to have a location. I think we can all agree that the electromagnetic field is real, e.g., but the question "Where is the electromagnetic field?" is nonsense.
0 · wnoise · 13y
The question itself is not quite nonsense. There is a perfectly reasonable answer of "everywhere". It's just not a particularly useful question, and this is because of the hidden assumptions behind it, which are wrong and can easily lead to nonsense questions. "What is the value of the electromagnetic field at X?" is a much more interesting question that can be asked once those incorrect assumptions are removed and replaced.
2 · Sniffnoy · 13y
Eh. You can force an answer in English, sure, but it's still not really the "right" answer. The electromagnetic field is a function from spacetime, to, uh, some sort of tangent bundle on it or something? My knowledge of how to formalize this sort of thing isn't so great. My point is that it's a function taking spacetime locations as inputs; it doesn't really have a location itself any more than, say, the metric of spacetime does. When we say "it's everywhere" what's meant is something more like "it's defined everywhere" or "at every location, it affects things".
0 · wnoise · 13y
The EM field is used both for the function, and the values of that function. (I think it's actually a skew-symmetric linear operator on the tangent space T_x M at a given point. This can be phrased in terms of a bivector at that point. A "bundle" TM = Union_x T_x M talks about an extended manifold connecting tangent spaces at different points.) I think it's entirely reasonable in common language to use "where" to mean "where it's non-negligible". Consider that physical objects are also fields. It's entirely reasonable to ask "where an electron is" even though the electron field is a function of spacetime. Once we're able to ask the right questions, this becomes a less-useful question, as it is only applicable in cases where the field is concentrated. The EM field case just breaks down much sooner.
1 · dfranke · 13y
The claim that the simulated universe is real even though its physics are independent of our own seems to imply a very broad definition of "real" that comes close to Tegmark IV. I've posted a followup to my article to the discussion section: Eight questions for computationalists. Please reply to it so I can better understand your position.
0 · JGWeissman · 13y
Not quite. Whereas with the simulation of the sphere you need an actual sphere or equivalent mass to produce the simulated effect outside the simulation, with a simulated person you need only the simulated output of the person, not the person (or its physical components) itself, to have the same effect outside the simulation as the output of a person from outside the simulation. The improbability of having a philosophy paper copied from within the simulation that describes qualia is explained by the qualia within the simulation.

The reason we think intelligence is substrate-independent is that the properties we're interested in (the ones we define to constitute "intelligence") do not make reference to any substrate. Can a simulation of a brain design an aeroplane? Yes. Can a simulation of a brain prove Pythagoras' theorem? Yes. Can a simulation of a brain plan strategically in the presence of uncertainty? Yes. These are the properties we mean when we say "intelligence". Under a different definition for "intelligence" that stipulates "composed of neurons" or "looks grey and mushy", intelligence is not substrate-independent. It's just a word game.

5 · TheOtherDave · 13y
Well, that's not true for everyone here, I suspect. Eliezer, for example, does seem very concerned with whether the optimization process that gets constructed (or, at least, the process he constructs) has some attribute that is variously labelled by various people as "is sentient," "has consciousness," "has qualia," "is a real person," etc. Presumably he'd be delighted if someone proved that a simulation of a human created by an AI can't possibly be a real person because it lacks some key component that mere simulations cannot have. He just doesn't think it's true. (Nor do I.)
-1 · dfranke · 13y
I can't figure out whether you're trying to agree with me or disagree with me. Your comment sounds argumentative, yet you seem to be directly paraphrasing my critique of Searle.

If I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that "embodying the same computation" is somehow a privileged concept in this regard -- that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed -- what is your justification for believing this?

The two apples in the head of your strawman have the same cardinality as the two hemispheres of your brain, but what ...

0 · dfranke · 13y
I don't think this is quite fair. The concept that medieval philosophers were missing was analytic philosophy, not cathode rays. If the works of Quine and Popper and Wittgenstein fell through a time warp, it'd be plausible that medieval philosophers could have made legitimate headway on such a question.
5 · Perplexed · 13y
I sincerely don't understand what you are saying here. The most natural parsing is that a medieval philosopher could come to terms with the concept of a disembodied talking head, if only he read some Quine, Popper, and Wittgenstein first. Yet, somehow, that interpretation seems uncharitable. If you are instead suggesting that the schoolmen would be able to understand Quine, Popper, and Wittgenstein, should their works magically be transmitted back in time, then I tend to agree. But I don't think of this 'timeless quality' as a point recommending analytic philosophy.

The interpretation that you deem uncharitable is the one I intended.

Community: clarifications like this are vital, and to be encouraged. Please don't downvote them.

8 · dfranke · 13y
The guy who downvoted that one downvoted all the rest of my comments in this thread at the same time. Actually, he downvoted most of them earlier, then picked that one up in a second sweep of those comments that I had posted since he did his first pass. So, your assumption that the downvote had anything to do with the content of that particular comment is probably misguided.
2 · thomblake · 13y
Where do you get such specific information about those who vote on your comments?
6 · dfranke · 13y
I just hit reload at sufficiently fortuitous times that I was able to see all my comments drop by exactly one point within a minute or so of each other, then later see the same thing happen to exactly those comments that it didn't happen to before.
0 · [anonymous] · 13y
I downvoted most of your comments in this thread too, for what it is worth. With very few exceptions I downvote all comments and posts advocating 'qualia'. Because qualia are stupid, have been discussed here excessively and those advocating them tend to be completely immune to reason. Most of the comments downvoted by this heuristic happen to be incidentally worth downvoting based on individual (lack of) merit.
1 · shokwave · 13y
However, wnoise's comment scored the grandparent an upvote from me, and possibly from others too!
-13 · AstroCJ · 13y
6 · Perplexed · 13y
OK, then. It seems we have another example of the great philosophical principle YMMV. My own experience with analytic philosophy is that it is not particularly effective in shutting down pointless speculation. I would have guessed that the schoolmen would have been more enlightened and satisfied by an analogy than by anything they might find in Quine. "The talking head," I would explain, "is like an image seen in a reflecting pool. The image feels no pain, nor is it capable of independent action. The masters from which the image is made are a whole man and woman, not disembodied heads. And the magic which transfers their image to the box does no more harm to the originals than would ripples in a reflecting pool."
0 · dfranke · 13y
Oh, certainly not. Not in the least. Think of it this way. Pre-analytic philosophy is like a monkey throwing darts at a dartboard. Analytic philosophy is like a human throwing them. There's no guarantee that he'll hit the board, much less the bullseye, but at least he understands where he's supposed to aim.

A human brain is a computer. The brain of a living human differs from a dead one in that it's running a program. If the universe is as it seems, running a program on a computer causes qualia.

If the simulation hypothesis is true, human brains are still programs; they're just running on different computers.

Unless you have some reason qualia would be more likely to occur when the program is run on one of those computers than the other, you have no evidence about the simulation hypothesis.

I'm confused about exactly what qualia are, but I feel reasonably sure that they are related to information processing in somewhat the same way that high gravity and things moving through space are related. Substrate independence of qualia immediately follows from this point of view without any need to assert that qualia are not physical phenomena.

The original post takes the trouble to define "simulation" but not "qualia". The argument would make much more sense to me if it offered a definition of "qualia" precise enough to determine whether a simulated being does or does not have qualia, since that's the crux of the argument. I'm not aware of a commonly accepted definition that is that precise.

As it stands, I had to make sure as I was reading it to keep in mind that I didn't know what the author meant by "qualia", and after discarding all the statements using that undefined term the remainder didn't make much sense.

6[anonymous]13y
By its very concept, only the person himself can actually observe his own qualia. Qualia are defined that way - at least, in any serious treatment that I've seen (aside, of course, from the skeptical and deprecatory ones). This is one of the key elements that make them a problematic concept.

Consciousness - as conceived by many philosophers - is also defined that way. Hence the "other minds problem" - which is the problem that only the person himself can "directly" observe his own consciousness, and other people can at best infer from his behavior, from his similarity to them, etc., that he has consciousness. So both the concepts of consciousness and of qualia are defined in a way that makes them - by definition - problematic.

As far as I know you're not going to get any qualia believer to define qualia in a way that allows you, a third-party observer, to look at something with a microscope, or telescope, or MRI, or with any other instrument real or physically possible, and personally witness that it has qualia, because qualia are by their nature, or rather by their definition, "directly" perceptible only to the person who has them.

In contrast, the neurons in my brain are no more "directly" perceptible to me than to you. I use the machinery of my brain, but I don't perceive it. You have as good access to it, in principle, as I do. If you're my neurosurgeon, then you are a better witness of the material of my brain than I am. This does not hold for qualia. This is one of the properties of qualia - and indeed of the concept of consciousness as understood by many philosophers - that I find sufficiently faulty as to warrant rejection.

If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.

If a simulation allowed life to evolve within it, and was not just an attempt to replicate something which already exists, would...

7 · Kevin · 13y
Yes, dfranke's argument seems to map to "we are not living in a simulation because we are not zombies and people living in a simulation are zombies".
0 · dfranke · 13y
s/are not zombies/have qualia/ and you'll get a little more accurate. A zombie, supposing such a thing is possible (which I doubt for all the reasons given in http://lesswrong.com/lw/p7/zombies_zombies ), is still a real, physical object. The objects of a simulation don't even rise to zombie status.
4 · Jonathan_Graehl · 13y
It's really unclear what you mean by 'zombie', 'real, physical object', and 'objects of a simulation'. But you're right that Kevin meant by 'zombie' exactly 'us without qualia'. I thought this was obvious in context.
2 · Kevin · 13y
What is a physical object?
0 · jsalvatier · 13y
If you are not arguing for zombies, I am really confused about what you're trying to argue for.
0 · [anonymous] · 13y
...unless you believe in zombies!

I will do a more in-depth reading when I have time, but from a quick skim:

If you're basing your argument against a simulation universe on qualia, what do you say to those of us who reject qualia?

3 · dfranke · 13y
I can think of three, maybe more, ways to unpack the phrase "reject qualia":

  1. "Qualia are not a useful philosophical concept. The things you're trying to refer to when you say 'qualia' are better understood in different terms that will provide greater clarity."
  2. "Qualia don't exist. The things you're trying to refer to when you say 'qualia' are figments of your imagination."
  3. "The very notion of qualia is inconceivable. It's like talking about dry water."

Please clarify what you mean.
4 · shokwave · 13y
I mean #2 precisely. That is, qualia - the universalised experience of 'redness', of fundamental experience, or what-have-you - are a category which we dump neural firing patterns into. At the level of brain physiology, there are only patterns, and some patterns are isomorphic to each other - that is, a slightly different pattern in a slightly different architecture nevertheless builds up to the same higher-level result.

It is a figment of your imagination because that's an easy shortcut that our brains take. In attempting to communicate ideas - to cause isomorphic patterns to arise in the other's brain - our brains may tend to create a common cause, an abstract concept that both patterns are derived from. There isn't any such platonic concept! There's just the neural firing in my head (completely simulable on a computer, no human brain needed) and the neural firing in your head (also completely simulable, no brain needed). There's nothing that, in essence, requires a human brain involved in doing the simulating, at any point.

Hmm. Qualia have come up a few times on LessWrong, and it seems like a nonzero portion of the comments accept them. I'll have to go through the literature on qualia to build a more thorough case against them. Look forward to a "No Qualia" post sometime soon (edit: including baseless speculation on why talking about it is so confusing!) - unless, in going through the literature, I change my mind about whether qualia exist.
-1 · [anonymous] · 13y
2 and 3 seem a little extreme. 1 seems about right. I am particularly sympathetic to Gary Drescher's account.
-1 · dfranke · 13y
I find Drescher's account of computationalism even more nonsensical than most others. Here's why. Gensyms are a feature mostly exclusive to Lisp. At the machine level, they're implemented as pointers, and there you can do other things with them besides test them for equality: you can dereference them, do pointer arithmetic, etc. Surely, if you're going to compare qualia to some statement about computation, it needs to be a statement that can be expressed independently of any particular model of it. All that actually leaves you, you'll find, is functions, not algorithms. You can write "sort" on anything Turing-equivalent, but there's no guarantee that you can write "quicksort".
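To illustrate (the example is mine, not Drescher's, and uses Python rather than Lisp): object() sentinels play the role of gensyms, officially supporting only identity comparison, yet the implementation still leaks pointer-level detail:

```python
# Rough Python analogue of the gensym point; illustrative only.
red = object()
green = object()
print(red is red)    # True: identity comparison is the whole interface
print(red is green)  # False: distinct tokens never compare equal
print(id(red))       # ...yet an address-like value still leaks through,
                     # just as gensyms are pointers at the machine level
```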
2 · [anonymous] · 13y
I'm trying to understand your objection, but it seems like a quibble to me. You seem to be saying that the analogy between qualia and gensyms isn't perfect because gensyms are leaky abstractions. But I don't think it has to be to convey the essential idea. Analogies rarely are perfect.

Here's my understanding of the point. Let's say that I'm looking at something, and I say, "that's a car". You ask me, "how do you know it's a car?" And I say, "it's in a parking lot, it looks like a car..." You say, "and what does a car look like?" And maybe I try to describe the car in some detail. Let's say I mention that the car has windows, and you ask, "what does a window look like". I mention glass, and you ask, "what does glass look like". We keep drilling down.

Every time I describe something, you ask me about one of the components of the description. This can't go on forever. It has to stop. It stops somewhere. It stops where I say, "I see X", and you ask, "describe X", and I say, "X looks like X" - I'm no longer able to give a description of the thing in terms of component parts or aspects. I've reached the limit. There has to be a limit, because the mind is not infinite. There have to be things which I can perceive, which I can recognize, but which I am unable to describe - except to say that they look like themselves, that I recognize them. This is unavoidable.

Create for me any AI that has the ability to perceive, and we can drill down the same way with that AI, finally reaching something about which the AI says, "I see X", and when we ask the AI what X looks like, the AI is helpless to say anything but, "it looks like X". Any finite creature (carbon or silicon) that can perceive, has some limit, where it can perceive a thing, but can't describe it except to say that it looks like itself. The creature just knows, it clearly sees that thing, but for the life of it, it can't give a description of it. But since the creature can clearly see it, the creature can say that i
1 · dfranke · 13y
But qualia are not any of those things! They are not epiphenomenal! They can be compared. I can classify them into categories like "pleasant", "unpleasant" and "indifferent". I can tell you that certain meat tastes like chicken, and you can understand what I mean by "taste", and understand the gist of "like chicken" even if the taste is not perfectly indistinguishable from that of chicken. I suppose that I would be unable to describe what it's like to have qualia to something that has no qualia whatsoever, but even that I think is just a failure of creativity rather than a theoretical impossibility -- [ETA: indeed, before I could create a conscious AI, I'd in some sense have to figure out how to provide exactly such a description to a computer.]
0 · TheOtherDave · 13y
I apologize if this is recapitulating earlier comments -- I haven't read this entire discussion -- and feel free to point me to a different thread if you've covered this elsewhere, but: on your view, could a simulation of me in a computer classify the things that it has (which, on your view, cannot be actual qualia) into categories like "pleasant" and "unpleasant" and "indifferent"? Could it tell me that certain (simulations of) meat tastes like chicken, and if it did, could I understand what it meant by "taste" and understand the gist of "like chicken"? If not, then on your view, what would actually happen instead, if it tried? (Or, if trying is another thing that can't be a computation, then: if it simulated me trying?) If so, then on your view, how can any of those operations qualify as comparing qualia?
0 · dfranke · 13y
I'm not certain what you mean by "could a simulation of me do X". I'll read it as "could a simulator of me do X". And my answer is yes, a computer program could make those judgements without actually experiencing any of those qualia, just like it could make judgements about what trajectory the computer hardware would follow if it were in orbit around Jupiter, without it having to actually be there.

a computer program could make those judgements (sic) without actually experiencing any of those qualia

Just as an FYI, this is the place where your intuition is blindsiding you. Intuitively, you "know" that a computer isn't experiencing anything... and that's what your entire argument rests on.

However, this "knowing" is just an assumption, and it's assuming the very thing that is the question: does it make sense to speak of a computer experiencing something?

And there is no reason apart from that intuition/assumption, to treat this as a different question from, "does it make sense to speak of a brain experiencing something?".

IOW, substitute "brain" for every use of "computer" or "simulation", and make the same assertions. "The brain is just calculating what feelings and qualia it should have, not really experiencing them. After all, it is just a physical system of chemicals and electrical impulses. Clearly, it is foolish to think that it could thereby experience anything."

By making brains special, you're privileging the qualia hypothesis based on an intuitive assumption.

-4 · dfranke · 13y
I don't think you read my post very carefully. I didn't claim that qualia are a phenomenon unique to human brains. I claimed that human-like qualia are a phenomenon unique to human brains. Computers might very well experience qualia; so might a lump of coal. But if you think a computer simulation of a human experiences the same qualia as a human, while a lump of coal experiences no qualia or different ones, you need to make that case to me.
5 · pjeby · 13y
Actually, I'd say you need to make a case for WTF "qualia" means in the first place. As far as I've ever seen, it seems to be one of those words that people use as a handwavy thing to prove the specialness of humans. When we know what "human qualia" reduce to, specifically, then we'll be able to simulate them. That's a pretty good operational definition of "reduce", actually. ;-) (Not to mention "know".)
2 · TheOtherDave · 13y
Sure, ^simulator^simulation preserves everything relevant from my pov. And thanks for the answer.

Given that, I really don't get how the fact that you can do all of the things you list here (classify stuff, talk about stuff, etc.) should count as evidence that you have non-epiphenomenal qualia, which seems to be what you are claiming there. After all, if you (presumed qualiaful) can perform those tasks, and a (presumed qualialess) simulator of you also can perform those tasks, then the (presumed) qualia can't play any necessary role in performing those tasks. It follows that those tasks can happen with or without qualia, and are therefore not evidence of qualia and not reliable qualia-comparing operations.

The situation would be different if you had listed activities, like attracting mass or orbiting around Jupiter, that my simulator does not do. For example, if you say that your qualia are not epiphenomenal because you can do things like actually taste chicken, which your simulator can't do, that's a different matter, and my concern would not apply. (Just to be clear: it's not obvious to me that your simulator can't taste chicken, but I don't think that discussion is profitable, for reasons I discuss here.)
-1dfranke13y
You haven't responded to the broader part of my point. If you want to claim that qualia are computations, then you either need to specify a particular computer architecture, or you need to describe them in a way that's independent of any such choice. In the first case, the architecture you want is probably "the universe", in which case you're defining an algorithm by specifying its physical implementation and you've affirmed my thesis. In the latter case, all you get to talk about is inputs and outputs, not algorithms.
3[anonymous]13y
You seem to be mixing up two separate arguments. In one argument I am, for the sake of argument, assuming the unproblematic existence of qualia and arguing, under this assumption, that qualia are possible in a simulation and therefore that we could (in principle) be living in a simulation. In the other argument (the current one) I simply answered your question about what sort of qualia skeptic I am. So, in this argument, the current one, I am continuing the discussion where, in answer to your question, I have admitted to being a qualia skeptic more or less along the lines of Drescher and Dennett. This discussion is about my skepticism about the idea of qualia; it is not about whether I think qualia are computations. Similarly, if I were admitting to skepticism about Santa Claus, it would not be an appropriate place to argue with me about whether Santa is a human or an elf. Maybe you are basing your current focus on computations on Drescher's analogy with Lisp's gensyms. That's something for you to take up with Drescher. By now I've explained - at some length - what it is that resonated with me in Drescher's account and why. It doesn't depend on qualia being computations. It depends on there being a limit to perception.
1dfranke13y
On further reflection, I'm not certain that your position and mine are incompatible. I'm a personal-identity skeptic in roughly the same sense that you're a qualia skeptic. Yet if somebody points out that a door is open when it was previously closed, and reasons "someone must have opened it", I don't consider that reasoning invalid; I just think they need to modify the word "someone" if they want to be absolutely pedantically correct about what occurred. Similarly, your skepticism about qualia doesn't really contradict my claim that the objects of a computer simulation would have no (or improper) qualia; at worst it means that I ought to slightly modify my description of what it is that those objects wouldn't have.
1dfranke13y
Ok, I've really misunderstood you then. I didn't realize that you were taking a devil's-advocate position in the other thread. I maintain the arguments I've made in both threads as a challenge to all those commenters who do claim that qualia are computations.

the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator.... [to simulate humans] the simulator must physically incorporate a human brain.

It seems like the definition of "physical" used in this article is "existing within physics" (a perfectly reasonable definition). By this definition, phenomena such as qualia, reasoning, and computation are all "physical" and are referred to as such in the article itself.

Brains are physical, and local physics seems Tu... (read more)

-1dfranke13y
You're continuing to confuse reasoning about a physical phenomenon with causing a physical phenomenon. By the Church-Turing thesis, which I am in full agreement with, a Turing machine can reason about any physical phenomenon. That does not mean a Turing machine can cause any physical phenomenon. A PC running a program which reasons about Jupiter's gravity cannot cause Jupiter's gravity.
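
To make the distinction concrete, here is a minimal sketch (Python; the orbit radius, time step, and integration method are illustrative choices) of a program that reasons about Jupiter's gravity by integrating a test particle's orbit. Running it yields information about a trajectory; it causes no gravity anywhere:

```python
# A program that *reasons about* Jupiter's gravity: it integrates a test
# particle's orbit around a point-mass Jupiter at the origin. Running it
# produces answers about the orbit; it does not *cause* any gravity.

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_JUPITER = 1.898e27  # kg

def step(pos, vel, dt):
    """Advance the test particle by one Euler step."""
    x, y = pos
    r2 = x * x + y * y
    r = r2 ** 0.5
    a = G * M_JUPITER / r2           # acceleration magnitude
    ax, ay = -a * x / r, -a * y / r  # directed toward the origin
    vx, vy = vel
    return (x + vx * dt, y + vy * dt), (vx + ax * dt, vy + ay * dt)

# Start on a roughly circular orbit: v = sqrt(G*M/r).
pos, vel = (1.0e9, 0.0), (0.0, (G * M_JUPITER / 1.0e9) ** 0.5)
for _ in range(1000):
    pos, vel = step(pos, vel, dt=10.0)
print(pos)  # an answer *about* the orbit; nothing was attracted in the process
```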
0kurokikaze13y
From inside the simulation, the simulation's "reasoning" about a phenomenon cannot be distinguished from actually causing that phenomenon. From my point of view, gravity inside a two-body simulator is real for all bodies inside the simulator. If you separate "reasoning" from "happening" only because you are able to tell one from the other from your point of view, why couldn't the entire workings of our world be "reasoning" rather than real phenomena, if there are entities who can tell our "simulated workings" apart from their "real" universe?
0ArisKatsaris13y
For a two body simulator we can just use the Newtonian equation for F = G m1m2 / (r^2), right? You aren't claiming we need any sort of computing apparatus to make gravity real for "all bodies inside the simulator"?
-1kurokikaze13y
I don't get the question, frankly. A simulation, in my opinion, is not a single formula but a means of knowing the state of a system at a particular time. In this case, we need an "apparatus", even if it's only a piece of paper, a crayon, and our own brain. It will be a very simple simulator, yes.
1ArisKatsaris13y
Basically I'm asking: is gravity "real for all bodies inside the system" or "real for all bodies inside the simulator"? If the former, then we have Tegmark IV. If ONLY the latter, then you're saying that a system requires a means to be made known by someone outside the system, in order to have gravity "be real" for it. That's not substrate independence; we're no longer talking about its point of view, as it only becomes "real" when it informs our point of view, and not before.
-1kurokikaze13y
Oh, I see what you mean by "Tegmark IV" here from another answer of yours. Then it's more complicated, and depends on our definition of "existence" (there can be many, I presume).
-1kurokikaze13y
I think gravity is "real" for any bodies that it affects. For the person running the simulator it's "real" too, but in some other sense -- it's not affecting the person physically, but it produces some information for him that wouldn't be there without the simulator (so we cannot say they're entirely causally disconnected). All this requires further thinking :) Also, English is not my main language, so there can be some misunderstanding on my part :)
0kurokikaze13y
Okay, I've pondered this question for some time and the preliminary conclusions are strange. Either "existence" is physically meaningless, or it should be split into at least three terms with slightly different meanings. Or "existence" is a purely subjective thing, and we can't meaningfully argue about the "existence" of things that are causally disconnected from us.
0Sideways13y
I'm asserting that qualia, reasoning, and other relevant phenomena that a brain produces are computational, and that by computing them, a Turing machine can reproduce them with perfect accuracy. I apologize if this was not clear. Adding two and two is a computation. An abacus is one substrate on which addition can be performed; a computer is another. I know what it means to compute "2+2" on an abacus. I know what it means to compute "2+2" on a computer. I know what it means to simulate "2+2 on an abacus" on a computer. I even know what it means to simulate "2+2 on a computer" on an abacus (although I certainly wouldn't want to have to actually do so!). I do not know what it means to simulate "2+2" on a computer.
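
For what it's worth, here is a minimal sketch (Python; the single-rod bead representation is one illustrative choice among many) of what simulating "2+2 on an abacus" on a computer could look like. The program models bead positions, and only a separate interpretation step maps the simulated configuration back to a number:

```python
# A toy simulation of "2+2 on an abacus": the program models bead positions
# on a single rod. The substrate being simulated is the abacus; the addition
# is what the resulting bead configuration *means*.

class AbacusRod:
    def __init__(self, beads=9):
        self.raised = 0       # beads pushed toward the beam
        self.lowered = beads  # beads resting away from the beam

    def push_up(self, n):
        """Simulate the physical act of sliding n beads toward the beam."""
        if n > self.lowered:
            raise ValueError("not enough beads on the rod")
        self.raised += n
        self.lowered -= n

    def reading(self):
        """Interpretation step: map the bead configuration to a number."""
        return self.raised

rod = AbacusRod()
rod.push_up(2)        # set "2"
rod.push_up(2)        # add "2"
print(rod.reading())  # -> 4
```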
-3dfranke13y
You simulate physical phenomena -- things that actually exist. You compute combinations of formal symbols, which are abstract ideas. 2 and 4 are abstract; they don't exist. To claim that qualia are purely computational is to claim that they don't exist.
1Sideways13y
"Computation exists within physics" is not equivalent to " "2" exists within physics." If computation doesn't exist within physics, then we're communicating supernaturally. If qualia aren't computations embodied in the physical substrate of a mind, then I don't know what they are.
0dfranke13y
Computation does not exist within physics; it's a linguistic abstraction of things that do exist within physics, such as the behavior of a CPU. Similarly, "2" is an abstraction of a pair of apples, a pair of oranges, etc. To say that the actions of one physical medium necessarily have the same physical effect (the production of qualia) as the actions of another physical medium, just because they abstractly embody the same computation, is analogous to saying that two apples produce the same qualia as two oranges, because they're both "2". This is my last reply for tonight. I'll return in the morning.
7Sideways13y
If computation doesn't exist because it's "a linguistic abstraction of things that exist within physics", then CPUs, apples, oranges, qualia, "physical media" and people don't exist; all of those things are also linguistic abstractions of things that exist within physics. Physics is made of things like quarks and leptons, not apples and qualia. I don't think this definition of existence is particularly useful in context. As to your fruit analogy: two apples do in fact produce the same qualia as two oranges, with respect to number! Obviously color, smell, etc. are different, but in both cases I have the experience of seeing two objects. And if I'm trying to do sums by putting apples or oranges together, substituting one for the other will give the same result. In comparing my brain to a hypothetical simulation of my brain running on a microchip, I would claim a number of differences (weight, moisture content, smell...), but I hold that what makes me me would be present in either one. See you in the morning! :)
0dfranke13y
Not quite reductionist enough, actually: physics is made of the rules relating configurations of spacetime, which exist independently of any formal model of them that gives us concepts like "quark" and "lepton". But digging deeper into this linguistic rathole won't clarify my point any further, so I'll drop this line of argument. If you started perceiving two apples identically to the way you perceive two oranges, without noticing their difference in weight, smell, etc., then you, or at least others around you, would conclude that you were quite ill. What is your justification for believing that being unable to distinguish between things that are "computationally identical" would leave you any healthier?
0Sideways13y
I didn't intend to start a reductionist "race to the bottom," only to point out that minds and computations clearly do exist. "Reducible" and "non-existent" aren't synonyms! Since you prefer the question in your edit, I'll answer it directly: computation is "privileged" only in the sense that computationally identical substitutions leave my mind, preferences, qualia, etc. intact, because those things are themselves computations. If you replaced my brain with a computationally equivalent computer weighing two tons, I would certainly notice a difference and consider myself harmed. But the harm wouldn't have been done to my mind. I feel like there must be something we've missed, because I'm still not sure where exactly we disagree. I'm pretty sure you don't think that qualia are reified in the brain -- that a surgeon could go in with tongs and pull out a little lump of qualia -- and I think you might even agree with the analogy that brains:hardware::minds:software. So if there's still a disagreement to be had, what is it? If qualia and other mental phenomena are not computational, then what are they?
-1dfranke13y
I do think that qualia are reified in the brain. I do not think that a surgeon could go in with tongs and remove them, any more than he could go in with tongs and remove your recognition of your grandmother. They're a physical effect caused by the operation of a brain, just as gravity is a physical effect of mass and temperature is a physical effect of Brownian motion. See here and here for one reason why I think the computational view falls somewhere between problematic and not-even-wrong, inclusive. ETA: The "grandmother cell" might have been a poorly chosen counterexample, since apparently there's some research that sort of actually supports that notion with respect to face recognition. I learned the phrase as identifying a fallacy. Feel free to mentally substitute some other complex idea that is clearly not embodied in any discrete piece of the brain.
5wnoise13y
See for instance this report:

* http://www.scientificamerican.com/article.cfm?id=one-face-one-neuron

on this paper:

* http://www.nature.com/nature/journal/v435/n7045/full/nature03687.html

where they find apparent "Jennifer Aniston" and "Halle Berry" cells. The former is a little bit muddled, as the cell doesn't fire when a picture contains both her and Brad Pitt. The latter fires both for pictures of her and for the text of her name.
0FAWS13y
Do we know enough to tell for sure?
-1dfranke13y
Do you mean, "know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?". No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.
1gwern13y
Depending on various details, this might well be impossible. Rice's theorem comes to mind - if it's impossible to decide any nontrivial semantic property of arbitrary Turing machines, that doesn't bode well for similar questions about Turing-equivalent substrates.
2dfranke13y
Brains, like PCs, aren't actually Turing-equivalent: they only have finite storage. To actually be equivalent to a Turing machine, they'd need something equivalent to a Turing machine's infinite tape. There's nothing analogous to Rice's theorem or the halting theorem which holds for finite state machines. All those problems are decidable. Of course, decidable doesn't mean tractable.
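
To illustrate why halting, for instance, is decidable for finite-state systems, here is a minimal sketch (Python; the transition-table encoding and the input-free, deterministic setting are illustrative assumptions). A deterministic machine with finitely many states must either halt or revisit a state, and a revisited state means it cycles forever:

```python
# Halting is decidable for a deterministic finite-state system: run it and
# record every state visited. With only finitely many states, the machine
# must either reach a halting state or repeat a state -- and a repeated
# state in a deterministic, input-free machine means an eternal loop.

def halts(transition, start, halt_states):
    """transition: dict mapping each state to its unique successor."""
    seen = set()
    state = start
    while state not in halt_states:
        if state in seen:  # revisited a state: it will cycle forever
            return False
        seen.add(state)
        state = transition[state]
    return True

print(halts({"a": "b", "b": "c", "c": "a"}, "a", {"h"}))  # False: a->b->c->a->...
print(halts({"a": "b", "b": "h"}, "a", {"h"}))            # True: reaches h
```

For a machine with storage, "state" here has to mean the entire configuration -- registers, memory contents, and so on -- and with finite storage that set is finite but astronomically large, which is exactly the sense in which decidable doesn't mean tractable.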
1gwern13y
It is true that you can run a finite state machine until it either terminates or starts looping (or runs past the Busy Beaver number for that length of tape); but while you may avoid Rice's theorem by pointing out that 'actually, brains are just FSMs', you replace it with another question: 'are these FSMs decidable within the resources available to us?' Given how fast the Busy Beaver function grows, the answer is almost surely no - there is no runnable algorithm. This leads to a dilemma: either there are insufficient resources (per above), or it's impossible in principle (if there are unbounded resources, there are likely unbounded brains, and Rice's theorem applies again). (I know you understand this, because you pointed out 'Of course, decidable doesn't mean tractable' - but it's not obvious to a lot of people and is worth noting.)
1dfranke13y
This is just a pedantic technical correction, since we agree on all the practical implications, but nothing involving FSMs grows nearly as fast as Busy Beaver. The relevant complexity class for the hardest problems concerning FSMs, such as determining whether two regular expressions (with an exponentiation operator) represent the same language, is the class of EXPSPACE-complete problems. This is as opposed to R for decidable problems, and RE and co-RE for semidecidable problems like the halting problem. Those classes are way, WAY bigger than EXPSPACE.
-3FAWS13y
Yes. Potential, easily accessible conceptspace, not necessarily actually used conceptspace. Even granting that the brain uses some concepts without corresponding discrete anatomy, I don't see how they can serve as a replacement in your argument when we can't identify them.
-1dfranke13y
The only role that this example-of-an-idea is playing in my argument is as an analogy, to illustrate what I mean when I assert that qualia physically exist in the brain without there being any such thing as a "qualia cell". You clearly already understand this concept, so is my particular choice of analogy so terribly important that it's necessary to nitpick over it?
-3FAWS13y
The very same uncertainty would also apply to qualia (assuming that's even a meaningful concept), only worse, because we understand them even less. If we can't answer the question of whether a particular concept is embedded in discrete anatomy, how could we possibly answer that question for qualia, when we can't even verify their existence in the first place?
0Sideways13y
You haven't excluded a computational explanation of qualia by saying this. You haven't even argued against it! Computations are physical phenomena that have meaningful consequences. "Mental phenomena are a physical effect caused by the operation of a brain." "The image on my computer monitor is a physical effect caused by the operation of the computer." I'm starting to think you're confused as a result of using language in a way that allows you to claim computations "don't exist," while qualia do. As to your linked comment: ISTM that qualia are what an experience feels like from the inside. Maybe it's just me, but qualia don't seem especially difficult to explain or understand. I don't think qualia would even be regarded as worth talking about, except that confused dualists try to use them against materialism.
0AstroCJ13y
If I have in front of me four apples that appear to me to be identical, but a specific two of them are consistently referred to as oranges by sources I normally trust, they are not computationally identical. If everyone perceived them as apples, I doubt I would be seen as ill.
0dfranke13y
I did a better job of phrasing my question in the edit I made to my original post than I did in my reply to Sideways that you responded to. Are you able to rephrase your response so that it answers the better version of the question? I can't figure out how to do so.
1AstroCJ13y
Ok, I'll give a longer response a go. You seem to me to be fundamentally confused about the separation between the (at minimum) two levels of reality being proposed. We have a simulation, and we have a real world.

If you affect things in the simulation, such as replacing Venus with a planet twice the mass of Venus, then they are not the same; the gravitational field will be different, and the simulation will follow a path different from the simulation with the original Venus. These two options are not "computationally the same".

If, on the other hand, in the real world you replace your old, badly programmed Venus Simulation Chip 2000 with the new, shiny Venus Simulation Chip XD500, which does precisely the same thing as the old chip but in fewer steps -- so we in the real world have to sit around waiting for fewer processor cycles to end -- then the simulation will follow the same path as it would have done before. Observers in the sim won't know which Venus Chip we're running, and they won't know how many processor cycles it's taking to simulate it. These two situations are "computationally the same".

If, in the simulation world, you replaced half of my brain with an apple, then I would be dead. If you replaced half of my brain with a computer that mimicked perfectly my old meat brain, I would be fine. If we're in the computation world, then we should point out that, again, the gravitational field of my brain-computer will likely be different from the gravitational field of my meat brain, and so I would label these as "not computationally the same" for clarity. If we are interested in my particular experiences of the world, given that I can't detect gravitational fields very well, then I would label them as "computationally the same" if I am substrate-independent, and "computationally different" if not.

I grew up in this universe, and my consciousness is embedded in a complex set of systems, my human brain, which is designed to make things make sense at any co
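
A compact sketch of the two cases just distinguished (Python; the "chips", the one-dimensional setup, and the constants are hypothetical, purely for illustration): two implementations of one update rule yield bit-for-bit identical trajectories, while changing the simulated mass changes the simulated world itself:

```python
# Two "Venus chips": different implementations of one update rule for a test
# particle falling toward a point mass along a line.

G = 6.674e-11
M_VENUS = 4.87e24  # kg

def chip_2000(x, v, dt, M):
    # old chip: computes the acceleration in redundant steps
    r2 = x * x
    f_per_kg = G * M / r2
    a = -f_per_kg
    return x + v * dt, v + a * dt

def chip_xd500(x, v, dt, M):
    # new chip: same arithmetic in fewer steps -- bit-for-bit identical here
    a = -G * M / (x * x)
    return x + v * dt, v + a * dt

sa = sb = (1.0e10, 0.0)
for _ in range(100):
    sa = chip_2000(*sa, dt=10.0, M=M_VENUS)
    sb = chip_xd500(*sb, dt=10.0, M=M_VENUS)
assert sa == sb  # "computationally the same": nothing inside can tell them apart

sc = (1.0e10, 0.0)
for _ in range(100):
    sc = chip_xd500(*sc, dt=10.0, M=2 * M_VENUS)
print(sa != sc)  # True: doubling the simulated mass changes the simulated world
```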

Maybe I've missed something in your original article or your comments, but I don't understand why you think a person in a perfect physics simulation of the universe would feel differently enough about the qualia he or she experiences to notice a difference. Qualia are probably a physical phenomenon, yes -- but if that physical phenomenon is simulated in exact detail, how can a simulated person tell the difference? Feelings about qualia are themselves qualia, and those qualia are also simulated by the physics simulator. Imagine for a moment that some super... (read more)

Qualia or not, it seems a straightforward consequence of Tegmark's mathematical universe hypothesis that we derive significant proportions of our measure from simulators.

that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed -- what is your justification for believing this?

I think the quantum physics sequence basically argues this in a really roundabout way.

http://lesswrong.com/lw/qx/timeless_identity/

The answer is that I know my qualia are right because they make sense. Qualia are not pure "outputs": they feed back on the rest of the world. If I step outside on a scorching summer day, then I feel hot, and this unpleasant quale causes me to go back inside, and I am able to understand and articulate this cause and effect. If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don'

... (read more)

reasoning about a physical phenomenon is not the same as causing a physical phenomenon. You cannot create new territory by sketching a map of it, no matter how much detail you include in your map.

But reasoning about reasoning does cause reasoning.

0dfranke13y
It's the first "reasoning", not the second, that's causing the third. Reasoning about puppies causes reasoning, not puppies.
-1James_Miller13y
Is it possible for a simulator that doesn't physically incorporate a human brain to reason just as we do?
0dfranke13y
Yes.