Three consistent positions for computationalists

5 Post author: dfranke 14 April 2011 01:15PM

Yesterday, as a followup to We are not living in a simulation, I posted Eight questions for computationalists in order to obtain a better idea of what exactly my computationalist critics were arguing.  These were the questions I asked:

  1. As it is used in the sentence "consciousness is really just computation", is computation:
    a) Something that an abstract machine does, as in "No oracle Turing machine can decide its own halting problem"?
    b) Something that a concrete machine does, as in "My calculator computed 2+2"?
    c) Or, is this distinction nonsensical or irrelevant?
  2. If you answered "a" or "c" to question 1: is there any particular model, or particular class of models, of computation, such as Turing machines, register machines, lambda calculus, etc., that needs to be used in order to explain what makes us conscious? Or, is any Turing-equivalent model equally valid?
  3. If you answered "b" or "c" to question 1: unpack what "the machine computed 2+2" means. What is that saying about the physical state of the machine before, during, and after the computation?
  4. Are you able to make any sense of the concept of "computing red"? If so, what does this mean?
  5. As far as consciousness goes, what matters in a computation: functions, or algorithms? That is, does any computation that gives the same outputs for the same inputs feel the same from the inside (this is the "functions" answer), or do the intermediate steps matter (this is the "algorithms" answer)?
  6. Would an axiomatization (as opposed to a complete exposition of the implications of these axioms) of a Theory of Everything that can explain consciousness include definitions of any computational devices, such as "and gate"?
  7. Would an axiomatization of a Theory of Everything that can explain consciousness mention qualia?
  8. Are all computations in some sense conscious, or only certain kinds?
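The distinction drawn in question 5 can be made concrete with a toy example (the particular function here is arbitrary and purely illustrative): two procedures that compute the same function via very different intermediate steps.

```python
# Question 5's functions-vs-algorithms distinction, in miniature: both
# implementations below compute the same function (identical output for
# every input), but via very different intermediate steps. The "functions"
# answer says only the input/output behavior could matter to consciousness;
# the "algorithms" answer says the internal steps could matter too.

def sum_iterative(n):
    # walks through every intermediate partial sum
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    # jumps straight to the answer in one arithmetic step
    return n * (n + 1) // 2

# Extensionally identical, intensionally different:
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
```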

I got some interesting answers to these questions, and from them I can extract three distinct positions that seem consistent to me.

Consistent Position #1: Qualia skepticism

Perplexed asserted this position in no uncertain terms.  Here's my unpacking of it:

"Qualia do not exist. The things that you're confused about and are mistaking for qualia can be made clear to you using an argument phrased in terms of computation.  When you talk about consciousness, I think I can understand your meaning, but you aren't referring to anything fundamental or particularly well defined: it's an unnatural category."

The internal logic of the qualia skeptic's position makes sense to me, and I can't really respond to it other than by expressing personal incredulity. To me, the empirical evidence in support of the existence of qualia is so clear and so immediate that I can't figure out what you're not seeing so that I can point to it.  However, I shouldn't need to bring you to your senses (literally!) on this in order to convince you to reject Bostrom's simulation argument, albeit on grounds completely different from any I've argued so far.  If you don't buy that there's anything fundamental behind consciousness, then you also shouldn't buy Bostrom's anthropic reasoning in which he conjures up the reference class of "observers with human-type experiences"; elsewhere he refers to "conscious experience" and "subjective experience" without implying that he means anything more specific. That's taking an unnatural category and invoking it magically. In the statement that we are something selected with uniform probability from that group, how do you make sense of "are"?

Consistent Position #2: Computation is implicit in physics

This position is my best attempt at a synthesis of what TheOtherDave, lessdazed, and prase are getting at. It's compatible with position #1, but neither one entails the other.

To understand this position, it is helpful, but not necessary, to define the laws of physics in terms of something like a cellular automaton. Each application of the automaton's update rule can be understood as a primitive operation in a computation. When you apply the update rule repeatedly to neighboring cells, you build up a more complex computation. So, "consciousness is just computation" is equivalent in meaning, essentially, to "consciousness is just physics".
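The picture above can be sketched in a few lines. Here an elementary cellular automaton (Rule 110, which happens to be Turing-complete) plays the role of the toy "physics": each global application of the update rule is one primitive computational step, and repeated applications compose into larger computations.

```python
# "Computation is implicit in physics", in miniature: each tick of the
# automaton is a primitive operation, and a run of ticks over neighboring
# cells is a larger computation built from those primitives. Rule 110 is
# used here only as a convenient toy physics.
RULE_110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
            (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(cells):
    """One global application of the update rule (one 'tick' of physics)."""
    n = len(cells)
    return [RULE_110[(cells[(i-1) % n], cells[i], cells[(i+1) % n])]
            for i in range(n)]

state = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):          # three primitive steps of the "computation"
    state = step(state)
```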

This position more-or-less necessitates answering "algorithms" to question #5, or if not that then at least something similar to RobinZ's answer. If you say "functions" then you at least need to explain how to reify the concepts of "input" and "output". You can pull this off by saying that the update rules are the functions, the inputs are the state before the rule application, and the outputs are the state afterward. Any other answer probably means you're taking something closer to, or identical with, position #3, which I'll address next. This comment by peterdjones and his follow-ups to it provide a (Searlesque) intuition pump showing other reasons why a "functions" reply is problematic.

I have no objection to this position. However, it does not imply substrate independence, and strongly suggests its negation. If your algorithmic primitives are defined at the level of individual update-rule applications, then any change whatsoever to an object's physical structure is a change to the algorithm that it embodies. If you accept position #2 while rejecting position #1, then you may actually be making the same argument that I am, merely in different vocabulary.

Consistent Position #3: Computation is reified by physics

I was both shocked and pleased to see zaph's answer to question #6, because it bites a bullet that I never believed anyone would bite: that there is actually something fundamental in the laws of physics which defines and reifies the concept of computation in a substrate-independent fashion. I can't find any inconsistency in this, but I think we have good reason to consider it extremely implausible. In the language of physics which is familiar to us and has served us well — the language whose vocabulary consists of things like "particle" and "force" and "Hilbert space" — consider the Kolmogorov complexity of defining an equivalence relation which tells us that an AND gate implemented in a MOSFET is equivalent to an AND gate implemented in a neuron is equivalent to an AND gate implemented in desert rocks, but that none of these is equivalent to an OR gate implemented in any of those media. That complexity is enormous, and therefore Solomonoff induction tells us that we should assign vanishingly low probability to such a hypothesis.
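To see what such an equivalence relation has to do, here is a deliberately tiny sketch (all names and the "neuron" model are illustrative assumptions, not anyone's actual proposal): two very different "substrates" realize the same AND behavior, while an OR unit does not. Position #3 needs physics itself to define this behavioral equivalence over all possible substrates, not just the two toy ones below.

```python
# A toy instance of the equivalence relation position #3 needs as a
# physical primitive: a lookup table (standing in for a MOSFET circuit)
# and a crude threshold unit (standing in for a neuron) realize the same
# AND behavior, while a threshold OR unit does not. The Kolmogorov-
# complexity objection is that a law of physics would have to define this
# behavioral equivalence over *every* possible substrate.
AND_TABLE = {(0,0): 0, (0,1): 0, (1,0): 0, (1,1): 1}

def and_lookup(a, b):
    return AND_TABLE[(a, b)]

def threshold_unit(weights, bias):
    # crude neuron model: fire iff the weighted input exceeds the bias
    return lambda a, b: int(weights[0]*a + weights[1]*b > bias)

and_neuron = threshold_unit((1, 1), 1.5)   # fires only on (1,1): AND
or_neuron  = threshold_unit((1, 1), 0.5)   # fires on any 1: OR

inputs = [(0,0), (0,1), (1,0), (1,1)]
# Behaviorally equivalent across substrates:
assert all(and_lookup(a, b) == and_neuron(a, b) for a, b in inputs)
# ...but not equivalent to an OR gate in either substrate:
assert any(and_lookup(a, b) != or_neuron(a, b) for a, b in inputs)
```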


I hope that I've fairly represented the views of at least a majority of computationalists on LW. If you think there's another position available, or if you're one of the people I've called out by name and you think I've pigeonholed you incorrectly, please explain yourself.

Comments (176)

Comment author: cousin_it 14 April 2011 01:52:23PM *  13 points [-]

Hmm. My comment is the most highly upvoted response to your survey at the moment, and the second highest upvoted one is by XiXiDu expressing basically the same position as mine, but I don't see it on your list. Here's a summary: we don't yet have enough insight to choose any specific answer or even to know if we're asking the right questions. We're facing an unsolved scientific problem. The wisdom of crowds doesn't apply here. If no one has yet discovered Maxwell's equations or Watson and Crick's double helix, no amount of surveying can lead you to the right answer. You have to do, like, actual math and physics and biology and stuff.

Comment author: Perplexed 14 April 2011 02:15:41PM 3 points [-]

We're facing an unsolved scientific problem. You can't solve it by survey.

Interesting, particularly in light of the recent "What is analytic philosophy, that we should be mindful of it?" discussions. It almost seems that dfranke, taking the philosopher's role, should respond: "We are facing an unsolved philosophical problem. You can't contribute to the solution without taking a position."

Comment author: dfranke 14 April 2011 02:50:24PM 0 points [-]

I agree, and furthermore this is a true statement regardless of whether you classify the problem as philosophical or scientific. You can't do science without picking some hypotheses to test.

Comment author: JoshuaZ 14 April 2011 02:56:40PM 4 points [-]

I agree, and furthermore this is a true statement regardless of whether you classify the problem as philosophical or scientific. You can't do science without picking some hypotheses to test.

That's not strictly speaking true. First of all, this doesn't quite match what Perplexed said since Perplexed was talking about taking a position. I can decide to test a hypothesis without taking a position on it. Second of all, a lot of good science is just "let's see what happens if I do this." A lot of early chemistry was just sticking together various substances and seeing what happened. Similarly, a lot of the early work with electricity was just systematically seeing what could and could not conduct. It was only later that patterns any more complicated than "metals conduct" developed. (Priestley's The History and Present State of Electricity gives a detailed account of the early research into electricity by someone who was deeply involved in it. The archaic language is sometimes difficult to read but overall the book is surprisingly readable and interesting for something that he wrote in the mid 1700s.)

Comment author: dfranke 14 April 2011 03:04:56PM 0 points [-]

Those early experimenters with electricity were still taking a position whether they knew it or not: namely, that "will this conduct?" is a productive question to ask -- that if p is the subjective probability that it will, then p*(1-p) is a sufficiently large value that the experiment is worth their time.

Comment author: JoshuaZ 14 April 2011 03:11:24PM 2 points [-]

Ok. Yes, this connects to the theory-laden nature of observation and experimentation. But that's distinct from having any substantial hypotheses about the nature of electricity which would be closer to the sort of thing that would be analogous to what Perplexed was talking about. (It is possible that I'm misinterpreting the statement's intention.)

Comment author: Perplexed 14 April 2011 03:26:19PM 3 points [-]

Perplexed intended to contrast science - where it is not respectable to take a position in advance of evidence (pace Karl P.) - with philosophy - where it is the taking and defending of positions which drives the whole process. Last philosopher left standing wins. You can't win if you don't take a stand.

Comment author: JoshuaZ 14 April 2011 03:35:35PM *  2 points [-]

Perplexed intended to contrast science - where it is not respectable to take a position in advance of evidence (pace Karl P.) - with philosophy - where it is the taking and defending of positions which drives the whole process

Thanks for clarifying. Is that true though? If so, I'd suggest that that might be a problem about how we do philosophy more than anything else. If I don't have evidence or good arguments either way on a philosophical question I shouldn't take a stand on it. I should just acknowledge the weak arguments for or against the relevant positions.

Comment author: dfranke 14 April 2011 03:40:24PM *  1 point [-]

There are no specifically philosophical truths, only specifically philosophical questions. Philosophy is the precursor to science; its job is to help us state our hypotheses clearly enough that we can test them scientifically. ETA: For example, if you want to determine how many angels can dance on the head of a pin, it's philosophy's job to either clarify or reject as nonsensical the concept of an angel, and then in the former case to hand off to science the problem of tracking down some angels to participate in a pin-dancing study.

Comment author: jimrandomh 14 April 2011 02:21:37PM 4 points [-]

I agree with this, but would like to add that it's actually one step worse: most of the interesting experiments one can do with regard to consciousness have results that, for various reasons, cannot be transferred between observers. The quantum immortality hypothesis is one example - if someone else does an experiment, you don't get to see the result. But the problem is more general; you also don't get to see the results of experiments that other entities perform to test observer symmetry, or the subjective results of self-copying and merging. So, with no experimental data, the only information we can have is prior probabilities, which are not very informative. Perhaps after a dozen one-way trips through cryopreservation, nested simulations, afterlives, etc., I'll have an answer to how subjective experience works; but no one will ever find an answer in this universe, or convey an answer back to it, so the question has little point here.

Comment author: cousin_it 14 April 2011 02:30:17PM *  11 points [-]

I don't like statements like "we can never know" this or that. For example, you can convince everyone that quantum immortality works by killing them along with yourself. (This shouldn't pose any risk if you've already convinced yourself :-) Paul Almond has proposed that this can solve the Fermi paradox: we don't see alien civilizations because they have learned to solve complex computational problems by civilization-level quantum suicide, and thus disappeared from our view.

It seems probable to me that if we think a little harder, we can figure out a way to investigate observer-dependent statements scientifically.

Comment author: dfranke 14 April 2011 02:52:51PM 3 points [-]

I didn't list this position because it's out of scope for the topic I'm addressing. I'm not trying to address every position on the simulation hypothesis; I'm trying to address computationalist positions. If you think we are completely in the dark on the matter, you can't be endorsing computationalists, who claim to know something.

Comment author: Psychohistorian 14 April 2011 03:36:25PM 19 points [-]

I believe I have found the perfect, modern theory of consciousness, completely supported by every study yet done on the matter!

"We really don't know what's going on. More research is needed. But there's probably no magic involved."

Comment author: prase 14 April 2011 07:38:44PM *  4 points [-]

Maybe I should clarify a bit. I have two intuitions about the relation of consciousness and calculation. The first is that abstract existence of a computation, in the mathematical sense (where "X exists" basically means that the definition of X is free of contradictions), doesn't guarantee consciousness. The computations should be physically implemented somewhere, by which I mean there should be a physical structure isomorphic to the abstract process of computation.

The second intuition is that the specific qualities of consciousness should be invariant with respect to some transformations of the physical implementation. One can get a quale of hearing a high-pitched sound by actually hearing it, or because of dozens of physically different causes, many of which lie inside the brain. And because not all details of the physical structure are important, there must be some property which the systems with indistinguishable qualia share, and there is a non-negligible chance that this property is computational isomorphism between these systems. So, I don't express the same objection as you in other language, since I think there is a non-negligible probability that the simulation could be isomorphic to the real world to a degree which enables the same qualia.

(Edit: even if the qualia of the simulated agents are different from qualia of the real agents, how does this constitute an argument against us being in a simulation? If so, we know our simulated qualia and not the real ones and can't compare.)

The most confusing question to me is how the boundaries between different conscious systems are set, i.e. why aren't there two or more consciousnesses in one brain or one consciousness in more brains. The question is not only confusing, but probably confused, but I don't see a resolution. But this is off topic here anyway.

I would not bet much money on any of the above positions.

Comment author: shokwave 14 April 2011 05:29:51PM 5 points [-]

To me, the empirical evidence in support of the existence of qualia is so clear and so immediate that I can't figure out what you're not seeing so that I can point to it.

I ... don't think there's much empirical support for the actual existence of the painfulness of pain. Sure, humans experience pain in very similar ways, and you can lump all those experiences into the category pain, and talk about what characteristics are present in all the category members, but those common characteristics aren't a physical object somewhere called painfulness.

As for how this bears on Bostrom's simulation argument: I'm not properly familiar with it, but how much of its force does it lose by not being able to appeal to consciousness-based reference classes and the like? I can't see how that would make simulations impossible; the nearest I can guess is that it harms his conclusion that we are probably in a simulation?

That can be repaired in other ways; given that time travels in one direction for us, our experiences have one chance to be in the real universe, and n chances to be in simulated universes - where n is the total computational power that ever will be directed at simulating historical moments, over the computational cost of simulating a historical moment multiplied by the number of moments at least as interesting as this one. Even if you assign a low probability to the future containing computational power (ie we nuke ourselves before Matrioshka shells or Jupiter brains are completed or something), that low chance times n is still large relative to 1. So our prior for being in a simulation should still be high.

Comment author: Peterdjones 15 April 2011 02:56:09PM 3 points [-]

I can't see how it is remotely relevant that painfulness isn't a physical object. Electron spin isn't either.

Comment author: dfranke 14 April 2011 06:31:11PM *  0 points [-]

As for how this bears on Bostrom's simulation argument: I'm not properly familiar with it, but how much of its force does it lose by not being able to appeal to consciousness-based reference classes and the like? I can't see how that would make simulations impossible; the nearest I can guess is that it harms his conclusion that we are probably in a simulation?

Right. All the probabilistic reasoning breaks down, and if your re-explanation patches things at all I don't understand how. Without reference to consciousness I don't know how to make sense of the "our" in "our experiences". Who is the observer who is sampling himself out of a pool of identical copies?

Anthropics is confusing enough to me that it's possible that I'm making an argument whose conclusion doesn't depend on its hypothesis, and that the argument I should actually be making is that this part of Bostrom's reasoning is nonsense regardless of whether you believe in qualia or not.

Comment author: pjeby 14 April 2011 07:35:25PM *  3 points [-]

You've missed a major position: that the entire idea of "substrate independence" is a red herring. Detecting the similarity of two patterns is something that happens in your brain, not something that's part of reality.

This whole thing, AFAICT, is an attempt to have an argument war, rather than an attempt to understand/find truth. It is possible that no position on this subject makes any sense whatsoever, for example.

Or, to put it another way, failure to offer a coherent refutation of an incoherent hypothesis doesn't represent evidence for the incoherent hypothesis.

Comment author: [deleted] 15 April 2011 07:26:12AM 1 point [-]

"Or, to put it another way, failure to offer a coherent refutation of an incoherent hypothesis doesn't represent evidence for the incoherent hypothesis."

Although perhaps a tangent, the point is important: the above is quite wrong. If A believes h is incoherent, but B is unable to demonstrate h's incoherence, A should regard B's inability to coherently explain h's incoherence as evidence that h is not, in fact, incoherent. (I think that's what you mean to deny.) This is because (at least in ordinary circumstances) A should regard h as more probably true if B can explain why; less probable if B can't.

The denial that B's failure to coherently explain h's incoherence increases the probability that h is coherent expresses the common failure to regard others' beliefs as evidence of what's true. This fallacy is why Aumann's agreement theorem seems so counter-intuitive to many people. (See my fishbowl analogy at http://tinyurl.com/3lxp2eh)

Comment author: pjeby 15 April 2011 05:46:17PM *  1 point [-]

If A believes h is incoherent, but B is unable to demonstrate h's incoherence, A should regard B's inability to coherently explain h's incoherence as evidence that h is not, in fact, incoherent.

Er, 'A' believes 'h' is coherent in this case.

I have realized, though, that my statement was profoundly unclear, even after the edit.

Let me attempt to rephrase yet again, more precisely:

"If a bunch of people on LW tell you your hypothesis is incoherent and you need to dissolve your question, this should not be considered evidence that your hypothesis is sound, merely because nobody directly refuted your incoherence, in terms currently comprehensible by you."

Or, by analogy, if you go to a biology forum and ask about missing links or why there are still apes, and then when you get explanations that dissolve the wrong questions involved, you say, "aha, but you still haven't answered my [wrong] question, so therefore I'm right", this is not sound argument.

The denial that B's failure to coherently explain h's incoherence increases the probability that h is coherent expresses the common failure to regard others' beliefs as evidence of what's true.

In this case, though, the incoherence has actually been quite clearly counterargued by many, and is already thoroughly refuted by the sequences.

Comment author: Jonathan_Graehl 14 April 2011 10:29:31PM *  1 point [-]

Or, to put it another way, failure to offer a coherent refutation of an incoherent hypothesis doesn't represent evidence for incoherence hypothesis.

Could you edit this? I can't decipher it.

[eta: Cyan and Pavitra have come up with nice obviously-true statements that are textually similar to the original bungled sentence and similar in meaning, but I can't be sure of what you meant.]

Comment author: pjeby 15 April 2011 01:40:42AM 3 points [-]

Could you edit this? I can't decipher it

Sorry, that was a messed up edit - I was at first writing "doesn't represent evidence for incoherence" and then messed up the edit to "doesn't represent evidence for the incoherent hypothesis".

More colloquially, if somebody can't coherently answer your incoherent question, it doesn't mean that the viewpoint which created the question is therefore sensible or true.

Comment author: Cyan 14 April 2011 11:13:48PM 2 points [-]

How about, "If I offer a not-even-wrong refutation of your not-even-wrong hypothesis, you can't take the not-even-wrongness of the refutation as evidence for the hypothesis."

Comment author: Pavitra 14 April 2011 11:01:14PM 1 point [-]

I read it to mean that once one has demonstrated a hypothesis to be incoherent, one does not then also need to demonstrate it to be false.

Comment author: dfranke 14 April 2011 07:50:52PM 1 point [-]

Detecting the similarity of two patterns is something that happens in your brain, not something that's part of reality.

If I'm correctly understanding what you mean by "part of reality" here, then I agree. This kind of "similarity" is another unnatural category. When I made reference in my original post to the level of granularity "sufficient in order to model all the essential features of human consciousness", I didn't mean this as a binary proposition; just for it to be sufficient that if while you slept somebody made changes to your brain at any smaller level, you wouldn't wake up thinking "I feel weird".

Comment author: pjeby 15 April 2011 01:49:27AM 2 points [-]

if while you slept somebody made changes to your brain at any smaller level, you wouldn't wake up thinking "I feel weird".

I have no reason to assume that you couldn't replace me entirely, piece by piece. After all, I have different cells now than I did previously, and will have different cells later, and all the while still perceive myself the same.

The only thing weird here, is the idea that I would somehow notice. I mean, if I could notice, it wouldn't be a very good replacement, would it?

(Actually, given my experience with mind hacking, my observation is that it's very difficult to notice certain background characteristics of one's thought processes, such that even if a machine translation did introduce a systematic distortion, it seems unlikely to me that anyone would notice it in themselves, at least easily or at first!)

Comment author: Peterdjones 15 April 2011 02:48:20PM 0 points [-]

One can only detect, as opposed to invent, what is already there. Being a NAND gate is not a physical property that is already there, nonetheless not everything is a NAND gate. There are constraints on what substrate can do what, but they are not fully determinate facts, for all that they are not imaginary.

Comment author: pjeby 15 April 2011 05:54:35PM *  1 point [-]

One can only detect, as opposed to invent, what is already there. Being a NAND gate is not a physical property that is already there,

Actually, "NAND Gate" is a term that we use to label something that is there -- a tag we assign to patterns in the physical world that follow similar patterns of behavior to a representation we hold in our minds.

This is a bit like trees falling in the forest. If there is nobody there to label it a NAND gate, then it will still do the exact same thing... but there's no "NAND gate" there.

And, when the person does show up and label it, there's still no "NAND gate" there... there's just a label in that person's mind, saying, "that thing there is a NAND gate".

Not understanding this basic concept (that reality does not contain any labels, and has no "is-ness") leads to all sorts of confusion.

(Sadly, this kind of confusion is also the natural human state.)

Comment author: Peterdjones 15 April 2011 07:04:54PM 2 points [-]

If it's doing what a NAND gate does, it's a NAND gate. Reality does not come pre-labelled, but things also do not spring into existence just because someone has labelled them.

Comment author: pjeby 15 April 2011 09:49:01PM 0 points [-]

If it's doing what a NAND gate does, it's a NAND gate.

Only if you think that "X is Y" means something other than, "My brain has associated the label Y with the cluster of sensory experiences denoted by X".

Comment author: Peterdjones 16 April 2011 06:24:44PM 1 point [-]

I do: I think it means "X is a mind-independent object that would and should be labelled Y by an onlooker speaking my language". I believe there are stars and planets no one has ever seen, or had a chance to label as such. Don't you?

Comment author: pjeby 16 April 2011 09:19:19PM 1 point [-]

I think it means "X is a mind-independent object that would and should be labelled Y by an onlooker speaking my language"

I think you've missed the part where that is still a label in your mind, being attached to a cluster of sensory experiences.

I believe there are stars and planets no one has ever seen, or had a chance to label as such, Don't you?

In such cases, the sensory experience clusters you're labeling are memories associated with the labels "star" and "planet".

However, this has little to do with an X-is-Y identity. In order to say "X is Y", there has to be an X and a Y, and you are speaking only here of the hypothesized existence of various X's that you would then label Y.

In any event, this and this are relevant here, in case you've missed them.

Comment author: lessdazed 15 April 2011 03:09:48AM 1 point [-]

I have no objection to this position. However, it does not imply substrate independence, and strongly suggests its negation.

I disagree, and think that in any case substrate independence is of two types. The directions are: replacing basic units with complex units and replacing complex units with other complex units. Replacing basic units with complex units that do the same thing the basic unit did preserves equations that treated the basic unit as basic. I will attempt to explain.

Consciousness is presumably not a unique property of one specific system. If you've been conscious over the course of reading this sentence, multiple physical patterns have been conscious. I am quite different than I was ten years ago and am also quite different than my grandmother and someone living in an uncontacted tribe, also conscious beings. If all humans are conscious, no line between consciousness and non-consciousness will be found within the range of human brain variation.

Whole brains, complex things, can be replaced with giant lookup tables, different complex things, and not have consciousness. The output of "Yes" as an answer to a specific question may be identical between the systems, but the internal computations are different, so it is logically possible that the new computations are not within the wide realm of computations that produce consciousness.

Above I was referring to replacing complex biological units with complex mechanical units, in which "substrate independence" will depend on the specifics of the replacement done. However, all replacement of a unit that is basic with a more complicated unit that will give the same output for each input will leave the conscious system intact as the old equations will not be altered.

For example: suppose that a mechanical system of gears and pulleys produces knives (or consciousness) and clanks. It is possible to replace a gear with a sub-system consisting of: a set of range finders, a computer, mechanical hands, and speakers. The sub-system can measure what surrounding gears are doing and use the hands to spin gears as if the missing gear were in place, and use the speakers to make noises as if the old gear was in place.

Everything produced by the old system will also be produced by the new system, though the new system may also produce something else, such as GTA on the computer. This is because we replaced a basic unit with a more complicated system that produces additional things.

Similarly, replacing biological cells with analogously functional mechanical cells should certainly preserve consciousness. Probably, but not by logical necessity, cells are not needed to produce consciousness as a computed output.

tl;dr: computationalism implies substrate independence insofar as anything upon which computations act may be replaced by anything of any form, with the only requirement being to give the same outputs as the old unit would have. Anything a computation uses by mapping it first may be replaced by anything that would be identically mapped.
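The gear replacement described above can be sketched as follows (all names here are illustrative stand-ins, not anyone's actual model): a "basic unit" inside a larger system is swapped for a more complicated sub-system with identical input/output behavior, and the surrounding system cannot tell the difference, because it only ever sees the outputs.

```python
# lessdazed's gear replacement, in miniature: a basic unit is replaced by
# a more complicated sub-system that reproduces its input/output behavior
# exactly (and does extra things internally). Every equation that treated
# the unit as basic still holds for the whole machine.
def basic_gear(turns):
    return turns * 2          # the original simple component

class ReplacementSubsystem:
    """Range-finders + hands + speakers, stood in for by a caching wrapper."""
    def __init__(self):
        self.cache = {}       # extra internal structure the original lacked
    def __call__(self, turns):
        if turns not in self.cache:
            self.cache[turns] = turns * 2   # reproduce the gear's behavior
        return self.cache[turns]

def whole_machine(component, turns):
    # the surrounding system: only the component's outputs matter to it
    return component(turns) + 1

replacement = ReplacementSubsystem()
# The whole machine's behavior is preserved under the replacement:
assert all(whole_machine(basic_gear, t) == whole_machine(replacement, t)
           for t in range(20))
```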

Comment author: torekp 16 April 2011 12:04:39AM 1 point [-]

Agreed that "replacing biological cells with analogously functional mechanical cells should certainly preserve consciousness," but this is a very limited sort of substrate "independence". This approach makes the difficulty of producing an AI with consciousness-as-we-know-it much more severe. Evolution finds local optima, while intelligent design is more flexible, so I expect AI to take off much faster and more successfully, at some point, in a different direction, rather than brain emulation.

Like dfranke, I favor option #2, but like peterdjones, I don't think it fits under "computationalism".

Comment author: dfranke 15 April 2011 03:38:47AM 0 points [-]

This sounds an awful lot like "making the same argument that I am, merely in different vocabulary". You say po-tay-to, I say po-tah-to, you say "computations", I say "physical phenomena". Take the example of the spark-plug brain from my earlier post. If the computer-with-spark-plugs-attached is conscious but the computer alone is not, do you still consider this confirmation of substrate independence? If so, then I think you're using an even weaker definition of the term than I am. How about xkcd's desert? If you replace the guy moving the rocks around with a crudely-built robot moving the rocks in the same pattern, do you think it's plausible that anything in that system experiences human-like consciousness? If you say "no", then I don't know whether we're disagreeing on anything.

Comment author: lessdazed 15 April 2011 06:40:43AM 1 point [-]

making the same argument that I am, merely in different vocabulary

I don't necessarily understand your argument. Recall I don't understand one of your questions. I think you disagree with some of my answers to your questions, but you hinted that you don't think my answers are inconsistent. So I'm really not sure what's going on.

If the computer-with-spark-plugs-attached is conscious...do you still consider this confirmation of substrate independence?

Not every substance can perform every sub-part role in a consciousness producing computation, so there's a limit to "independence". Insofar as it means an entity comprised entirely of non-biological parts can be conscious, which is the usual point of contention, a conscious system made up of a normal computer plus mechanical parts obviously shows that, so I'm not sure what you mean.

To me, what is important is to establish that there's nothing magical about bio-goo needed for consciousness, and as far as exactly which possible computers are conscious, I don't know.

If you replace the guy moving the rocks around with a crudely-built robot moving the rocks in the same pattern, do you think it's plausible that anything in that system experiences human-like consciousness?

Plausible? What does that mean, exactly?

Comment author: Peterdjones 15 April 2011 02:31:59PM 1 point [-]

The substrate independence of computation (without regard to consciousness) is well known, and just means that more than one material system can implement a programme, not that any system can. If consciousness is more "fussy" about its substrate than a typical programme, then in a strict sense, computationalism is false. (Although AI, which is a broader claim, could still be true).

Comment author: dfranke 15 April 2011 12:51:55PM *  0 points [-]

Plausible? What does that mean, exactly?

What subjective probability would you assign to it?

Not every substance can perform every sub-part role in a consciousness producing computation, so there's a limit to "independence". Insofar as it means an entity comprised entirely of non-biological parts can be conscious, which is the usual point of contention, a conscious system made up of a normal computer plus mechanical parts obviously shows that, so I'm not sure what you mean.

I don't know what the "usual" point of contention is, but this isn't the one I'm taking a position in opposition to Bostrom on. Look again at my original post and how Bostrom defined substrate-independence and how I paraphrased it. Both Bostrom's definition and mine mean that xkcd's desert and certain Giant Look-Up Tables are conscious.

Comment author: TheOtherDave 14 April 2011 08:16:50PM 1 point [-]

I'll accept option #2 as close enough to my view.

Wrt necessitating an "algorithms" view for q5... maybe. My initial answer there was to observe confusion, either in myself or the question, precisely in the area you point out here: it's unclear how the labels "input" and "output" map to anything we're talking about. I don't reject your proposed mapping, but I don't find it especially compelling either. I'm not sure that those labels necessarily do mean anything, actually.

Wrt not implying substrate independence: sure, I agree in principle; it's not impossible that only protoplasmic substrates can implement consciousness. All I'm saying is that if that turns out to be true, it will be because certain kinds of computations can only be performed on protoplasmic machines.

Similarly, to say that heavier-than-air flight is a property of certain mechanical operations doesn't imply substrate-independence for flight; it might be true that those mechanical operations can only be performed by protoplasmic machines.

That said, that would be a surprising result in both cases. Once we built/discovered a heavier-than-air nonprotoplasmic flying machine, the idea that doing so was impossible was rightly discarded; I expect something similar to happen with nonprotoplasmic consciousnesses.

As for strongly implying the absence of substrate-independence: sure, in the strict sense you mean it here, that's true. Change the substrate and there will always be some difference, even if it turns out to be a difference you-the-observer could not conceivably care less about.

I suppose I could say my understanding of substrate-independence is implicitly a 2-place predicate: system S is substrate-independent with respect to observer O iff O considers some system S2 identical to S, where S is implemented on a different substrate than S2.

A 1-place version, I agree, is unlikely on my view (its negation is, as you say, strongly suggested). I would also say that time-independence (that is, the idea that my consciousness is precisely the same from one moment to the next) is equally unlikely. I would also say that neither of these things matters very much.

Comment author: bogus 17 April 2011 12:09:49AM *  1 point [-]

Wrt not implying substrate independence: sure, I agree in principle; it's not impossible that only protoplasmic substrates can implement consciousness. All I'm saying is that if that turns out to be true, it will be because certain kinds of computations can only be performed on protoplasmic machines.

Physicalists can reject substrate independence and accept the Church-Turing thesis, while still taking consciousness seriously. One can argue that consciousness in the physical world is implemented on protoplasm, and that this is the only kind of consciousness which is directly experienced. The fact that conscious beings can be simulated on a computer would be true but irrelevant.

Comment author: Peterdjones 18 April 2011 07:44:53PM 0 points [-]

Physicalists can't reject substrate independence and accept the Computational Theory of Mind, however.

Comment author: Peterdjones 16 April 2011 07:07:22PM *  0 points [-]

Wrt not implying substrate independence: sure, I agree in principle; it's not impossible that only protoplasmic substrates can implement consciousness. All I'm saying is that if that turns out to be true, it will be because certain kinds of computations can only be performed on protoplasmic machines.

That is false, since we can build Universal Turing Machines (up to a certain finite memory) out of non-protoplasm, and a UTM can compute anything.

I suppose I could say my understanding of substrate-independence is implicitly a 2-place predicate: system S is substrate-independent with respect to observer O iff O considers some system S2 identical to S, where S is implemented on a different substrate than S2.

An observer-relative notion of computation is problematic for a computational theory of consciousness, since an observer-relative notion of consciousness is problematic. Surely the point is that I know I am conscious, not that he thinks I am.

Comment author: wnoise 16 April 2011 09:12:25PM 0 points [-]

That is false, since we can build Universal Turing Machines (up to a certain finite memory) out of non-protoplasm, and a UTM can compute anything.

You have a proof of the Church-Turing thesis? You should write it up and become famous in the CS community!

Comment author: Peterdjones 18 April 2011 07:49:00PM *  -1 points [-]

The other guy needs a disproof of the CTT...an effective procedure that can only be computed in protoplasm.

Comment author: zaph 14 April 2011 05:36:11PM 1 point [-]

I guess the only quibble I would have, and I don't know that it really changes your critique much, is that I wrote that neurons would be some sort of gate equivalent. I wouldn't say that neurons have a simple gate model (that they're simply an AND or an XOR, for instance). But I do see them as being in some sense Boolean. Anyway, I would just try to clarify my fairly short answer to say that I believe that computation can always be broken down into smaller Boolean steps, and these steps could be rendered in many different media.

Computationality in any fashion needs to be reified by physics, doesn't it? Otherwise it wouldn't exist. Now, I would say it's an emergent feature; physics doesn't need to provide anything beyond what is provided for anything else to explain it. Maybe that's the point of contention?

Comment author: dfranke 14 April 2011 05:54:37PM *  0 points [-]

I'm not trying to hold you to any Platonic claim that there's any unique set of computational primitives that are more ontologically privileged than others. It's of course perfectly equivalent to say that it's NOR gates that are primitive, or that you should be using gates with three-state rather than two state inputs, or whatever. But whatever set of primitives you settle on, you need to settle on something, and I don't think there's any such something which invalidates my claim about K-complexity when expressed in formal language familiar to physics.
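The point that no one set of primitives is privileged can be illustrated concretely: NOR alone is functionally complete, so AND, OR, and NOT (and hence any Boolean circuit) can all be expressed in terms of it. A minimal sketch:

```python
# NOR as the sole primitive: NOT, OR, and AND built from it.
# This is the standard functional-completeness construction.

def NOR(a, b):
    return not (a or b)

def NOT(a):
    # NOR of a value with itself negates it.
    return NOR(a, a)

def OR(a, b):
    # Negating a NOR recovers OR.
    return NOT(NOR(a, b))

def AND(a, b):
    # De Morgan: a AND b == NOT(NOT a OR NOT b) == NOR(NOT a, NOT b).
    return NOR(NOT(a), NOT(b))

assert AND(True, True) and not AND(True, False)
```

Whichever primitive one picks, the same circuits are expressible; the choice changes the description, not what can be computed.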

Comment author: complexmeme 15 April 2011 06:32:06PM *  0 points [-]

the Kolmogorov complexity of a definition of an equivalence relation which tells us that an AND gate implemented in a MOSFET is equivalent to an AND gate implemented in a neuron is equivalent to an AND gate implemented in desert rocks

Isn't that only a problem for those who answer "functions" to question 5? Desert-rocks-AND-gate and MOSFET-AND-gate are functionally-equivalent by definition, but if you're not excluding side-effects it's obvious that they're not computationally equivalent.

Edit: zaph answered algorithms, so your counter-argument doesn't really target him well.

Comment author: dfranke 15 April 2011 06:40:50PM *  1 point [-]

They're computationally equivalent by hypothesis. The thesis of substrate independence is that as far as consciousness is concerned the side effects don't matter and that capturing the essential sameness of the "AND" computation is all that does. If you're having trouble understanding this, I can't blame you in the slightest, because it's that bizarre.

Comment author: complexmeme 16 May 2011 04:36:44PM 0 points [-]

(Didn't realize this site doesn't email reply notifications, thus the delayed response.)

What I'm saying is that someone who answers "algorithms" is clearly not taking that view of substrate-independence, but they could hypothesize that only some side-effects matter. A MOSFET-brain-simulation and a desert-rocks-brain-simulation could share computational properties beyond input-output, even though the side-effects are clearly not identical.

(Not saying that I endorse that hypothesis, just that it's not the same as the "side effects don't matter" version.)
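The "functions" versus "algorithms" distinction from question 5 is easy to exhibit: two procedures can agree on every input-output pair while passing through entirely different intermediate states. A toy sketch (the function names are illustrative only):

```python
# Same function, different algorithms: both compute the sum 1 + 2 + ... + n,
# but one jumps straight to the answer while the other visits n
# intermediate states along the way.

def sum_formula(n):
    """Closed form: one arithmetic step, no intermediate totals."""
    return n * (n + 1) // 2

def sum_loop(n):
    """Iterative: passes through partial sums the formula never computes."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

assert sum_formula(10) == sum_loop(10)
```

A "functions" answer to question 5 treats these as the same computation; an "algorithms" answer does not, since their intermediate steps (and side-effects) differ.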

Comment author: Protagoras 14 April 2011 07:29:35PM -1 points [-]

I wonder if I'm a qualia skeptic. I think that qualia are Humean impressions, the most "forceful and vivacious" contents of the mind. Dan Dennett has recently revived this view (without sufficiently crediting Hume, sadly); at one point he calls it the fame model of consciousness. What makes a thought conscious is that it does a lot; it has a very rich variety of interactions with other things going on in the mind.

This explains why there can be perception without consciousness; the much discussed (by philosophers) case of blindsight is an example where visual perception has a much more limited impact than usual, and so doesn't have enough force and vivacity (or fame or clout if you prefer Dennett's terminology, or whatever you want to call it) to feel conscious. And that's why something like the David Lewis "mad pain" case is probably possible; the range of different interactions a conscious experience has is sufficiently great that even something lacking one of the core functions of experiences of a certain type could still probably feel pretty much like that experience if it had enough of the right secondary connections.

I think I'm talking about qualia when I talk about these Hume/Dennett items. But I'm talking about things with certain kinds of functionally defined inputs and outputs, a certain kind of computations, in fact. Does this mean I am not talking about qualia as you mean them? If not, then I stand with perplexed; references to qualia should be committed to the flames, for they can contain nothing but sophistry and illusion.

Comment author: Peterdjones 14 April 2011 07:37:31PM 0 points [-]

Can you solve the "explaining colour to a blind man" problem with this proposal? I think not: vivid blue is just as famous and vivacious as vivid green, but that does not tell us what blue and green are...what their phenomenal feels are.

Comment author: Protagoras 14 April 2011 07:47:24PM -1 points [-]

It is a little rude of you not to wait for me to answer before insisting that I can't. And it wouldn't hurt to be clear about what the problem is anyway. Color is extremely complicated, and most of the associations that make color perception conscious are not themselves conscious, so I personally certainly couldn't explain color to a blind man. But there's no problem there. And if someone did know enough about color to explain all the associations that it has, well, having associations explained to you isn't normally enough for you to make the same associations in the same way yourself, so perhaps it couldn't enable the blind man to imagine the color. But it's hard to say, and anyway, I don't know why he'd need to be able to imagine it to know what it was. I can say that when I've read articles about how echolocation works, and what sorts of things it reveals or conceals, I've felt like I know a tiny bit more about what it's like to be a bat than I did before reading the articles.

Comment author: JoshuaZ 14 April 2011 08:01:19PM 1 point [-]

It is a little rude of you not to wait for me to answer before insisting that I can't.

I think you are interpreting Peter's comment in an overly negative fashion. I believe he simply means "it seems that this proposal won't solve the problem of explaining color to a blind person" or something close to that.

Comment author: Protagoras 14 April 2011 09:30:34PM 0 points [-]

I suppose you're right that I was a little snappy, but his response did seem to indicate that he wasn't really paying attention. Indeed, my response to him was too charitable; on rereading Peterdjones's comment, he seems to have been responding to some straw-man view on which I claimed it was vividness that made green green, while I responded as if he'd tried to address my actual view (that vividness makes green conscious, and other functional characteristics make it green).

Comment author: Peterdjones 15 April 2011 02:24:58PM *  0 points [-]

The claim that consciousness is fame in the brain, and the claim that qualia are incommunicable because of complexity are somewhat contradictory, because what is made famous in the brain can be subjectively quite simple, but remains incommunicable.

A visual field of pure blue, or a sustained note of C#, is not fundamentally easier to convey than some complex sensation. Whilst there may be complex subconscious processing and webs of association involved in the production of qualia, qualia can be simple as presented to consciousness. The way qualia seem is the way they are, since they are defined as seemings. And these apparently simple qualia are still incommunicable, so the problem of communicating qualia is not the problem of communicating complexity.

Something that is famous in the brain needs to have a compelling quality, and some qualia, such as pains, have that in abundance. However, others do not. The opposite of blindsight — access consciousness without phenomenal consciousness — is phenomenal consciousness without access consciousness, for instance seeing something out of the corner of one's eye. Not only are qualia not uniformly compelling, but one can have mental content that is compelling but cognitive rather than phenomenal, for instance an obsession or idée fixe.

"And if someone did know enough about color to explain all the associations that it has, well, having associations explained to you isn't normally enough for you to make the same associations in the same way yourself, "

To some physicalists, it seems obvious that a physical description of a brain state won't convey what that state is like, because it doesn't put you into that state. Of course, a description of a brain state won't put you into a brain state, any more than a description of photosynthesis will make you photosynthesise. But we do expect that the description of photosynthesis is complete, and actually being able to photosynthesise would not add anything to our knowledge. We don't expect that about experience. We expect that to grasp what the experience is like, you have to have it. If the third-person description told you what the experience was like, explained it experientially, the question of instantiating the brain state would be redundant. The fact that these physicalists feel it would be in some way necessary means they subscribe to some special, indescribable aspect of experience, even in contradiction to the version of physicalism that states that everything can be explained in physicalese. Everything means everything, including whatever process makes things seem different from the inside than they look from the outside. They still subscribe to the idea that there is a difference between knowledge-by-acquaintance and knowledge-by-description, and that is the distinction that causes the trouble for all-embracing explanatory physicalism.

Weaker forms of physicalism are still possible, however.

"I can say that when I've read articles about how echolocation works, and what sorts of things it reveals or conceals, I've felt like I know a tiny bit more about what it's like to be a bat than I did before reading the articles."

But everyone has the experience of suddenly finding out a lot more about something when they experience it themselves. That is what underpins the knowledge-by-acquaintance versus knowledge-by-description distinction.

Comment author: dfranke 15 April 2011 03:10:40PM *  3 points [-]

I think that the "Mary's Room" thought experiment leads our intuitions astray in a direction completely orthogonal to any remotely interesting question. The confusion can be clarified by taking a biological view of what "knowledge" means. When we talk about our "knowledge" of red, what we're talking about is what experiencing the sensation of red did to our hippocampus. In principle, you could perform surgery on Mary's brain that would give her the same kind of memory of red that anyone else has, and given the appropriate technology she could perform the same surgery on herself. However, in the absence of any source of red light, the surgery is required. No amount of simple book study is ever going to impact her brain the same way the surgery would, and this distinction is what leads our intuitions astray. Clarifying this, however, does not bring us any closer to solving the central mystery, which is just what the heck is going on in our brain during the sensation of red.

Comment author: Peterdjones 15 April 2011 03:24:20PM 0 points [-]

To say that the surgery is required is to say that there is knowledge not conveyed by third-person descriptions, and that is a problem for sweeping claims of physicalism. That is the philosophical problem; it is a problem about how successful science could be.

The other problem, of figuring out what brains do, is a hard problem, but it is not the same, because it is a problem within science.

Comment author: dfranke 15 April 2011 03:32:43PM *  2 points [-]

To say that the surgery is required is to say that there is knowledge not conveyed by third-person descriptions, and that is a problem for sweeping claims of physicalism.

No it isn't. All it says is that the parts of our brain that interpret written language are hooked up to different parts of our hippocampus than our visual cortex is, and that no set of signals on one input port will ever cause the hippocampus to react in the same way that signals on the other port will.

Comment author: Peterdjones 15 April 2011 03:38:08PM 0 points [-]

But if physicalism is correct, one could understand all that in its entirety from a third person POV, just as one can understand photosynthesis without photosynthesising. And of course, Mary is supposed to have that kind of knowledge. But you think that knowledge of how her brain works from the outside is inadequate, and she has to make changes to her brain so she can view them from the inside.

Comment author: dfranke 15 April 2011 03:45:37PM *  1 point [-]

The very premise of "Mary is supposed to have that kind of knowledge" implies that her brain is already in the requisite configuration that the surgery would produce. But if it's not already in that configuration, she's not going to be able to get it into that configuration just by looking at the right sequence of squiggles on paper. All knowledge can be represented by a bunch of 1's and 0's, and Mary can interpret those 1's and 0's as a HOWTO for a surgical procedure. But the knowledge itself consists of a certain configuration of neurons, not 1's and 0's.

Comment author: Peterdjones 15 April 2011 03:58:23PM 0 points [-]

No, the premise of the Mary argument is that Mary has all possible book-larnin' or third person knowledge. She is specifically not supposed to be pre-equipped with experiential knowledge, which means her brain is in one of the physical states of a brain that has never seen colour.

No, she is not going to be able to instantiate a red quale through her book learning: that is not what is at issue. What is at issue is why she would need to.

Third-person knowledge does not essentially change on translation from book to paper to CD, and for that matter it should not essentially change when loaded into a brain. And in most cases, we think it doesn't. We don't think that knowledge of photosynthesis means photosynthesising in your head. You share the qualiaphobes' assumption that there is something special about knowledge of qualia that requires instantiation.

Comment author: torekp 15 April 2011 11:38:33PM 1 point [-]

You're converting "physicalism" from a metaphysical thesis to an epistemological one, or at least adding an epistemological one. That's not the usual usage of the term.

Comment author: Peterdjones 16 April 2011 07:10:24PM *  0 points [-]

Since qualia are widely supposed to impact physicalism, and since they don't impact ontological theses such as "everything is material", it is likely that people who suppose that way have the descriptive/explanatory/epistemological version in mind, however implicitly.

Comment author: bogus 16 April 2011 07:25:07PM *  0 points [-]

I don't understand how Mary's room is supposed to be epistemologically relevant. Supposing that physicalism is true (and that physics is computable, for simplicity) Mary can run a simulation of herself seeing red and know everything that there is to know about her reaction to seeing red, including a comprehensive description of its phenomenology. Yet, she will still lack the subjective experience of seeing red. But this lack has nothing to do with epistemology in the first place.

Comment author: Peterdjones 16 April 2011 07:30:29PM 0 points [-]

It does have something to do with epistemology, because the experience delivers knowledge-by-acquaintance, which is a form of knowledge.

Comment author: Protagoras 15 April 2011 10:17:14PM 0 points [-]

I think this does get at one of the key issues (and one of the places where Hume was probably wrong, and Dennett constitutes genuine progress). On my theory, qualia are not simple. If qualia are by definition simple (perhaps for your reason that they seem that way, and by definition are how they seem), then I am a qualia skeptic. Simple qualia can't exist. But there is independent reason for being skeptical of the idea that phenomenal conscious experiences are as simple as they appear to be. Indeed, Hume gave an example of how problematic it is to trust our intuitions about the simplicity of qualia in his discussion of missing blue, though of course he didn't recognize what the problem really was, and so was unable to solve it.

Comment author: TheAncientGeek 28 September 2016 02:02:22PM 0 points [-]

Given that qualia are what they appear to be, are you denying that qualia can appear simple, or that they are just appearances?

Comment author: Peterdjones 15 April 2011 02:40:01PM 0 points [-]

As I have already argued, it is not the case that everything is functional or has a functional analysis off the bat: that cannot be assumed a priori. I cannot see the functional analysis of a blob of chewing gum or a magnetic field. Functional things need well-defined inputs, well-defined outputs, and a well-defined separation between their inner workings and everything outside them.

Since functionalism is not a universal a priori truth, I see no reason to "condemn to the flames" any non-functional notion of qualia.

I think we know what qualia are because we have them. But that is knowledge-by-acquaintance. It is again question-begging to say that the very idea of qualia has to be rejected unless they can be described. The indescribability of qualia is the essence of the Hard Problem. But we cannot say that we know a priori that only describable things exist.

Comment author: Perplexed 15 April 2011 03:23:17PM 1 point [-]

I think we know what qualia are because we have them.

Unpack this. You know what your qualia are because you have them. I know what my qualia are because I have them. We come to use the same word for these impressions ... why, exactly?

It is again question-begging to say that the very idea of qualia has to be rejected unless they can be described.

What was it Wittgenstein said about remaining silent?

Comment author: Peterdjones 15 April 2011 03:28:53PM 0 points [-]

We also both call our kidneys kidneys. I don't see the big deal.

I didn't realise Witt was 100% correct about everything.

Comment author: Perplexed 15 April 2011 03:57:51PM 1 point [-]

We also both call our kidneys 'kidneys'.

Only because we are able to describe our kidneys.

Comment author: Peterdjones 15 April 2011 04:05:03PM 0 points [-]

I can describe qualia in general as the way things seem to us. I can't describe them much more specifically than that.

Comment author: Perplexed 15 April 2011 05:09:13PM 0 points [-]

I can describe qualia in general as the way things seem to us.

I don't believe so. I'll accept that you can describe them as the way things seem to you. Or define them as the way things seem to us. What I am saying is that you cannot convince me that the definition has a definiendum unless you get more specific. Certainly, your intuitions on the significance of that 'seeming' have no argumentative force on anyone else until you offer some explanation why they should know what you are talking about.

Comment author: Perplexed 15 April 2011 03:11:26PM 1 point [-]

I'm talking about things with certain kinds of functionally defined inputs and outputs, a certain kind of computations, in fact. Does this mean I am not talking about qualia as you mean them? If not, then I stand with perplexed; references to qualia should be committed to the flames, for they can contain nothing but sophistry and illusion.

Commit the references to the flames, but not the referees? You are no fun! :) Though since you have invited Hume to join us, I suppose I am satisfied.

Your mention of qualia and functionality in the same paragraph caught my attention. Yes, indeed. If qualia were not functional, then they could hardly be intersubjective. And if they are functional, why the instinctive appeal of the idea that the inimitable 'essence' of qualia can not be generated by a simulation?

Comment author: Peterdjones 15 April 2011 03:27:02PM 0 points [-]

I don't understand your comment about intersubjectivity. Qualia surely are not intersubjective in the sense of being publicly accessible. If you just mean that qualia are broadly the same between people under the same circumstances, then that is given by supervenience, which AFAICS has nothing to do with functionalism.

Comment author: Perplexed 15 April 2011 03:42:02PM *  1 point [-]

I am philosophically unschooled, so I may misunderstand "supervenience". I will take it to mean, roughly, that distinct instances of the same phenomenon will have features in common. Yes, but how do we know we are talking about different instances of the same phenomenon unless they have the same function? Cartoon dialog:

Joe: I feel something.
Mary: I feel something too.
Joe and Mary: We both feel the same way.

One doesn't have to be a very strong skeptic to suspect that that third step was something of a leap. But perhaps less of a leap if what they feel is nausea after eating at the same restaurant.

Comment author: Peterdjones 15 April 2011 04:00:08PM *  0 points [-]

We can say that the qualia will be the same if their supervenience bases are the same, and we can say that the bases are the same if they have the same properties. Non-functional things like blobs of chewing gum still have properties.

Comment author: Perplexed 15 April 2011 05:18:05PM 0 points [-]

Non functional things like blobs of chewing gum still have properties.

Yes, and we determine those properties using senses that exist because, in other contexts, their use is functional. Do we have a 'sense' that detects the presence of qualia and apprehends their properties? If we do have such a sense organ, would you care to speculate on its function or lack of function?

Comment author: Peterdjones 15 April 2011 06:52:56PM 0 points [-]

I'm using functional to mean "something that has inputs, outputs, and internal workings", not to mean "something that does something somehow".

I don't think we have such a sense. More importantly, nothing I have said implies it.

Comment author: Perplexed 15 April 2011 07:11:21PM 0 points [-]

Ah! I was using it in the biological sense. As roughly the same as "purpose". (You are, of course, welcome to add as many additional scare quotes as you think necessary to immunize us from the taint of teleology.)

It appears we have been talking past each other. This may be a good place to stop.