Comment author: cousin_it 14 April 2011 01:52:23PM *  13 points [-]

Hmm. My comment is the most highly upvoted response to your survey at the moment, and the second highest upvoted one is by XiXiDu expressing basically the same position as mine, but I don't see it on your list. Here's a summary: we don't yet have enough insight to choose any specific answer or even to know if we're asking the right questions. We're facing an unsolved scientific problem. The wisdom of crowds doesn't apply here. If no one has yet discovered Maxwell's equations or Watson and Crick's double helix, no amount of surveying can lead you to the right answer. You have to do, like, actual math and physics and biology and stuff.

Comment author: dfranke 14 April 2011 02:52:51PM 3 points [-]

I didn't list this position because it's out of scope for the topic I'm addressing. I'm not trying to address every position on the simulation hypothesis; I'm trying to address computationalist positions. If you think we are completely in the dark on the matter, you can't be endorsing the computationalists, who claim to know something.

Comment author: Perplexed 14 April 2011 02:15:41PM 3 points [-]

We're facing an unsolved scientific problem. You can't solve it by survey.

Interesting, particularly in light of the recent "What is analytic philosophy, that we should be mindful of it?" discussions. It almost seems that dfranke, taking the philosopher's role, should respond: "We are facing an unsolved philosophical problem. You can't contribute to the solution without taking a position."

Comment author: dfranke 14 April 2011 02:50:24PM 0 points [-]

I agree, and furthermore this is a true statement regardless of whether you classify the problem as philosophical or scientific. You can't do science without picking some hypotheses to test.

Three consistent positions for computationalists

5 dfranke 14 April 2011 01:15PM

Yesterday, as a followup to We are not living in a simulation, I posted Eight questions for computationalists in order to obtain a better idea of what exactly my computationalist critics were arguing.  These were the questions I asked:

  1. As it is used in the sentence "consciousness is really just computation", is computation:
    a) Something that an abstract machine does, as in "No oracle Turing machine can compute a decision to its own halting problem"?
    b) Something that a concrete machine does, as in "My calculator computed 2+2"?
    c) Or, is this distinction nonsensical or irrelevant?
  2. If you answered "a" or "c" to question 1: is there any particular model, or particular class of models, of computation, such as Turing machines, register machines, lambda calculus, etc., that needs to be used in order to explain what makes us conscious? Or, is any Turing-equivalent model equally valid?
  3. If you answered "b" or "c" to question 1: unpack what "the machine computed 2+2" means. What is that saying about the physical state of the machine before, during, and after the computation?
  4. Are you able to make any sense of the concept of "computing red"? If so, what does this mean?
  5. As far as consciousness goes, what matters in a computation: functions, or algorithms? That is, does any computation that gives the same outputs for the same inputs feel the same from the inside (this is the "functions" answer), or do the intermediate steps matter (this is the "algorithms" answer)?
  6. Would an axiomatization (as opposed to a complete exposition of the implications of these axioms) of a Theory of Everything that can explain consciousness include definitions of any computational devices, such as "and gate"?
  7. Would an axiomatization of a Theory of Everything that can explain consciousness mention qualia?
  8. Are all computations in some sense conscious, or only certain kinds?
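
The functions-versus-algorithms distinction in question 5 can be made concrete with a small sketch. The two procedures below are hypothetical examples of my own choosing, not anything my critics proposed: they compute the same function (identical outputs for identical inputs) via very different intermediate steps.

```python
# Two procedures computing the same *function* via different *algorithms*.
# The "functions" answer says only the input/output mapping matters; the
# "algorithms" answer says the intermediate steps could matter too.

def sum_iterative(n):
    """Sum 1..n by explicit accumulation: n additions, n intermediate states."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """Sum 1..n by Gauss's formula: a handful of arithmetic operations."""
    return n * (n + 1) // 2

# Extensionally equal: every input yields the same output.
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
```

On the "functions" view these two are the same computation; on the "algorithms" view they are different computations that happen to agree on their outputs.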

I got some interesting answers to these questions, and from them I can extract three distinct positions that seem consistent to me.

Consistent Position #1: Qualia skepticism

Perplexed asserted this position in no uncertain terms.  Here's my unpacking of it:

"Qualia do not exist. The things that you're confused about and are mistaking for qualia can be made clear to you using an argument phrased in terms of computation.  When you talk about consciousness, I think I can understand your meaning, but you aren't referring to anything fundamental or particularly well defined: it's an unnatural category."

The internal logic of the qualia skeptic's position makes sense to me, and I can't really respond to it other than by expressing personal incredulity. To me, the empirical evidence in support of the existence of qualia is so clear and so immediate that I can't figure out what you're not seeing so that I can point to it.  However, I shouldn't need to bring you to your senses (literally!) on this in order to convince you to reject Bostrom's simulation argument, albeit on grounds completely different from any I've argued so far.  If you don't buy that there's anything fundamental behind consciousness, then you also shouldn't buy Bostrom's anthropic reasoning, in which he conjures up the reference class of "observers with human-type experiences"; elsewhere he refers to "conscious experience" and "subjective experience" without implying that he means anything more specific. That's taking an unnatural category and invoking it magically. In the statement that we are something selected with uniform probability from that group, how do you make sense of "are"?

Consistent Position #2: Computation is implicit in physics

This position is my best attempt at a synthesis of what TheOtherDave, lessdazed, and prase are getting at. It's compatible with position #1, but neither one entails the other.

To understand this position, it is helpful, but not necessary, to define the laws of physics in terms of something like a cellular automaton. Each application of the automaton's update rule can be understood as a primitive operation in a computation. When you apply the update rule repeatedly to neighboring cells, you build up a more complex computation. So, "consciousness is just computation" is equivalent in meaning, essentially, to "consciousness is just physics".
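
As a minimal sketch of that picture (my own illustration, using Wolfram's Rule 110 purely as a stand-in for whatever the real update rule might be), each application of the local rule is one primitive operation, and composing many applications over nearby cells builds up the larger computation:

```python
# "Physics as cellular automaton" sketch: each call to `update` on one
# three-cell neighborhood is a primitive operation; `step` composes them
# across all cells to make one timestep of the larger computation.

RULE = 110  # the rule number's binary digits encode the whole update table

def update(left, center, right, rule=RULE):
    """Apply the local update rule to one three-cell neighborhood."""
    index = (left << 2) | (center << 1) | right
    return (rule >> index) & 1

def step(cells):
    """One global timestep: apply the rule at every cell (edges wrap around)."""
    n = len(cells)
    return [update(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

state = [0] * 10 + [1] + [0] * 10  # start from a single live cell
for _ in range(5):
    state = step(state)
```

On this reading, "the brain computes" just means "the cells making up the brain region keep getting updated", which is why the position collapses "consciousness is just computation" into "consciousness is just physics".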

This position more-or-less necessitates answering "algorithms" to question #5, or failing that, at least something similar to RobinZ's answer. If you say "functions", then you at least need to explain how to reify the concepts of "input" and "output". You can pull this off by saying that the update rules are the functions, the inputs are the state before the rule application, and the outputs are the state afterward. Any other answer probably means you're taking something closer to, or identical with, position #3, which I'll address next. This comment by peterdjones and his followups to it provide a (Searlesque) intuition pump showing other reasons why a "functions" reply is problematic.

I have no objection to this position. However, it does not imply substrate independence, and strongly suggests its negation. If your algorithmic primitives are defined at the level of individual update-rule applications, then any change whatsoever to an object's physical structure is a change to the algorithm that it embodies. If you accept position #2 while rejecting position #1, then you may actually be making the same argument that I am, merely in different vocabulary.

Consistent Position #3: Computation is reified by physics

I was both shocked and pleased to see zaph's answer to question #6, because it bites a bullet that I never believed anyone would bite: that there is actually something fundamental in the laws of physics which defines and reifies the concept of computation in a substrate-independent fashion. I can't find any inconsistency in this, but I think we have good reason to consider it extremely implausible. Consider the language of physics which is familiar to us and has served us well, the language whose vocabulary consists of things like "particle", "force", and "Hilbert space". In that language, the Kolmogorov complexity of an equivalence relation which tells us that an AND gate implemented in a MOSFET is equivalent to an AND gate implemented in a neuron, and to an AND gate implemented in desert rocks, but is not equivalent to an OR gate implemented in any of those media, is enormous. Therefore, Solomonoff induction tells us that we should assign vanishingly low probability to such a hypothesis.
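
To make the required equivalence relation concrete, here is a toy sketch of my own (the two AND "implementations" are hypothetical stand-ins for different physical media, not real device models). Stating the relation at the functional level, as below, is trivial; the enormous description length comes from having to state it in the vocabulary of particles and forces instead.

```python
# Toy sketch of the equivalence relation position #3 requires: it must put
# both AND "implementations" in the same class while excluding the OR gate.

def and_gate_logic(a, b):
    """AND as pure boolean logic (stand-in for a MOSFET circuit)."""
    return a and b

def and_gate_threshold(a, b):
    """AND as a threshold unit (stand-in for a neuron): fire iff both inputs fire."""
    return 1 if (a + b) >= 2 else 0

def or_gate(a, b):
    """OR as a threshold unit: fire iff at least one input fires."""
    return 1 if (a + b) >= 1 else 0

def same_function(f, g):
    """Functional equivalence over all boolean inputs."""
    return all(f(a, b) == g(a, b) for a in (0, 1) for b in (0, 1))

assert same_function(and_gate_logic, and_gate_threshold)   # same class
assert not same_function(and_gate_logic, or_gate)          # excluded
```

The cheap version above already presupposes computational vocabulary ("input", "output", truth tables); position #3 needs the laws of physics themselves, stated without that vocabulary, to pick out the same classes.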

 

I hope that I've fairly represented the views of at least a majority of computationalists on LW. If you think there's another position available, or if you're one of the people I've called out by name and you think I've pigeonholed you incorrectly, please explain yourself.

Comment author: Giles 13 April 2011 04:44:20PM *  3 points [-]

I feel that dfranke's questions make all kinds of implicit assumptions about the reader's worldview, which makes them difficult for most computationalists to answer. I've prepared a different list - I'm not really interested in answers, just an opinion as to whether they're reasonable questions to ask people or whether they only make sense to me.

But you can answer them if you like.

For probability estimates, I'm talking about subjective probability. If you believe it doesn't make sense to give a probability, try answering as a yes/no question and then guess the probability that your reasoning is flawed.

1: Which of these concepts are at least somewhat meaningful?

a) consciousness

b) qualia

2: Do you believe that an agent is conscious if and only if it experiences qualia?

3: Are qualia epiphenomenal?

4: If yes:

a) Would you agree that there is no causal connection between the things we say about qualia and the actual qualia we experience?

b) Are there two kinds of qualia: the ones we talk about and the ones we actually experience?

5: Is it possible to build a computer simulation of a human to any required degree of accuracy?

a) If you did, what is the probability that simulation would be conscious/experience qualia?

b) Would this probability depend on how the simulation is constructed or implemented?

6: What is the probability that we are living in a simulation?

a) If you prefer to talk about how much "measure of our existence" comes from simulations, give that instead

7: What is the probability that a Theory of Everything would explain consciousness?

8: Would you agree that it makes sense to describe a universe as "real" if and only if it contains conscious observers?

9: Suppose the universe that we see can be described completely by a particular initial state and evolution rule. Suppose also for the sake of simplicity that we're not in a simulation.

a) What is the probability that our universe is the only "real" one?

b) What is the probability that all such describable universes are "real"?

c) If they are all "real", are they all equally real or does each get a different "measure"? How is that measure determined?

d) Are simulated universes "real"? How much measure do they inherit from their parent universe?

10: Are fictional universes "real"? Do they contain conscious observers? (Or give a probability)

a) If you answered "no" here but answered "yes" for simulated universes, explain what makes the simulation special and the fiction not.

11: Is this entire survey nonsense?

Comment author: dfranke 13 April 2011 06:12:49PM *  2 points [-]

I'll save my defense of these answers for my next post, but here are my answers:

  1. Both of them.
  2. Yes. The way I understand these words, this is a tautology.
  3. No. Actually, hell no.
  4. N/A
  5. Yes; a. I'm not quite sure how to make sense of "probability" here, but something strictly between 0 and 1; b. Yes.
  6. Negligibly larger than 0.
  7. 1, tautologically.
  8. For the purposes of this discussion, "No". In an unrelated discussion about epistemology, "No, with caveats."
  9. This question is nonsense.
  10. No.
  11. If I answered "yes" to this, it would imply that I did not think question 11 was nonsense, leading to contradiction.
Comment author: zaph 13 April 2011 01:29:24PM *  2 points [-]

I would describe myself as a computationalist by default, in that I can't come up with an ironclad argument against it. So, here are my stabs:

1) I'm not sure what you mean by an abstract machine (and please excuse me if that's a formal term). Is that a potential or theoretical machine? That's how I'm reading it. If that's the case, I would say that CIRJC ("consciousness is really just computation") means both a and b. It's a computation of an extremely sophisticated algorithm, the way 2 + 2 = 4 is the computation of a "simple" one (that still needs something really big like math to execute).

2) I don't know if there needs to be a particular class of models; do you mean we know in advance what the particular human consciousness model is? I'd probably say we'd need several models operating in parallel, and that set would be the "human consciousness model".

3) To me, that just means that a simple state machine took in an input, executed some steps, and provided an output on a screen. There was some change of register positions via electricity.

4) Computing red: here's where qualia are going to make things messy. In a video game, I don't have any problem imagining someone issuing a command to a Sim to "move the red box", and the Sim would do so. That's all computation (I don't think there's "really" a Sim, or a red box for that matter, living in my TV set), but the video game executed what I was picturing in my head via internal qualia. So it seems like there would be an approximation of "computing" red.

5) I don't have any problem saying the algorithm would be very important. I can put this in completely human terms. A psychopath can perfectly imitate emotions, and enact the exact same behavioral output as someone else in similar circumstances. The internal algorithm, if you will, is extremely different however.

6) I would say this is an emphatic yes. Neurons, for instance, serve as some sort of gate analog.

7) I think it would mention qualia, in as much as people would ask about it (so there would at least be enough of an explanation to explain it away, so to speak).

8) I don't think computations are conscious in and of themselves. If I'm doing math in a notebook, I don't think the equations are conscious. I don't think the circuitry of a calculator or a computer is conscious. By the same token, I don't think individual cells of my brain are conscious, and if you were to remove a portion of a person's brain (surgery for cancer, for example), I don't think those portions would remain conscious, or that the person would be less conscious by the percentage of tissue removed. Consciousness, to me, may be algorithmically based, but is still the awareness of self, history, etc. that makes humans human. Saying CIRJC doesn't remove the complexity of the calculation.

I haven't read that other thread; can I ask what your opinions are? Briefly of course, and while I can't speak for everyone else, I promise to read them as thumbnails and not absolute statements to be used against you. You could point to writers (Searle? Penrose?) if you like.

Comment author: dfranke 13 April 2011 04:36:57PM 0 points [-]

I haven't read that other thread; can I ask what your opinions are? Briefly of course, and while I can't speak for everyone else, I promise to read them as thumbnails and not absolute statements to be used against you. You could point to writers (Searle? Penrose?) if you like.

Searle, to a zeroth approximation. His claims need some surgical repair, but you can do that surgery without killing the patient. See my original post for some "first aid".

Comment author: lessdazed 13 April 2011 02:16:39PM *  1 point [-]

1) I don't know. I also think there is a big difference between c) "nonsensical" and c) "irrelevant". To me, "irrelevant" means all possible worlds are instantiated, and those also computed by machines within such worlds are unfathomably thicker.

2) I don't know.

3) Probably causation between before and after is important, because I doubt a single time slice has any experience due to the locality of physics.

4) Traditionally I go point at things, a stop sign, a fire truck, an apple, and say "red" each time. Then I point at the grass and the sky and say "not red". Red is a relational property within the system of: me plus the object. Each part of the system can in principle be replaced by a different, potentially Rube Goldberg part with identical output without affecting the rest of the system. The computation is the part inside my brain. Whether the stop sign is real, or I am blind and my nervous system is being stimulated by mad scientists, makes no difference in that respect.

5) In the red system consisting of me and the stop sign, generally the stuff outside my skull can be replaced by functions, the inside stuff needs specific algorithms to produce sensations.

6) Note to self: when giving a list of questions, include something that doesn't actually mean anything and see what the answers to it are like. My best guess is that you're not doing that, but I have no idea what this means.

7) Why would it have to? Meaning no, any patterns larger than the smallest are explained by their components.

8) I can't think of any output that in principle couldn't be produced by a conscious computational process. But not all computational processes are conscious.

Comment author: dfranke 13 April 2011 04:02:35PM 0 points [-]

I also think there is a big difference between c) "nonsensical" and c) "irrelevant".

I didn't mean to imply otherwise. I meant the "or" there as a logical inclusive or, not a claim of synonymy.

Comment author: dfranke 13 April 2011 03:58:16PM 0 points [-]

I'm not sure what you mean by an abstract machine (and please excuse me if that's a formal term)

I'd certainly regard anything defined within the framework of automata theory as an abstract machine. I'd probably accept substitution of a broader definition.

Comment author: Kevin 13 April 2011 09:31:35AM 6 points [-]

Yes, dfranke's argument seems to map to "we are not living in a simulation because we are not zombies and people living in a simulation are zombies".

Comment author: dfranke 13 April 2011 03:29:07PM *  0 points [-]

s/are not zombies/have qualia/ and you'll get a little more accurate. A zombie, supposing such a thing is possible (which I doubt for all the reasons given in http://lesswrong.com/lw/p7/zombies_zombies ), is still a real, physical object. The objects of a simulation don't even rise to zombie status.

Comment author: ArisKatsaris 13 April 2011 08:28:01AM *  2 points [-]

dfranke means, I think, that he considers being in a simulation possible, but not likely.

Statement A) "We are not living in a simulation": P(living in a simulation) < 50%

Statement B) "We cannot be in a simulation": P(living in a simulation) ~= 0%

dfranke believes A, but not B.

Comment author: dfranke 13 April 2011 02:58:02PM *  3 points [-]

No, rather:

A) "We are not living in a simulation" = P(living in a simulation) < ε.

B) "we cannot be living in a simulation" = P(living in a simulation) = 0.

I believe A but not B. Think of it analogously to weak vs. strong atheism. I'm a weak atheist with respect to both simulations and God.

Comment author: Yvain 13 April 2011 10:34:34AM *  13 points [-]

Your position within our universe is giving you a bias toward one side of a mostly symmetrical situation.

Let's throw out the terms "real" and "simulated" universe and call them the "parent" and "child" universe.

Gravity in the child universe doesn't affect the parent universe, true; creating a simulation of a black hole doesn't suck the simulating computer into the event horizon. But gravity in the parent universe doesn't affect the child universe either - if I turn my computer upside-down while playing SimCity, it doesn't make my Sims scream and start falling into the sky as their city collapses around them. So instead of saying "simulated gravity isn't real because it can't affect the real universe", we say "both the parent and child universes have gravity that only acts within their own universe, rather than affecting the other."

Likewise, when you say that you can't point to the location of a gravitational force within the simulation so it must be "nowhere" - balderdash. The gravitational force that's holding Sim #13335 to the ground in my SimCity game is happening on Oak Street, right between the park and the corporate tower. When discussing a child-universe gravitational force, it is only necessary to show it has a location within the child universe. For you to say it "doesn't exist" because you can't localize it in your universe is as parochial as for one of my Sims to say you don't exist because he's combed the entire city from north to south and he hasn't found any specific location with a person named "dfranke".

Comment author: dfranke 13 April 2011 01:15:02PM 1 point [-]

The claim that the simulated universe is real even though its physics are independent of our own seems to imply a very broad definition of "real" that comes close to Tegmark IV. I've posted a followup to my article to the discussion section: Eight questions for computationalists. Please reply to it so I can better understand your position.

View more: Prev | Next