Update: you can ignore this post, it's completely wrong, I'm only leaving it up to preserve people's comments. Randallsquared has caught a crucial mistake in my reasoning: consciousness could require physical causality, rather than being a property of some snapshot description. This falsifies my Point 3 below.

...

In this unabashedly geek-porn post I want to slightly expand our discussion of consciousness, as defined in the hard problem of consciousness. Don't be scared: no quantum claptrap or "informational system" bullshit.

Point 1. The existence (and maybe degree) of conscious/subjective experiences is an objective question.

Justification: if you feel a human possesses as much consciousness as a rock or the number three, stop reading now. This concludes the "proof by anthropic principle" or "by quantum immortality" for those still reading.

Point 2. It's either possible or impossible in principle to implement consciousness on a Turing-equivalent digital computer.

Justification: obvious corollary of Point 1.

Point 3. If consciousness is implementable on a digital computer, all imaginable conscious experiences already exist.

Justification: the state of any program can be encoded as an integer. What does it mean for an integer to "exist"? Does three "exist"? If a computer program gives rise to "actually existing" subjective experiences, then so does the decimal expansion of the x-coordinate of some particle in the Magellanic Cloud when written out in trinary.
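For concreteness, here is a minimal sketch of such an encoding in Python (the byte-string "state" and the base-256 scheme are just illustrative choices, not how any real simulator works):

```python
# Sketch of the claim that any program state can be encoded as an integer.
# Here the "state" is simply a byte string (think: a memory snapshot); the
# encoding is base-256 with a leading 1 to preserve leading zero bytes.

def state_to_int(state: bytes) -> int:
    return int.from_bytes(b"\x01" + state, "big")

def int_to_state(n: int) -> bytes:
    return n.to_bytes((n.bit_length() + 7) // 8, "big")[1:]

snapshot = b"registers: pc=42"
n = state_to_int(snapshot)
assert int_to_state(n) == snapshot  # the integer fully determines the state
```

The round trip is lossless, which is all the argument needs: every snapshot corresponds to exactly one integer.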

Point 4. If consciousness is implementable, the Simulation Argument is invalid and Pascal's Mugging is almost certainly invalid.

Justification: obvious corollary of Point 3.

Point 5. If consciousness is non-implementable, the Simulation Argument and Robin's uploads scenario lose much of their punch.

Justification: the extinction threat in SA and the upload transition only feel urgent due to our current rapid progress with digital computers. We don't yet have a computer peripheral for providing programs with a feeling of non-implementable consciousness.

Point 6. If consciousness could be implementable, Eliezer had better account for it when designing his FAI.

Justification: there's no telling what the FAI will do when it realizes that actual humans have no privileged status over imaginable humans, or alternatively that they do and torturing simulated humans carries no moral weight.

Point 7. The implementability of currently known physics gives strong evidence that consciousness is implementable.

Justification: pretty obvious. Neurons have been called "essentially classical objects", not even quantum.

Point 8. The fact that evolution gave us conscious brains rather than "dumb computers" gives weak evidence that consciousness is non-implementable.

Justification: we currently know of no reason why organisms would need implementable consciousness, whereas using a natural phenomenon of non-implementable consciousness could give brains extra computational power.

Any disagreements? Anything else interesting in this line of inquiry?


This post is an artifact of highly concentrated confusion, borrowing from many poorly-understood ideas. Any useful analysis needs to focus on one mistake at a time, building on what's clear.

Just pick the first mistake you see and focus on that.

While I agree in this case, your criticism as stated is so vague that it could be made of any idea, and the only defense would be to ask specifically what confusion you refer to, which ideas are poorly understood. It is only in considering the deeper details that we can separate ideas which deserve this criticism from those which don't. As your comment stands, it does not help cousin_it to understand his mistakes if you are correct, nor can he respond in a way to reveal your mistakes if you are wrong.

While I agree in this case, your criticism as stated is so vague that it could be made of any idea

Yet it wasn't made of any idea, and if you agree in this case, you deem it appropriate for this case, in which it was made. It is a fairly general principle: don't try to juggle a thousand angry cats at once, the human mind is not that strong.

Vladimir - "concentrated confusion", "a thousand angry cats": that's exactly the kind of spice that your earlier post needed! :-)

Also fewer function words...

But if I did not already agree, your comment would not have convinced me.

Essentially, I think it would be appropriate if it were actually supported, not just supportable.

But if I did not already agree, your comment would not have convinced me.

Right. It wasn't an argument, it was a concept intended to give clearer structure to your own impression. Epistemic rationality, for example, is targeted at giving you accurate estimates, whatever they are, even though it's not domain-specific and it doesn't argue for specific positions on specific questions.

Rune:

Vladimir_Nesov says it brilliantly. That's exactly what I felt, and was unable to put down in words eloquently.

Upvoted for calling your own post "completely wrong"!

This discussion, for the most part, exceeds my education and/or abstract thinking ability. To bring the discussion down to my level in at least one thread... a common sense question:

Why are we debating whether consciousness can be simulated? Obviously it can, as it's been simulated in us. (It's also rather robust, as it works well in the vast majority of people -- even in the presence of many genetic physical deviations and despite other cognitive abnormalities and despite being switched on and off while we're sleeping.) The question is how, and perhaps (though I don't see any reason to be pessimistic) whether we as humans are capable of really understanding it.

I'm not debating whether consciousness can be simulated, because we don't have conclusive evidence to answer that question. (Did many commenters really think I was leaning one way or the other? It might explain the reactions.) I'm just exploring what would logically follow from consciousness being Turing-simulable, and what would follow if not.

I'm just exploring what would logically follow from consciousness being Turing-simulable, and what would follow if not.

I see. It's been a long time since I critically examined the relationship between what is true in a Turing simulation and what is true in reality ... but I'm beginning to remember what those questions were about. (Is reality discrete? Countable? etc.) If you just assume that reality is a Turing simulation for all intents and purposes, like I do most of the time, then the discussion didn't make sense.

Point 2. It's either possible or impossible in principle to implement consciousness on a Turing-equivalent digital computer.

Justification: obvious corollary of Point 1.

You need the law of the excluded middle as well here. It might be undecidable whether any thing is an implementation of consciousness, from the outside. Due perhaps to the self-referential nature of a consciousness studying consciousness.

My argument doesn't require decidability from the outside, only that the presence of consciousness is a yes/no question from the inside.

Well, it certainly can't be a no question from the inside, at least.

The whole argument isn't about decidability at all, whether internal or external. And if you consider the existence of subjective experience to be neither true nor false in some cases, you fail Point 1.

Point 1 accepts the possibility that consciousness may not be binary, but could instead have degrees. Point 2 fudges this. Point 3 then assumes that it's binary.

Thanks, you have found a mistake in the post. But it seems minor to me because the whole reasoning from Point 2 on can be applied to a specific degree of consciousness, e.g. the waterline of neurologically intact humans.

If consciousness is implementable on a digital computer, all imaginable conscious experiences already exist.

Justification: the state of any program can be encoded as an integer. What does it mean for an integer to "exist"? Does three "exist"? If a computer program gives rise to "actually existing" subjective experiences, then so does the decimal expansion of the x-coordinate of some particle in the Magellanic Cloud when written out in trinary.

That is not what the word "exist" means. You claim that everything imaginable exists; I say that there are imaginable things which do not exist, and as proof, I point to the flying spaghetti monster. Note that there is nothing in this argument which is specific to conscious experiences, or which depends on consciousness being emulatable; if you define existence in such a way that everything which can be represented as an integer must exist, and the universe is discrete (as we suspect it is), then every possible arrangement of particles in the universe "exists".

I think you didn't understand the argument. I never took it for an axiom that everything imaginable "exists".

Point 1 says that conscious observers "exist". Whatever that means. That having or not having subjective experience is an objective yes/no question. (That's what is specific to conscious experiences in my argument.)

Point 3 says that, if something simulated by a computer possesses subjective experience (yes/no question!), then something encoded in the decimal expansion of some particle's coordinate possesses the same subjective experience. Because the word "simulation" is ill-defined: we can't produce a hard criterion which detects whether a specific part of our world "simulates" a specific integer-state of a specific ideal Turing machine, and simultaneously excludes "non-obvious" simulation interpretations.

something encoded in the decimal expansion of some particle's coordinate possesses the same subjective experience

No, it doesn't. First of all, the fact that the state of a computer can be represented as a number does not mean that any particular number is a representation of a computer. Especially if that number is obviously something else, like the position of a particle.

Second, you have snuck in two major assumptions: that the universe contains infinitely many particles, and that every particle's coordinates contain an infinite amount of information. Neither of these is a settled question in physics, and I can easily assert that the universe is finite and discrete.

Now I think I don't get your objection. What does it mean for a number to "be something"? What if we stop the computer and run a single step of the simulation with pen and paper? And for the next step, run an unrelated calculation that searches for it within the digits of pi. Where exactly do you draw the line at which the conscious things inside the algorithm suddenly stop being conscious?

Whether a number constitutes a simulation depends on its color, not its value.

Of course not the value! But according to you, some physical law must exist that says aah, here's a simulation of this particular Turing machine, so the beings inside it are now conscious. How might this law do its work? Does it look at me intently as I sit there rewriting stacks of ones and zeroes in the candlelight, running a simulation unbeknownst to myself?

At this point I can't make up my mind whether something ought to "click" for me or for you.

Of course not the value! But according to you, some physical law must exist that says aah, here's a simulation of this particular Turing machine, so the beings inside it are now conscious.

I never said that. But if you insist, I can think of at least one such law which would have that consequence: a number cannot constitute consciousness unless something in the universe exists which uniquely specifies that number. According to this rule, writing down the complete state of a conscious Turing machine would make it exist, but writing the words "a conscious Turing machine" or "the set of all integers" would not.

Gotcha.

What counts as "something in the universe that uniquely specifies a number"? I take it a particle's coordinate written out in inches in trinary doesn't count (why?), even if the universe is continuous. But the contents of a PC's memory - if Nature assumes a certain voltage means 1 and another means 0 - do count for some reason. Okaaay, let's pick a border case: a computer calculating successive digits of pi. Will it make every possible world with its conscious inhabitants suddenly spring into subjective life if we wait long enough? Should a span of digits count if it specifies a world when inverted? What if its square specifies a world? How about an off-by-one error? We could go on.

Those aren't just nitpicks; most any rule you can think up is going to have the same problems. I confess to seeing no logical way out except to say that all "abstract" concepts like numbers or algorithms must be either equally real or equally unreal for purposes of creating "real" things like subjective experience.
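To make the interpretation problem concrete, here is a toy Python illustration (the specific encodings are arbitrary choices, picked only to show the point): the same integer decodes to entirely different "states" under equally simple decoding rules, and nothing about the number itself privileges one reading.

```python
# One integer, three equally simple decoding rules, three unrelated
# "states" -- the number alone does not pick out which interpretation
# is "the" simulation.
n = int.from_bytes(b"Hello", "big")

as_big = n.to_bytes(5, "big")        # b"Hello"
as_little = n.to_bytes(5, "little")  # b"olleH"
as_digits = str(n)                   # "310939249775"

print(as_big, as_little, as_digits)
```

Any criterion for "this number simulates that machine" has to smuggle in a choice of decoding rule, which is exactly the line-drawing problem under discussion.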

The encoding of a conscious state is not consciousness. Consciousness is the process of successive causally-related conscious states.

What?? How do you know?

Anyway, if consciousness isn't a "state" but a "process" or "causal" whatever, the whole argument still stands. It doesn't depend one whit on what consciousness is. I just evaluate two possibilities without giving favor to either: either an algorithm can give rise to consciousness, or it can't.

How do I know what? Defining consciousness this way makes things clearer and easier to discuss, but doesn't actually explain consciousness in any way. I'm advocating a definition, not proving a fact.

You start out talking about algorithms, as you say, but then switch to talking about states of (or produced by) algorithms. A MS Word document is not the instance of MS Word that produced it. [Edit: bad example. Reworded: a snapshot of the state of a computer running MS Word is not, itself, a running instance of MS Word. That's a more precise analogy, but unfortunately more debatable. ;)]

I don't have any objection to the idea that an algorithm can give rise to (I would say "be") consciousness. I do object to the idea that numbers exist in the same sense that matter and energy exist. I am not a Platonist.

Thanks! Upon some consideration this makes sense, seems to be correct and turns my whole post into nonsense. Namely, consciousness could require physical causality, which falsifies point 3 while keeping simulations possible. Updated the post.

What?? How do you know?

Because it's obvious.

(Sorry, couldn't resist...)

[anonymous]:

Not really an objection: a single number could encode more successive causally related states than your whole life contains.

Point 1. The existence (and maybe degree) of conscious/subjective experiences is an objective question.

Justification: if you feel a human possesses as much consciousness as a rock or the number three, stop reading now. This concludes the "proof by anthropic principle" or "by quantum immortality" for those still reading.

I don't follow this justification, and I'm not sure where "stop reading now" gets you. Also, there are definitely panpsychists out there (Ben Goertzel and Luciano Floridi come to mind) and I'm not sure why your discussion must exclude them.

Please note the mention of the number three. Do panpsychists think it possesses consciousness just like us? If not, I'm not excluding them.

But anyway, the post now seems to me completely wrong - see the update at the top.

Justification usually requires more than "It's obvious!" That is the whole point of justification.

Well, that's not entirely true. If someone demanded a justification and your response was "It's obvious" then you've done something wrong. But in absence of such a demand, it's a perfectly fine justification. You have to start from somewhere.

You have to start from somewhere.

Somewhere is not "It's obvious!" 2 + 2 = 4 is not obvious. It is a definition. (Edit) Yeah, uh, theorem.

But I see and agree with your point. I am merely arguing semantics now, which is not particularly useful.

I do stand by my original claim, however, that justification usually requires more than "It's obvious!"

2 + 2 = 4 is not obvious. It is a definition.

Strictly speaking, it is a theorem (obvious one). 4=3+1 is a definition.

[anonymous]:

I thought "4 comes after 3" was a definition and "3+1 = 4" was a theorem.
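The definition/theorem distinction can be sketched in a proof assistant (here Lean 4; note that Lean's numerals are binary literals rather than Peano successors, so both equalities happen to check by mere computation, which is itself a reminder that "definition" depends on the formalization):

```lean
-- In Peano-style arithmetic, `4 = 3 + 1` is the definition of 4,
-- while `2 + 2 = 4` is a (short) theorem. In Lean 4 both are
-- closed by `rfl`, since numeral arithmetic reduces:
theorem four_is_succ_three : (4 : Nat) = 3 + 1 := rfl
theorem two_plus_two : (2 : Nat) + 2 = 4 := rfl
```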

Which instances of "obvious" in the text felt non-obvious to you?

It doesn't matter if they felt obvious or non-obvious. Obviousness is not justification, it is an opinion about its accessibility. To be fair to you, I am mostly annoyed by that word alone. I could be way off the mark here in terms of common opinion.

You're not the only one annoyed by that word.

As my first real analysis professor was fond of saying, "If it's obvious, prove it!"

Indeed. When I was a student, I often found myself telling my classmates in math classes, "Just because it is obvious does not mean it is true." It was amazing how many "obvious" conclusions we were able to disprove.

Man, this is why I don't post articles, I'm afraid I'll end up looking like cousin_it.

If you feel you have something worthwhile to post, do it. Being proved wrong - as I am now - is supposed to be a win for you. Evaluate your prospective post as making our group blog better or worse off, not as presenting you in a favorable or unfavorable light.