I don't know the answer to any of these questions, and I don't know which of them are confused.
Here's a way to make the statement "consciousness is computation" a little less vague; call the new version X: "you can simulate a human brain on a fast enough computer, and the simulation will be conscious in the same sense that regular humans are, whatever that means". I'm not completely sure X is meaningful, but I assign about 80% probability to its being meaningful and true, because current scientific consensus says individual neurons operate in the classical regime: they're too large for quantum effects to be significant.
But even if X turns out to be meaningful and true, I will still have leftover object-level questions about consciousness. In particular, knowing that X is true won't help me solve anthropic problems until I learn more about the laws that govern multiple instantiations of isomorphic conscious thingies, whatever that means. Consciousness could "be" one instantiated computation, or an equivalence class of computations, or an equivalence class plus a probability measure, or something even weirder. I don't believe we can enumerate all the possibilities today, much less choose one.
There is too much vagueness involved here. A better question would be whether there is any reason to believe that, even though evolution could create consciousness, we cannot.
No doubt we don't know much about intelligence and consciousness. Do we even know enough to tell whether the term "consciousness" makes sense? I don't know. But I do know that we know a lot about physics and biological evolution, and that we are physical and an effect of evolution.
We know a bit less about the relation between evolutionary processes and intelligence, but we do know that there is an important difference between them and that the latter can harness the former.
Given all that we know, is it reasonable to doubt the possibility that we can create "minds", conscious and intelligent agents? I don't think so.
The distinction doesn't make sense to me. But then neither does the statement "Consciousness is really just computation." The only charitable reading I can give that statement is "Consciousness is really just [some physical system in operation] and, as you will notice, the only really powerful or mysterious component of that system is computation". But even with that clarification, I really don't understand what you are getting at with the a vs. b distinction. I get the impression that you attach a lot more importance to the (abstract vs. concrete) distinction than I do.
I think you're completely mistaken about what computationalism claims. It's not that consciousness is a mysterious epiphenomenon of computation-in-general; it's more that we expect consciousness to be fully reducible to specific algorithms. "Consciousness is really just computation" left at that would be immediately rejected as a mysterious answer, fake causality, attempting to explain away what needs only to be explained, and other related mistakes; 'computation' only tells us where we should be looking for an explanation of consciousness, it ca...
Tentatively, my gut reactions are:
I think you will find this paper useful: Daniel Dennett answers some of these questions and explains why he thinks Searle is wrong about consciousness. Pretty much all of the positions Dennett endorses therein are computationalist, so it should help you organize your thoughts.
I feel that dfranke's questions make all kinds of implicit assumptions about the reader's worldview, which makes them difficult for most computationalists to answer. I've prepared a different list. I'm not really interested in answers, just an opinion as to whether they're reasonable questions to ask people or whether they only make sense to me.
But you can answer them if you like.
For probability estimates, I'm talking about subjective probability. If you believe it doesn't make sense to give a probability, try answering as a yes/no question and then guess ...
I would describe myself as a computationalist by default, in that I can't come up with an ironclad argument against it. So, here are my stabs:
1) I'm not sure what you mean by an abstract machine (and please excuse me if that's a formal term). Is that a potential or theoretical machine? That's how I'm reading it. If that's the case, I would say that CIRJC ("consciousness is really just computation") means both a and b. It's a computation of an extremely sophisticated algorithm, the way 2 + 2 = 4 is the computation of a "simple" one (that still needs something really big like math to execute)...
I suppose I can consider myself a weak computationalist. I think a computer running a human mind will generate qualia, provided the interpretation under which it counts as running that mind is simple enough. After all, under a sufficiently contrived interpretation you could read a rock as a computer running a human mind.
It's the algorithm that matters.
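To make the rock point concrete, here is a minimal sketch (the function and variable names are mine, purely illustrative): an unrestricted "interpretation" can relabel any sequence of distinct physical states as the steps of any computation you like, which is exactly why the interpretation itself must stay simple for the attribution to carry any weight.

```python
# Toy version of the rock argument: given any sequence of distinct physical
# states, we can define a lookup that relabels them as the successive states
# of whatever computation we choose.

def contrived_interpretation(physical_states, desired_trace):
    """Map each successive physical state to a step of the desired computation."""
    assert len(physical_states) >= len(desired_trace)
    return dict(zip(physical_states, desired_trace))

# The "rock" just drifts through states r0, r1, r2, r3 as time passes.
rock_states = ["r0", "r1", "r2", "r3"]
# The computation we want to attribute to it: adding 2 and 2.
adder_trace = ["read 2", "read 2", "write 4", "halt"]

mapping = contrived_interpretation(rock_states, adder_trace)
print(mapping)  # under this mapping the rock "ran" the adder; all the work is in the mapping
```

The design point: nothing constrains `contrived_interpretation`, so it can attribute any computation to anything, which is the intuition behind requiring the mapping to be simple.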
I'm currently having an exchange with Massimo Pigliucci of Rationally Speaking, who may be known here from his Bloggingheads debate with Eliezer Yudkowsky, in which he claimed that "you can simulate the 'logic' of photosynthetic reactions in a computer, but you ain't gonna get sugar as output." I have a hard time wrapping my mind around his line of reasoning, but I'll try:
Let's assume that you wanted to simulate gold. What does it mean to simulate gold?
According to Wikipedia, to simulate something means to represent certain key characteristics...
1) I don't know. I also think there is a big difference between the two halves of c): "nonsensical" and "irrelevant". To me, "irrelevant" means all possible worlds are instantiated, and those also computed by machines within such worlds are unfathomably thicker.
2) I don't know.
3) Probably causation between before and after is important, because I doubt a single time slice has any experience due to the locality of physics.
4) Traditionally, I point at things, a stop sign, a fire truck, an apple, and say "red" each time. Then I poin...
Hm. I am not a 100% computationalist, but let me try.
Here is another attempt to rephrase one of the opinions held within the philosophy camp:
Imagine 3 black boxes, each of them containing a quantum-level emulation of some existing physical system. Two boxes contain the emulations of two different human beings and one box the emulation of an environment.
Assume that if you were to connect all 3 black boxes and observe the behavior of the two humans and their interactions you would be able to verify that the behavior of the humans, including their utterances, would equal that of the originals.
If one was to dis...
(2) Humans can manually compute any algorithm that a TM can compute (this is just the Church-Turing conjecture in reverse), so a human has to be at least a UTM. The significant part of the computationalist claim is that humans are at most a UTM.
(4) No.
(5) If intermediate steps matter, the Turing Test is invalidated, since a Giant Lookup Table could produce the same results with trivial intermediate steps. However, computationalists do not have to subscribe to the TT.
(7) An "And gate" looks like a piece of hardware, but it is really anything that co...
(b) Consciousness is something that a concrete machine does, as in "My calculator computed 2+2".
Instructed to skip.
Unpack what "the machine computed 2+2" means. (I'll try.) A machine computes 2+2 if it has an algorithm (perhaps a subroutine) that accepts two inputs a and b (where a and b are some set of numbers containing at least the natural numbers through 5) and generally (almost always) outputs the sum a+b. The machine may output a+b by any means whatsoever -- even just using a look up table or appending two strings of symbol
My own answers, before reading anyone else's, were:
Okay, here are my answers. Please note that full answers would be too big, so expect some vagueness:
1) B.
3) Big topic. For me, it can use the result of "computation".
4) Invoking memory or associations? Mostly no.
5) Hard to say yet. I'll take a guess that it's mostly functions, with maybe some parts where steps really matter.
6) I think it's possible.
7) I guess so.
8) They have something in common, but I think it depends on your definition of "conscious". They are most certainly not self-conscious, though.
I think the logic behind this argument is actually much, much simpler.
Let us suppose that consciousness is not a type of computation.
Rational argument, and hence rational description, IS a type of computation: it can be made into forms that are computable.
Therefore consciousness, if it is not a type of computation, is also not describable within, or reducible to, rational argument.
I call this type of thing the para-rational: it's not necessarily against rationality to suppose that something exists which isn't rationally describable. What doesn't make sen...
There is no such thing as an abstract machine, nor an abstract computation. If you imagine a machine adding two and two, the computation is implemented in your brain, which holds a representation of the operations. Physics is information; information is also physics. There is no information without a physical embodiment; there is no computation without physical operations.
Humans don't have infinite memory, and thus are less powerful than Turing machines (a sketch of this point follows below).
"Computing red": Please put more words into that phrase. It's too ambiguous to deal
Either "qualia" are ultimately a type of experience that can be communicated to a conscious being who hasn't had the experience, or they cannot. If they can be, they cease to have any distinction from any other communicable fact. If they cannot, you can't actually use them to determine if something is conscious, because nobody can communicate to you their own individual qualia. Either way, qualia by necessity drop out of any theory of consciousness that can classify whether something as inert as a brick is a conscious being or not. And if a theory of consciousness does not predict, either way, whether or not a brick is conscious, then it is a waste of time.
My $0.02, without reading other answers:
1. I'm not sure, but I lean towards (b).
Unpacking a bit: As it is used in the sentence "the sum of 1 and 1 to yield 2 is a computation", my intuition is that something like (a) is meant. That said, it seems likely that this intuition comes from me reasoning about a category of computations as a cognitive shortcut, and then sloppily reifying the category. Human brains do that a lot. So I'm inclined to discard that intuition and assert that statements about 1+1=2 are statements about an abstract category in...
Would an axiomatization of a Theory of Everything that can explain consciousness mention qualia?
LessWrong User:Mitchell_Porter has made some headway on this very interesting question. See his submissions How to think like a quantum monadologist and Consciousness.
My answers:
Your terminology is confused and the question is ill-formed. There is a difference between mathematically abstract computation and implementation. Implementation usually requires energy to carry out, and (based on concerns around reversible computing) it will always take energy to communicate the output of an implemented computation to some other physical system.
The Church-Turing Thesis is probably correct. Moreover, any one of these formalisms can emulate any other, with at most a polynomial overhead in runtime.
That a c
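On the overhead claim above, the textbook cross-model bounds are worth writing down. This is a sketch of standard results, not the commenter's wording, and it is why "polynomial overhead" is the safe general statement:

```latex
% Simulating a k-tape Turing machine that runs in time t(n) >= n:
%   on a single-tape machine: quadratic overhead
%   on a two-tape machine (Hennie-Stearns): t log t overhead
T_{\text{1-tape}}(n) = O\bigl(t(n)^2\bigr), \qquad
T_{\text{2-tape}}(n) = O\bigl(t(n)\log t(n)\bigr)
```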
This post is a followup to "We are not living in a simulation" and intended to help me (and you) better understand the claims of those who took a computationalist position in that thread. The questions below are aimed at you if you think the following statement both a) makes sense, and b) is true:
"Consciousness is really just computation"
I've made it no secret that I think this statement is hogwash, but I've done my best to make these questions as non-leading as possible: you should be able to answer them without having to dismantle them first. Of course, I could be wrong, and "the question is confused" is always a valid answer. So is "I don't know".
1. When you say "consciousness is really just computation," is a computation:

a) Something that an abstract machine does, as in "No oracle Turing machine can compute a decision to its own halting problem"?
b) Something that a concrete machine does, as in "My calculator computed 2+2"?
c) Or, is this distinction nonsensical or irrelevant?
ETA: By the way, I probably won't engage right away with individual commenters on this thread except to answer requests for clarification. In a few days I'll write another post analyzing the points that are brought up.