If such official, at-the-edge outputs are all that matter for computationalism, then dumb-but-fast lookup tables could be conscious, which is a problem.
That's not what I claimed; in fact, I was trying to be careful to discredit that view. I said the system can be arbitrarily divided, and that replacing any part with a different part/black box that gives the same outputs as the original would not affect the rest of the system. Some patterns of replacement of parts remove the conscious parts. Some do not.
This is important because I am trying to establish "red" and other phenomena as relational properties of a system containing both me and a red object. This is something that I think distinguishes my answer from others'.
I'm distinguishing further between two cases: removing my eyes and the red object and replacing them with a black box that sends inputs into my optic nerves, which preserves consciousness; and replacing my brain with a black-box lookup table while keeping my eyes and the object intact, which removes the conscious subsystem of the larger system. Note that some form of the larger system is a requirement for seeing red.
My answer highlights how only some parts of the conscious system are necessary for the output we call consciousness, and guards against confusing ourselves into thinking either that all elements of the conscious computing system are essential to consciousness, or that all may be replaced.
The algorithm is sensitive to certain replacements of its parts with functions, but not to others.
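The input-output equivalence driving this argument can be sketched concretely: a lookup table built by tabulating a function over a finite domain is, from the outside, indistinguishable from the function it replaces, even though nothing resembling the original computation happens inside it. A minimal sketch, where `brain_response` is a hypothetical stand-in for any subsystem's input-output behavior:

```python
# Hypothetical stand-in for a subsystem's input-output behavior;
# any pure function over a finite input domain would do.
def brain_response(stimulus):
    return f"percept-of-{stimulus}"

# Build a "dumb-but-fast" lookup table by tabulating the function
# over its finite input domain.
stimuli = ["red", "green", "blue"]
lookup_table = {s: brain_response(s) for s in stimuli}

# The black-box replacement: answers by retrieval, not computation.
def black_box(stimulus):
    return lookup_table[stimulus]

# From the outside, the two are indistinguishable on this domain.
for s in stimuli:
    assert brain_response(s) == black_box(s)
```

The rest of the system, seeing only the outputs, cannot tell which of the two it is wired to; the dispute is over whether that exhausts what matters.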
This post is a follow-up to "We are not living in a simulation" and is intended to help me (and you) better understand the claims of those who took a computationalist position in that thread. The questions below are aimed at you if you think the following statement both a) makes sense, and b) is true:
"Consciousness is really just computation"
I've made it no secret that I think this statement is hogwash, but I've done my best to make these questions as non-leading as possible: you should be able to answer them without having to dismantle them first. Of course, I could be wrong, and "the question is confused" is always a valid answer. So is "I don't know".
What is computation? Is it:

a) Something that an abstract machine does, as in "No oracle Turing machine can decide its own halting problem"?
b) Something that a concrete machine does, as in "My calculator computed 2+2"?
c) Or is this distinction nonsensical or irrelevant?
ETA: By the way, I probably won't engage right away with individual commenters on this thread except to answer requests for clarification. In a few days I'll write another post analyzing the points that are brought up.