torekp comments on Three consistent positions for computationalists - Less Wrong

Post author: dfranke 14 April 2011 01:15PM




Comment author: lessdazed 15 April 2011 03:09:48AM 1 point

> I have no objection to this position. However, it does not imply substrate independence, and strongly suggests its negation.

I disagree, and in any case I think substrate independence comes in two types, corresponding to two directions of replacement: replacing basic units with complex units, and replacing complex units with other complex units. Replacing a basic unit with a complex unit that does the same thing the basic unit did preserves any equations that treated the basic unit as basic. I will attempt to explain.

Consciousness is presumably not a unique property of one specific system. If you have been conscious over the course of reading this sentence, then multiple physical patterns have been conscious. I am quite different from what I was ten years ago, and quite different from my grandmother or from someone living in an uncontacted tribe, who are also conscious beings. If all humans are conscious, no line between consciousness and non-consciousness will be found within the range of human brain variation.

A whole brain, one complex thing, can be replaced with a giant lookup table, a different complex thing, without consciousness being preserved. The output of "Yes" as an answer to a specific question may be identical between the two systems, but the internal computations are different, so it is logically possible that the new computations fall outside the wide realm of computations that produce consciousness.
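The lookup-table point can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the names `brain` and `glut` and the toy questions are invented for the example): the two functions are externally indistinguishable on the recorded domain, yet one computes an answer while the other merely retrieves a prerecorded one.

```python
def brain(question: str) -> str:
    # The original complex system: actually computes its answer.
    return "Yes" if question == "Are you conscious?" else "No"

# A "giant lookup table" built by recording brain's answers in advance.
# After construction, no internal computation remains, only retrieval.
QUESTIONS = ["Are you conscious?", "Is grass purple?"]
LOOKUP_TABLE = {q: brain(q) for q in QUESTIONS}

def glut(question: str) -> str:
    # Identical outputs, entirely different internal process.
    return LOOKUP_TABLE[question]

# Input-output behavior matches on every recorded question:
assert all(brain(q) == glut(q) for q in QUESTIONS)
```

Identical outputs do not entail identical internal computations, which is why output equivalence alone leaves open whether the replacement computations are among those that produce consciousness.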

Above I was referring to replacing complex biological units with complex mechanical units, where "substrate independence" depends on the specifics of the replacement. By contrast, any replacement of a basic unit with a more complicated unit that gives the same output for each input will leave the conscious system intact, because the old equations are not altered.

For example: suppose that a mechanical system of gears and pulleys produces knives (or consciousness) and clanks. It is possible to replace a gear with a sub-system consisting of a set of range finders, a computer, mechanical hands, and speakers. The sub-system can measure what the surrounding gears are doing, use the hands to spin them as if the missing gear were in place, and use the speakers to make noises as if the old gear were in place.

Everything produced by the old system will also be produced by the new system, though the new system may also produce something else, such as GTA running on the computer. This is because we replaced a basic unit with a more complicated system that produces additional things.

Similarly, replacing biological cells with functionally analogous mechanical cells should certainly preserve consciousness. Probably, though not by logical necessity, cells are not needed at all to produce consciousness as a computed output.

tl;dr: computationalism implies substrate independence insofar as anything a computation acts upon may be replaced by anything of any form, the only requirement being that it give the same outputs as the old unit would have. Anything a computation uses by first mapping it may be replaced by anything that would be mapped identically.

Comment author: torekp 16 April 2011 12:04:39AM 1 point

Agreed that "replacing biological cells with analogously functional mechanical cells should certainly preserve consciousness," but this is a very limited sort of substrate "independence". This approach makes the difficulty of producing an AI with consciousness-as-we-know-it much more severe. Evolution finds local optima, while intelligent design is more flexible, so I expect AI to take off, at some point, much faster and more successfully in a different direction than brain emulation.

Like dfranke, I favor option #2, but like peterdjones, I don't think it fits under "computationalism".