Comment author: Peterdjones 15 April 2011 03:24:20PM 0 points

To say that the surgery is required is to say that there is knowledge not conveyed by third-person descriptions, and that is a problem for sweeping claims of physicalism. That is the philosophical problem; it is a problem about how successful science can be.

The other problem, of figuring out what brains do, is a hard problem, but it is not the same, because it is a problem within science.

Comment author: dfranke 15 April 2011 03:32:43PM * 2 points

To say that the surgery is required is to say that there is knowledge not conveyed by third-person descriptions, and that is a problem for sweeping claims of physicalism.

No it isn't. All it says is that the parts of our brain that interpret written language are hooked up to different parts of our hippocampus than our visual cortex is, and that no set of signals on one input port will ever cause the hippocampus to react in the same way that signals on the other port will.

Comment author: Peterdjones 15 April 2011 02:24:58PM * 0 points

The claim that consciousness is fame in the brain, and the claim that qualia are incommunicable because of complexity are somewhat contradictory, because what is made famous in the brain can be subjectively quite simple, but remains incommunicable.

A visual field of pure blue, or a sustained note of C#, is not fundamentally easier to convey than some complex sensation. Whilst there may be complex subconscious processing and webs of association involved in the production of qualia, qualia can be simple as presented to consciousness. The way qualia seem is the way they are, since they are defined as seemings. And these apparently simple qualia are still incommunicable, so the problem of communicating qualia is not the problem of communicating complexity.

Something that is famous in the brain needs to have a compelling quality, and some qualia, such as pains, have that in abundance. However, others do not. The opposite of blindsight (access consciousness without phenomenal consciousness) is phenomenal consciousness without access consciousness, for instance seeing something out of the corner of one's eye. Not only are qualia not uniformly compelling, but one can have mental content that is compelling yet cognitive rather than phenomenal, for instance an obsession or idée fixe.

"And if someone did know enough about color to explain all the associations that it has, well, having associations explained to you isn't normally enough for you to make the same associations in the same way yourself, "

To some physicalists, it seems obvious that a physical description of a brain state won't convey what that state is like, because it doesn't put you into that state. Of course, a description of a brain state won't put you into a brain state, any more than a description of photosynthesis will make you photosynthesise. But we do expect that the description of photosynthesis is complete, and actually being able to photosynthesise would not add anything to our knowledge. We don't expect that about experience. We expect that to grasp what the experience is like, you have to have it.

If the third-person description told you what the experience was like, explained it experientially, the question of instantiating the brain state would be redundant. The fact that these physicalists feel it would be in some way necessary means they subscribe to some special, indescribable aspect of experience, even in contradiction to the version of physicalism that states that everything can be explained in physicalese. Everything means everything, including some process whereby things seem different from the inside than they look from the outside. They still subscribe to the idea that there is a difference between knowledge-by-acquaintance and knowledge-by-description, and that is the distinction that causes the trouble for all-embracing explanatory physicalism.

Weaker forms of physicalism are still possible, however.

"can say that when I've read articles about how echolocation works, and what sorts of things it reveals or conceals, I've felt like I know a tiny bit more about what it's like to be a bat than I did before reading the articles."

But everyone has the experience of suddenly finding out a lot more about something when they experience it themselves. That is what underpins the knowledge-by-acquaintance versus knowledge-by-description distinction.

Comment author: dfranke 15 April 2011 03:10:40PM * 3 points

I think that the "Mary's Room" thought experiment leads our intuitions astray in a direction completely orthogonal to any remotely interesting question. The confusion can be clarified by taking a biological view of what "knowledge" means. When we talk about our "knowledge" of red, what we're talking about is what experiencing the sensation of red did to our hippocampus. In principle, you could perform surgery on Mary's brain that would give her the same kind of memory of red that anyone else has, and given the appropriate technology she could perform the same surgery on herself. However, in the absence of any source of red light, the surgery is required. No amount of simple book study is ever going to impact her brain the same way the surgery would, and this distinction is what leads our intuitions astray. Clarifying this, however, does not bring us any closer to solving the central mystery, which is just what the heck is going on in our brain during the sensation of red.

Comment author: lessdazed 15 April 2011 06:40:43AM 1 point

making the same argument that I am, merely in different vocabulary

I don't necessarily understand your argument. Recall I don't understand one of your questions. I think you disagree with some of my answers to your questions, but you hinted that you don't think my answers are inconsistent. So I'm really not sure what's going on.

If the computer-with-spark-plugs-attached is conscious...do you still consider this confirmation of substrate independence?

Not every substance can perform every sub-part role in a consciousness-producing computation, so there's a limit to "independence". Insofar as it means an entity comprised entirely of non-biological parts can be conscious, which is the usual point of contention, a conscious system made up of a normal computer plus mechanical parts obviously shows that, so I'm not sure what you mean.

To me, what is important is to establish that there's nothing magical about bio-goo needed for consciousness, and as far as exactly which possible computers are conscious, I don't know.

If you replace the guy moving the rocks around with a crudely-built robot moving the rocks in the same pattern, do you think it's plausible that anything in that system experiences human-like consciousness?

Plausible? What does that mean, exactly?

Comment author: dfranke 15 April 2011 12:51:55PM * 0 points

Plausible? What does that mean, exactly?

What subjective probability would you assign to it?

Not every substance can perform every sub-part role in a consciousness-producing computation, so there's a limit to "independence". Insofar as it means an entity comprised entirely of non-biological parts can be conscious, which is the usual point of contention, a conscious system made up of a normal computer plus mechanical parts obviously shows that, so I'm not sure what you mean.

I don't know what the "usual" point of contention is, but this isn't the one I'm taking a position in opposition to Bostrom on. Look again at my original post and how Bostrom defined substrate-independence and how I paraphrased it. Both Bostrom's definition and mine mean that xkcd's desert and certain Giant Look-Up Tables are conscious.

Comment author: lessdazed 15 April 2011 03:09:48AM 1 point

I have no objection to this position. However, it does not imply substrate independence, and strongly suggests its negation.

I disagree, and think that in any case substrate independence is of two types: replacing basic units with complex units, and replacing complex units with other complex units. Replacing basic units with complex units that do the same thing the basic unit did preserves equations that treated the basic unit as basic. I will attempt to explain.

Consciousness is presumably not a unique property of one specific system. If you've been conscious over the course of reading this sentence, multiple physical patterns have been conscious. I am quite different than I was ten years ago and am also quite different than my grandmother and someone living in an uncontacted tribe, also conscious beings. If all humans are conscious, no line between consciousness and non-consciousness will be found within the range of human brain variation.

Whole brains (complex things) can be replaced with giant lookup tables (different complex things) without preserving consciousness. The output of "Yes" as an answer to a specific question may be identical between the systems, but the internal computations are different, so it is logically possible that the new computations are not within the wide realm of computations that produce consciousness.

Above I was referring to replacing complex biological units with complex mechanical units, in which case "substrate independence" will depend on the specifics of the replacement done. However, any replacement of a basic unit with a more complicated unit that gives the same output for each input will leave the conscious system intact, as the old equations will not be altered.

For example: suppose that a mechanical system of gears and pulleys produces knives (or consciousness) and clanks. It is possible to replace a gear with a sub-system consisting of: a set of range finders, a computer, mechanical hands, and speakers. The sub-system can measure what surrounding gears are doing and use the hands to spin gears as if the missing gear were in place, and use the speakers to make noises as if the old gear was in place.

Everything produced by the old system will also be produced by the new system, though the new system may also produce something else, such as GTA on the computer. This is because we replaced a basic unit with a more complicated system that produces additional things.

Similarly, replacing biological cells with analogously functional mechanical cells should certainly preserve consciousness. Probably, but not by logical necessity, cells are not needed to produce consciousness as a computed output.

tl;dr: computationalism implies substrate independence insofar as anything upon which computations act may be replaced by anything of any form, with the only requirement being to give the same outputs as the old unit would have. Anything a computation uses by mapping it first may be replaced by anything that would be identically mapped.
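The lookup-table point above can be made concrete with a toy sketch (the functions are hypothetical illustrations, not anything from the thread): a function that actually computes, and a precomputed table that merely retrieves, are externally indistinguishable over the table's domain, yet the internal process is entirely different.

```python
# Two systems with identical input-output behavior but different internals.

def computed(n: int) -> int:
    """'Brain': actually performs the computation each time."""
    total = 0
    for i in range(n + 1):
        total += i
    return total

# 'Giant lookup table': every answer precomputed, then merely retrieved.
TABLE = {n: n * (n + 1) // 2 for n in range(1000)}

def looked_up(n: int) -> int:
    return TABLE[n]

# Externally indistinguishable over the table's domain...
assert all(computed(n) == looked_up(n) for n in range(1000))
# ...but the internal computations differ, so identical outputs need not
# imply identical (consciousness-producing) computations.
```

The asymmetry in the comment is visible here too: swapping in `looked_up` for `computed` preserves every equation that treated the unit as a black box, while saying nothing about what happens inside it.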

Comment author: dfranke 15 April 2011 03:38:47AM 0 points

This sounds an awful lot like "making the same argument that I am, merely in different vocabulary". You say po-tay-to, I say po-tah-to, you say "computations", I say "physical phenomena". Take the example of the spark-plug brain from my earlier post. If the computer-with-spark-plugs-attached is conscious but the computer alone is not, do you still consider this confirmation of substrate independence? If so, then I think you're using an even weaker definition of the term than I am. How about xkcd's desert? If you replace the guy moving the rocks around with a crudely-built robot moving the rocks in the same pattern, do you think it's plausible that anything in that system experiences human-like consciousness? If you say "no", then I don't know whether we're disagreeing on anything.

In response to Levels of Action
Comment author: dfranke 14 April 2011 09:55:52PM 14 points

The most important difference between Level 1 and Level 2 actions is that Level 1 actions tend to be additive, while Level 2 actions tend to be multiplicative. If you do ten hours of work at McDonald's, you'll get paid ten times as much as if you did one hour; the benefits of the hours add together. However, if you take ten typing classes, each one of which improves your ability by 20%, you'll be 1.2^10 = 6.2 times better at the end than at the beginning: the benefits of the classes multiply (assuming independence).

I'm trying to think of anything in life that actually works this way and I can't. If I start out being able to type at 20 WPM, taking 100 typing classes is not going to improve that to 1.6 billion WPM; neither is taking 1000 classes or 10000. These sorts of payoffs tend to be roughly logarithmic, not exponential.
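Both sides' arithmetic checks out and can be verified directly; the numbers below come from the two comments, while the logarithmic model is just one illustrative way to express diminishing returns (its parameters are made up):

```python
import math

# Parent's multiplicative model: ten classes, each multiplying skill by 1.2.
print(round(1.2 ** 10, 1))   # prints 6.2, the parent's figure

# Extrapolated to 100 classes from a 20 WPM baseline:
print(20 * 1.2 ** 100)       # ~1.6 billion WPM, which is clearly absurd

# A hypothetical logarithmic model of diminishing returns instead:
def wpm(classes: int, base: float = 20.0, gain: float = 15.0) -> float:
    return base + gain * math.log1p(classes)

for c in (0, 10, 100, 1000):
    print(c, round(wpm(c), 1))
```

Under the logarithmic model, going from 100 classes to 1000 adds roughly as much speed as going from 10 to 100 did, which matches the intuition that real skill payoffs flatten out rather than compound.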

Comment author: pjeby 14 April 2011 07:35:25PM * 3 points

You've missed a major position: that the entire idea of "substrate independence" is a red herring. Detecting the similarity of two patterns is something that happens in your brain, not something that's part of reality.

This whole thing, AFAICT, is an attempt to have an argument war, rather than an attempt to understand/find truth. It is possible that no position on this subject makes any sense whatsoever, for example.

Or, to put it another way, failure to offer a coherent refutation of an incoherent hypothesis doesn't represent evidence for the incoherent hypothesis.

Comment author: dfranke 14 April 2011 07:50:52PM 1 point

Detecting the similarity of two patterns is something that happens in your brain, not something that's part of reality.

If I'm correctly understanding what you mean by "part of reality" here, then I agree. This kind of "similarity" is another unnatural category. When I made reference in my original post to the level of granularity "sufficient in order to model all the essential features of human consciousness", I didn't mean this as a binary proposition; I just meant it to be sufficient that if, while you slept, somebody made changes to your brain at any smaller level, you wouldn't wake up thinking "I feel weird".

Comment author: shokwave 14 April 2011 05:29:51PM 5 points

To me, the empirical evidence in support of the existence of qualia is so clear and so immediate that I can't figure out what you're not seeing so that I can point to it.

I ... don't think there's much empirical support for the actual existence of the painfulness of pain. Sure, humans experience pain in very similar ways, and you can lump all those experiences into the category pain, and talk about what characteristics are present in all the category members, but those common characteristics aren't a physical object somewhere called painfulness.

As for how this bears on Bostrom's simulation argument: I'm not properly familiar with it, but how much of its force does it lose by not being able to appeal to consciousness-based reference classes and the like? I can't see how that would make simulations impossible; nearest I can guess is that it harms his conclusion that we are probably in a simulation?

That can be repaired in other ways: given that time travels in one direction for us, our experiences have one chance to be in the real universe, and n chances to be in simulated universes, where n is the total computational power that will ever be directed at simulating historical moments, divided by the computational cost of simulating a historical moment multiplied by the number of moments at least as interesting as this one. Even if you assign a low probability to the future containing that computational power (i.e. we nuke ourselves before Matrioshka shells or Jupiter brains are completed, or something), that low chance times n is still large relative to 1. So our prior for being in a simulation should still be high.
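The paragraph's reasoning can be written out as a naive weighting (all numbers below are hypothetical placeholders, not estimates from the thread): one "real" chance against n simulated chances, with the simulated chances discounted by the probability q that the required computational power ever exists.

```python
def p_simulated(n: float, q: float) -> float:
    """Naive prior for being simulated: n simulated 'chances', each weighted
    by probability q that such computational power ever exists, vs. 1 real
    chance. Returns q*n / (q*n + 1)."""
    return (q * n) / (q * n + 1)

# Even a very pessimistic q gives a high prior when n is astronomically large:
print(p_simulated(n=1e12, q=1e-6))  # ~0.999999
```

This is just the "low chance times n is still large relative to 1" step made explicit; the whole argument stands or falls on whether counting "chances" this way is legitimate, which is exactly what the reply below disputes.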

Comment author: dfranke 14 April 2011 06:31:11PM * 0 points

As for how this bears on Bostrom's simulation argument: I'm not properly familiar with it, but how much of its force does it lose by not being able to appeal to consciousness-based reference classes and the like? I can't see how that would make simulations impossible; nearest I can guess is that it harms his conclusion that we are probably in a simulation?

Right. All the probabilistic reasoning breaks down, and if your re-explanation patches things at all I don't understand how. Without reference to consciousness I don't know how to make sense of the "our" in "our experiences". Who is the observer who is sampling himself out of a pool of identical copies?

Anthropics is confusing enough to me that it's possible that I'm making an argument whose conclusion doesn't depend on its hypothesis, and that the argument I should actually be making is that this part of Bostrom's reasoning is nonsense regardless of whether you believe in qualia or not.

Comment author: zaph 14 April 2011 05:36:11PM 1 point

I guess the only quibble I would have, and I don't know that it really changes your critique much, is that I wrote that neurons would be some sort of gate equivalent. I wouldn't say that neurons have a simple gate model (that they're simply an AND or an XOR, for instance). But I do see them as being in some sense Boolean. Anyway, I would just try to clarify my fairly short answer to say that I believe that computation can always be broken down into smaller Boolean steps, and these steps could be rendered in many different media.

Computationality in any fashion needs to be reified by physics, doesn't it? Otherwise it wouldn't exist. Now, I would say it's an emergent feature; physics doesn't need to provide anything beyond what is provided for anything else to explain it. Maybe that's the point of contention?

Comment author: dfranke 14 April 2011 05:54:37PM * 0 points

I'm not trying to hold you to any Platonic claim that there's any unique set of computational primitives that are more ontologically privileged than others. It's of course perfectly equivalent to say that it's NOR gates that are primitive, or that you should be using gates with three-state rather than two-state inputs, or whatever. But whatever set of primitives you settle on, you need to settle on something, and I don't think there's any such something which invalidates my claim about K-complexity when expressed in formal language familiar to physics.

Comment author: JoshuaZ 14 April 2011 03:35:35PM * 2 points

Perplexed intended to contrast science - where it is not respectable to take a position in advance of evidence (pace Karl P.) - with philosophy - where it is the taking and defending of positions which drives the whole process

Thanks for clarifying. Is that true though? If so, I'd suggest that that might be a problem about how we do philosophy more than anything else. If I don't have evidence or good arguments either way on a philosophical question I shouldn't take a stand on it. I should just acknowledge the weak arguments for or against the relevant positions.

Comment author: dfranke 14 April 2011 03:40:24PM * 1 point

There are no specifically philosophical truths, only specifically philosophical questions. Philosophy is the precursor to science; its job is to help us state our hypotheses clearly enough that we can test them scientifically. ETA: For example, if you want to determine how many angels can dance on the head of a pin, it's philosophy's job to either clarify or reject as nonsensical the concept of an angel, and then in the former case to hand off to science the problem of tracking down some angels to participate in a pin-dancing study.

Comment author: JoshuaZ 14 April 2011 02:56:40PM 4 points

I agree, and furthermore this is a true statement regardless of whether you classify the problem as philosophical or scientific. You can't do science without picking some hypotheses to test.

That's not strictly speaking true. First of all, this doesn't quite match what Perplexed said, since Perplexed was talking about taking a position. I can decide to test a hypothesis without taking a position on it. Second of all, a lot of good science is just "let's see what happens if I do this." A lot of early chemistry was just sticking together various substances and seeing what happened. Similarly, a lot of the early work with electricity was just systematically seeing what could and could not conduct. It was only later that patterns any more complicated than "metals conduct" developed. (Priestley's The History and Present State of Electricity gives a detailed account of the early research into electricity by someone who was deeply involved in it. The archaic language is sometimes difficult to read, but overall the book is surprisingly readable and interesting for something that he wrote in the mid-1700s.)

Comment author: dfranke 14 April 2011 03:04:56PM 0 points

Those early experimenters with electricity were still taking a position whether they knew it or not: namely, that "will this conduct?" is a productive question to ask -- that if p is the subjective probability that it will, then p*(1-p) is a sufficiently large value that the experiment is worth their time.
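The quantity p*(1-p) in the comment is the variance of a yes/no outcome: it is largest when you are most uncertain, and tiny when the answer is nearly foregone. A quick check (the values are illustrative only):

```python
def experiment_value(p: float) -> float:
    """Rough value of asking 'will this conduct?' when p is the prior
    probability that it will: p*(1-p), largest at maximal uncertainty."""
    return p * (1 - p)

for p in (0.01, 0.5, 0.99):
    print(p, experiment_value(p))
# Maximal at p = 0.5 (value 0.25); near-certain questions are barely worth asking.
```

So the early experimenters' implicit "position" was just that p sat far enough from 0 and 1 for the measurement to pay for itself.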
