All of dfranke's Comments + Replies

dfranke10

She can understand the sequence of chemical reactions that comprises the Calvin cycle just as she can understand what neural impulses occur when red light strikes retinal rods, but she can't form the memory of either one occurring within her body.

0Peterdjones
Which, yet again, only matters if there is something special about qualia that requires memory or instantiation in the body to be understood. She can understand the Calvin Cycle full stop.
dfranke10

They're computationally equivalent by hypothesis. The thesis of substrate independence is that as far as consciousness is concerned the side effects don't matter and that capturing the essential sameness of the "AND" computation is all that does. If you're having trouble understanding this, I can't blame you in the slightest, because it's that bizarre.

0complexmeme
(Didn't realize this site doesn't email reply notifications, thus the delayed response.) What I'm saying is that someone who answers "algorithms" is clearly not taking that view of substrate-independence, but they could hypothesize that only some side-effects matter. A MOSFET-brain-simulation and a desert-rocks-brain-simulation could share computational properties beyond input-output, even though the side-effects are clearly not identical. (Not saying that I endorse that hypothesis, just that it's not the same as the "side effects don't matter" version.)
dfranke00

Yes, I agree that this kind of atomism is silly, and by implication that things like Drescher's gensym analogy are even sillier. Nonetheless, the black box needs a label if we want to do something besides point at it and grunt.

0Perplexed
I'm saying that there wasn't a box until someone felt the need to label something. The various phenomena which are being grouped together as qualia are not (or rather are not automatically) a natural kind.
dfranke30

I should have predicted that somebody here was going to call me on that. I accept the correction.

dfranke10

Maybe this analogy is helpful: saying "qualia" isn't giving us insight into consciousness any more than saying "phlogiston" is giving us insight into combustion. However, that doesn't mean that qualia don't exist or that any reference to them is nonsensical. Phlogiston exists. However, in our better state of knowledge, we've discarded the term and now we call it "hydrocarbons".

-1Peterdjones
The word "qualia" doesn't have to justify its existence by providing a solution. It can justify its use by outlining a problem.
0Perplexed
Not really helpful (though I don't see why it deserved a downvote). It is not that I object to the term 'qualia' because I think it is a residue of discredited worldviews. I object to the term because I have never seen a clear enough exposition of the term so that I could understand/appreciate the concept pulling any weight in an argument. And, as I stated earlier, I particularly object when philosophers offer color qualia as paradigmatic examples of atomic, primitive qualia. Haven't philosophers ever read a science book? Color vision has been well understood for some time. Cones and rods, cones of three kinds, and all that. So color sensation is not primitive. And moving up a level from neurons to mind, I cannot imagine how anyone might suggest that there is a higher-level "experience" of the color green which is so similar to an experience of smell-of-mothballs or an experience of A-major-chord that all three are instances of the same thing - qualia.
dfranke00

My conclusion in the Mary's room thought experiment doesn't challenge either of these versions: something new happens when she steps outside, and there's a perfectly good purely physical explanation of what and why. It is nothing more than an artifact of how human brains are built that Mary is unable to make the same physical thing happen, with the same result, without the assistance of either red light or appropriate surgical tools. A slightly more evolved Mary with a few extra neurons leading into her hippocampus would have no such difficulty.

-2Peterdjones
Mary still doesn't have to make anything special happen to her brain to have knowledge of anything else. She can still understand photosynthesis without photosynthesising.
1TheOtherDave
Incidentally, while agreeing with your main point, I feel I ought to challenge the implications of "more evolved." This has nothing to do with Mary's position on some scale of evolution; she could be "less evolved" and have those neurons, or "more evolved" and lack them.
dfranke40

Can you state what that version is? Whatever it is, it's nothing I subscribe to, and I call myself a physicalist.

0Peterdjones
There are, broadly speaking, two versions of physicalism: ontological physicalism, according to which everything that exists is material, spatio-temporal, etc.; and epistemological physicalism, according to which everything can be explained in physical terms. Physicalism can be challenged by the inexplicability of qualia in two ways. Firstly, qualia might be physically inexplicable because they are not physical things, which contradicts ontological physicalism. Secondly, the physical inexplicability of qualia might be down to their having a first-person epistemology, which contradicts epistemological physicalism. Epistemological physicalism requires that everything be explicable in physical terms, which implies that everything is explicable in objective, descriptive, public, third-person terms. If there are some things which can only be known by acquaintance, subjectively, in first-person terms, then it is not the case that everything can be explained in physicalese. However, ontological physicalism could still hold.
dfranke00

When she steps outside, something physical happens in her brain that has never happened before. Maybe something "non-physical" (huh?) also happens, maybe it doesn't. We have gained no insight.

-2Peterdjones
If we agree that she learns something on stepping outside we have learnt that a version of physicalism is false.
dfranke20

She is specifically not supposed to be pre-equipped with experiential knowledge, which means her brain is in one of the physical states of a brain that has never seen colour.

Well, then when she steps outside, her brain will be put into a physical state that it's never been in before, and as a result she will feel enlightened. This conclusion gives us no insight whatsoever into what exactly goes on during that state-change or why it's so special, which is why I think it's a stupid thought-experiment.

1Peterdjones
It isn't intended to answer your question about neuroscience. It is intended to pose the philosopher's question about the limitations of physicalism. If physicalism is limited, that eventually folds back to your question, since one way of explaining the limitation of physicalism is that there are non-physical things going on.
dfranke10

The very premise of "Mary is supposed to have that kind of knowledge" implies that her brain is already in the requisite configuration that the surgery would produce. But if it's not already in that configuration, she's not going to be able to get it into that configuration just by looking at the right sequence of squiggles on paper. All knowledge can be represented by a bunch of 1's and 0's, and Mary can interpret those 1's and 0's as a HOWTO for a surgical procedure. But the knowledge itself consists of a certain configuration of neurons, not 1's and 0's.

-2Peterdjones
No, the premise of the Mary argument is that Mary has all possible book-larnin' or third-person knowledge. She is specifically not supposed to be pre-equipped with experiential knowledge, which means her brain is in one of the physical states of a brain that has never seen colour. No, she is not going to be able to instantiate a red quale through her book learning: that is not what is at issue. What is at issue is why she would need to. Third-person knowledge does not essentially change on translation from book to paper to CD, and for that matter it should not essentially change when loaded into a brain. And in most cases, we think it doesn't. We don't think that knowledge of photosynthesis means photosynthesising in your head. You share the qualiaphobes' assumption that there is something special about knowledge of qualia that requires instantiation.
dfranke20

To say that the surgery is required is to say that there is knowledge not conveyed by third-person descriptions, and that is a problem for sweeping claims of physicalism.

No it isn't. All it says is that the parts of our brain that interpret written language are hooked up to different parts of our hippocampus than our visual cortex is, and that no set of signals on one input port will ever cause the hippocampus to react in the same way that signals on the other port will.

-2Peterdjones
But if physicalism is correct, one could understand all that in its entirety from a third person POV, just as one can understand photosynthesis without photosynthesising. And of course, Mary is supposed to have that kind of knowledge. But you think that knowledge of how her brain works from the outside is inadequate, and she has to make changes to her brain so she can view them from the inside.
dfranke40

I think that the "Mary's Room" thought experiment leads our intuitions astray in a direction completely orthogonal to any remotely interesting question. The confusion can be clarified by taking a biological view of what "knowledge" means. When we talk about our "knowledge" of red, what we're talking about is what experiencing the sensation of red did to our hippocampus. In principle, you could perform surgery on Mary's brain that would give her the same kind of memory of red that anyone else has, and given the appropriate tech... (read more)

-2Peterdjones
To say that the surgery is required is to say that there is knowledge not conveyed by third-person descriptions, and that is a problem for sweeping claims of physicalism. That is the philosophical problem; it is a problem about how successful science could be. The other problem, of figuring out what brains do, is a hard problem, but it is not the same, because it is a problem within science.
dfranke00

Plausible? What does that mean, exactly?

What subjective probability would you assign to it?

Not every substance can perform every sub-part role in a consciousness producing computation, so there's a limit to "independence". Insofar as it means an entity comprised entirely of non-biological parts can be conscious, which is the usual point of contention, a conscious system made up of a normal computer plus mechanical parts obviously shows that, so I'm not sure what you mean.

I don't know what the "usual" point of contention is, but t... (read more)

dfranke00

This sounds an awful lot like "making the same argument that I am, merely in different vocabulary". You say po-tay-to, I say po-tah-to, you say "computations", I say "physical phenomena". Take the example of the spark-plug brain from my earlier post. If the computer-with-spark-plugs-attached is conscious but the computer alone is not, do you still consider this confirmation of substrate independence? If so, then I think you're using an even weaker definition of the term than I am. How about xkcd's desert? If you replace the ... (read more)

1lessdazed
I don't necessarily understand your argument. Recall I don't understand one of your questions. I think you disagree with some of my answers to your questions, but you hinted that you don't think my answers are inconsistent. So I'm really not sure what's going on. Not every substance can perform every sub-part role in a consciousness producing computation, so there's a limit to "independence". Insofar as it means an entity comprised entirely of non-biological parts can be conscious, which is the usual point of contention, a conscious system made up of a normal computer plus mechanical parts obviously shows that, so I'm not sure what you mean. To me, what is important is to establish that there's nothing magical about bio-goo needed for consciousness, and as far as exactly which possible computers are conscious, I don't know. Plausible? What does that mean, exactly?
dfranke230

The most important difference between Level 1 and Level 2 actions is that Level 1 actions tend to be additive, while Level 2 actions tend to be multiplicative. If you do ten hours of work at McDonald's, you'll get paid ten times as much as if you did one hour; the benefits of the hours add together. However, if you take ten typing classes, each one of which improves your ability by 20%, you'll be 1.2^10 = 6.2 times better at the end than at the beginning: the benefits of the classes multiply (assuming independence).

I'm trying to think of anything in lif... (read more)
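The additive-versus-multiplicative contrast in the comment above can be sketched numerically (the hourly wage and the 20%-per-class figure are the comment's hypotheticals, not real data):

```python
# Additive (Level 1): ten hours at a fixed hourly wage simply sum.
hourly_wage = 10  # hypothetical rate
additive = sum(hourly_wage for _ in range(10))  # 10 hours -> 10x one hour

# Multiplicative (Level 2): ten classes, each improving skill by 20%,
# compound -- assuming, as the comment does, that the gains are independent.
skill = 1.0
for _ in range(10):
    skill *= 1.2
# skill == 1.2 ** 10, roughly 6.2x better than at the start
```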

8[anonymous]
I think that training in orthogonal but complementary skills more closely matches the point being made. For example, training in typing is orthogonal to, well, pretty much anything, and complementary to many things, such as programming or novel writing.
dfranke20

Detecting the similarity of two patterns is something that happens in your brain, not something that's part of reality.

If I'm correctly understanding what you mean by "part of reality" here, then I agree. This kind of "similarity" is another unnatural category. When I made reference in my original post to the level of granularity "sufficient in order to model all the essential features of human consciousness", I didn't mean this as a binary proposition; just for it to be sufficient that if while you slept somebody made changes to your brain at any smaller level, you wouldn't wake up thinking "I feel weird".

3pjeby
I have no reason to assume that you couldn't replace me entirely, piece by piece. After all, I have different cells now than I did previously, and will have different cells later, and all the while still perceive myself the same. The only thing weird here, is the idea that I would somehow notice. I mean, if I could notice, it wouldn't be a very good replacement, would it? (Actually, given my experience with mind hacking, my observation is that it's very difficult to notice certain background characteristics of one's thought processes, such that even if a machine translation did introduce a systematic distortion, it seems unlikely to me that anyone would notice it in themselves, at least easily or at first!)
dfranke00

As for how this bears on Bostrom's simulation argument: I'm not familiarized with it properly, but how much of its force does it lose by not being able to appeal to consciousness-based reference classes and the like? I can't see how that would make simulations impossible; nearest I can guess is that it harms his conclusion that we are probably in a simulation?

Right. All the probabilistic reasoning breaks down, and if your re-explanation patches things at all I don't understand how. Without reference to consciousness I don't know how to make sense of th... (read more)

dfranke00

I'm not trying to hold you to any Platonic claim that there's any unique set of computational primitives that are more ontologically privileged than others. It's of course perfectly equivalent to say that it's NOR gates that are primitive, or that you should be using gates with three-state rather than two-state inputs, or whatever. But whatever set of primitives you settle on, you need to settle on something, and I don't think there's any such something which invalidates my claim about K-complexity when expressed in formal language familiar to physics.

dfranke20

There are no specifically philosophical truths, only specifically philosophical questions. Philosophy is the precursor to science; its job is to help us state our hypotheses clearly enough that we can test them scientifically. ETA: For example, if you want to determine how many angels can dance on the head of a pin, it's philosophy's job to either clarify or reject as nonsensical the concept of an angel, and then in the former case to hand off to science the problem of tracking down some angels to participate in a pin-dancing study.

dfranke00

Those early experimenters with electricity were still taking a position whether they knew it or not: namely, that "will this conduct?" is a productive question to ask -- that if p is the subjective probability that it will, then p(1-p) is a sufficiently large value that the experiment is worth their time.

2JoshuaZ
Ok. Yes, this connects to the theory-laden nature of observation and experimentation. But that's distinct from having any substantial hypotheses about the nature of electricity which would be closer to the sort of thing that would be analogous to what Perplexed was talking about. (It is possible that I'm misinterpreting the statement's intention.)
dfranke20

I didn't list this position because it's out of scope for the topic I'm addressing. I'm not trying to address every position on the simulation hypothesis; I'm trying to address computationalist positions. If you think we are completely in the dark on the matter, you can't be endorsing computationalists, who claim to know something.

dfranke-10

I agree, and furthermore this is a true statement regardless of whether you classify the problem as philosophical or scientific. You can't do science without picking some hypotheses to test.

6JoshuaZ
That's not strictly speaking true. First of all, this doesn't quite match what Perplexed said since Perplexed was talking about taking a position. I can decide to test a hypothesis without taking a position on it. Second of all, a lot of good science is just "let's see what happens if I do this." A lot of early chemistry was just sticking together various substances and seeing what happened. Similarly, a lot of the early work with electricity was just systematically seeing what could and could not conduct. It was only later that patterns any more complicated than "metals conduct" developed. (Priestley's The History and Present State of Electricity gives a detailed account of the early research into electricity by someone who was deeply involved in it. The archaic language is sometimes difficult to read but overall the book is surprisingly readable and interesting for something that he wrote in the mid 1700s.)
dfranke30

I'll save my defense of these answers for my next post, but here are my answers:

  1. Both of them.
  2. Yes. The way I understand these words, this is a tautology.
  3. No. Actually, hell no.
  4. N/A
  5. Yes; a. I'm not quite sure how to make sense of "probability" here, but something strictly between 0 and 1; b. Yes.
  6. Negligibly larger than 0.
  7. 1, tautologically.
  8. For the purposes of this discussion, "No". In an unrelated discussion about epistemology, "No, with caveats."
  9. This question is nonsense.
  10. No.
  11. If I answered "yes" to this, it wou
... (read more)
dfranke00

I haven't read that other thread; can I ask what your opinions are? Briefly of course, and while I can't speak for everyone else, I promise to read them as thumbnails and not absolute statements to be used against you. You could point to writers (Searle? Penrose?) if you like.

Searle, to a zeroth approximation. His claims need some surgical repair, but you can do that surgery without killing the patient. See my original post for some "first aid".

dfranke00

I also think there is a big difference between c) "nonsensical" and c) "irrelevant".

I didn't mean to imply otherwise. I meant the "or" there as a logical inclusive or, not a claim of synonymy.

dfranke00

I'm not sure what you mean by an abstract machine (and please excuse me if that's a formal term)

I'd certainly regard anything defined within the framework of automata theory as an abstract machine. I'd probably accept substitution of a broader definition.

dfranke00

s/are not zombies/have qualia/ and you'll get a little more accurate. A zombie, supposing such a thing is possible (which I doubt for all the reasons given in http://lesswrong.com/lw/p7/zombies_zombies ), is still a real, physical object. The objects of a simulation don't even rise to zombie status.

4Jonathan_Graehl
It's really unclear what you mean by 'zombie', 'real, physical object', and 'objects of a simulation'. But you're right that Kevin meant by 'zombie' exactly 'us without qualia'. I thought this was obvious in context.
0jsalvatier
If you are not arguing for zombies, I am really confused about what you're trying to argue for.
2Kevin
What is a physical object?
dfranke50

No, rather:

A) "We are not living in a simulation" = P(living in a simulation) < ε.

B) "we cannot be living in a simulation" = P(living in a simulation) = 0.

I believe A but not B. Think of it analogously to weak vs. strong atheism. I'm a weak atheist with respect to both simulations and God.

0Cyan
Ah, got it. Thanks.
dfranke10

The claim that the simulated universe is real even though its physics are independent of our own seems to imply a very broad definition of "real" that comes close to Tegmark IV. I've posted a followup to my article to the discussion section: Eight questions for computationalists. Please reply to it so I can better understand your position.

dfranke10

This is just a pedantic technical correction since we agree on all the practical implications, but nothing involving FSMs grows nearly as fast as Busy Beaver. The relevant complexity class for the hardest problems concerning FSMs, such as determining whether two regular expressions (with squaring allowed) represent the same language, is the class of EXPSPACE-complete problems. This is as opposed to R for decidable problems, and RE and co-RE for semidecidable problems like the halting problem. Those classes are way, WAY bigger than EXPSPACE.
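To make the regex-equivalence problem concrete, here is a toy sketch (all names hypothetical). It only checks that two expressions agree on every string up to a small length, which is a necessary condition for language equality, not a decision procedure; an actual decision procedure determinizes both expressions and compares the resulting automata, which can require exponential space in the worst case.

```python
import re
from itertools import product

def agree_up_to(r1, r2, alphabet, max_len):
    """Necessary-condition check: do two regexes accept exactly the
    same strings of length <= max_len over `alphabet`?  Agreement here
    does NOT prove the languages are equal in general; a full decision
    procedure converts both expressions to DFAs and compares them."""
    p1, p2 = re.compile(r1), re.compile(r2)
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = ''.join(chars)
            if bool(p1.fullmatch(s)) != bool(p2.fullmatch(s)):
                return False
    return True

# Two ways of writing the same language {(ab)^n : n >= 0}:
same = agree_up_to(r'(ab)*', r'(a(ba)*b)?', 'ab', 6)   # True
# These differ on the empty string:
diff = agree_up_to(r'(ab)*', r'(ab)+', 'ab', 6)        # False
```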

dfranke20

Brains, like PCs, aren't actually Turing-equivalent: they only have finite storage. To actually be equivalent to a Turing machine, they'd need something equivalent to a Turing machine's infinite tape. There's nothing analogous to Rice's theorem or the halting theorem which holds for finite state machines. All those problems are decidable. Of course, decidable doesn't mean tractable.
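The reason halting-style questions are decidable for finite state machines can be shown in a few lines: iterate a deterministic transition function, and because there are only finitely many states, some state must repeat, at which point the machine's entire future behavior is determined. This is a minimal sketch; the machine and its transition function are invented for illustration.

```python
def run_until_repeat(step, state):
    """Iterate a deterministic transition function `step` from `state`.
    With finitely many states, some state must eventually recur, so
    this loop always terminates: the finite-state analogue of why
    there is no halting problem for FSMs (decidable, though, as noted
    above, not necessarily tractable)."""
    seen = {state: 0}
    t = 0
    while True:
        state = step(state)
        t += 1
        if state in seen:
            return t, state   # step count at the first recurrence
        seen[state] = t

# Toy 3-state machine: 0 -> 1 -> 2 -> 1 -> 2 -> ...
steps, repeated = run_until_repeat(lambda s: (s % 2) + 1, 0)
# steps == 3 (the step that first revisits a state), repeated == 1
```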

1gwern
It is true that you can run finite state machines until they either terminate or start looping or run past the Busy Beaver for that length of tape; but while you may avoid Rice's theorem by pointing out that 'actually brains are just FSMs', you replace it with another question, 'are they FSMs decidable within the length of tape available to us?' Given how fast the Busy Beaver grows, the answer is almost surely no - there is no runnable algorithm. Leading to the dilemma that either there are insufficient resources (per above), or it's impossible in principle (if there are unbounded resources there likely are unbounded brains and Rice's theorem applies again). (I know you understand this because you pointed out 'Of course, decidable doesn't mean tractable.' but it's not obvious to a lot of people and is worth noting.)
dfranke60

I just hit reload at sufficiently fortuitous times that I was able to see all my comments drop by exactly one point within a minute or so of each other, then later see the same thing happen to exactly those comments that it didn't happen to before.

0[anonymous]
I downvoted most of your comments in this thread too, for what it is worth. With very few exceptions I downvote all comments and posts advocating 'qualia'. Because qualia are stupid, have been discussed here excessively and those advocating them tend to be completely immune to reason. Most of the comments downvoted by this heuristic happen to be incidentally worth downvoting based on individual (lack of) merit.
dfranke-10

The only role that this example-of-an-idea is playing in my argument is as an analogy to illustrate what I mean when I assert that qualia physically exist in the brain without there being such thing as a "qualia cell". You clearly already understand this concept, so is my particular choice of analogy so terribly important that it's necessary to nitpick over this?

-3FAWS
The very same uncertainty would also apply to qualia (assuming that even is a meaningful concept), only worse because we understand them even less. If we can't answer the question of whether a particular concept is embedded in discrete anatomy, how could we possibly answer that question for qualia when we can't even verify their existence in the first place?
dfranke-20

Try reading it as "the probability that we are living in a simulation is negligibly higher than zero".

1Cyan
I tried it. It didn't help. No joke -- I'm completely confused: the referent of "it" is not clear to me. Could be the apparent contradiction, could be the title... Here's what I'm not confused about: (i) your post only argues against Bostrom's simulation argument; (ii) it seems you also want to defend yourself against the charge that your title was poorly chosen (in that it makes a broader claim that has misled your readership); (iii) your defense was too terse to make it into my brain.
-1[anonymous]
That I agree with, though not for reasons brought up here.
dfranke-10

Do you mean, "know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?". No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.

1gwern
Depending on various details, this might well be impossible. Rice's theorem comes to mind - if it's impossible to perfectly determine any interesting property for arbitrary Turing machines, that doesn't bode well for similar questions for Turing-equivalent substrates.
-3FAWS
Yes. Potential, easily accessible concept space, not necessarily actually used concept space. Even granting that the brain uses some concepts without corresponding discrete anatomy, I don't see how they can serve as a replacement in your argument when we can't identify them.
dfranke00

My own experience with analytic philosophy is that it is not particularly effective in shutting down pointless speculation.

Oh, certainly not. Not in the least. Think of it this way. Pre-analytic philosophy is like a monkey throwing darts at a dartboard. Analytic philosophy is like a human throwing them. There's no guarantee that he'll hit the board, much less the bullseye, but at least he understands where he's supposed to aim.

dfranke-30

we cannot be in a simulation

We are not living in a simulation

These things are not identical.

0Cyan
So you would assert that we can be in a simulation, but not living in it...?
dfranke-10

I'm pretty sure you don't think that qualia are reified in the brain-- that a surgeon could go in with tongs and pull out a little lump of qualia

I do think that qualia are reified in the brain. I do not think that a surgeon could go in with tongs and remove them any more than he could go in with tongs and remove your recognition of your grandmother.

If qualia and other mental phenomena are not computational, then what are they?

They're a physical effect caused by the operation of a brain, just as gravity is a physical effect of mass and temperature is a... (read more)

5wnoise
See for instance this report * http://www.scientificamerican.com/article.cfm?id=one-face-one-neuron on this paper * http://www.nature.com/nature/journal/v435/n7045/full/nature03687.html Where they find apparent "Jennifer Aniston" and "Halle Berry" cells. The former is a little bit muddled as it doesn't fire when a picture contains both her and Brad Pitt. The latter fires for both pictures of her, and the text of her name.
0FAWS
Do we know enough to tell for sure?
0Sideways
You haven't excluded a computational explanation of qualia by saying this. You haven't even argued against it! Computations are physical phenomena that have meaningful consequences. "Mental phenomena are a physical effect caused by the operation of a brain." "The image on my computer monitor is a physical effect caused by the operation of the computer." I'm starting to think you're confused as a result of using language in a way that allows you to claim computations "don't exist," while qualia do. As to your linked comment: ISTM that qualia are what an experience feels like from the inside. Maybe it's just me, but qualia don't seem especially difficult to explain or understand. I don't think qualia would even be regarded as worth talking about, except that confused dualists try to use them against materialism.
dfranke-10

I can't figure out whether you're trying to agree with me or disagree with me. Your comment sounds argumentative, yet you seem to be directly paraphrasing my critique of Searle.

dfranke-10

Even granting you all of your premises, everything we know about brains and qualia we know by observing it in this universe. If this universe is in fact a simulation, then what we know about brains and qualia is false. At the very most, your argument shows that we cannot create a simulation. It does not prove that we cannot be in a simulation, because we have no idea what the physics of the real world would be like.

Like pjeby, you're attacking a claim much stronger than the one I've asserted. I didn't claim we cannot be in a simulation. I claimed that... (read more)

7Cyan
Then it's from your title that people might get the impression you're making a stronger claim than you mean to be.

I didn't claim we cannot be in a simulation.

Then the title, "We are not living in a simulation" was rather poorly chosen.

Deductive logic allows me to reliably predict that a banjo will fall if I drop it, even if I have never before observed a falling banjo, because I start with the empirically-acquired prior that, in general, dropped objects fall.

Observation gives you, "on Earth, dropped objects fall." Deduction lets you apply that to a specific hypothetical. You don't have observation backing up the theory you advance in this ar... (read more)

dfranke-40

I don't think you read my post very carefully. I didn't claim that qualia are a phenomenon unique to human brains. I claimed that human-like qualia are a phenomenon unique to human brains. Computers might very well experience qualia; so might a lump of coal. But if you think a computer simulation of a human experiences the same qualia as a human, while a lump of coal experiences no qualia or different ones, you need to make that case to me.

5pjeby
Actually, I'd say you need to make a case for WTF "qualia" means in the first place. As far as I've ever seen, it seems to be one of those words that people use as a handwavy thing to prove the specialness of humans. When we know what "human qualia" reduce to, specifically, then we'll be able to simulate them. That's actually a pretty good operational definition of "reduce", actually. ;-) (Not to mention "know".)
dfranke80

The guy who downvoted that one downvoted all the rest of my comments in this thread at the same time. Actually, he downvoted most of them earlier, then picked that one up in a second sweep of those comments that I had posted since he did his first pass. So, your assumption that the downvote had anything to do with the content of that particular comment is probably misguided.

-13AstroCJ
2thomblake
Where do you get such specific information about those who vote on your comments?
1shokwave
However, wnoise's comment scored the grandparent an upvote from me, and possibly from others too!
dfranke10

On further reflection, I'm not certain that your position and mine are incompatible. I'm a personal identity skeptic in roughly the same sense that you're a qualia skeptic. Yet, if somebody points out that a door is open when it was previously closed, and reasons "someone must have opened it", I don't consider that reasoning invalid. I just think the need to modify the word "someone" if they want to be absolutely pedantically correct about what occurred. Similarly, your skepticism about qualia doesn't really contradict my claim that... (read more)

dfranke120

The interpretation that you deem uncharitable is the one I intended.

6Perplexed
OK, then. It seems we have another example of the great philosophical principle YMMV. My own experience with analytic philosophy is that it is not particularly effective in shutting down pointless speculation.

I would have guessed that the schoolmen would have been more enlightened and satisfied by an analogy than by anything they might find in Quine. "The talking head," I would explain, "is like an image seen in a reflecting pool. The image feels no pain, nor is it capable of independent action. The masters from which the image is made are a whole man and woman, not disembodied heads. And the magic which transfers their image to the box does no more harm to the originals than would ripples in a reflecting pool."
wnoise100

Community: clarifications like this are vital, and to be encouraged. Please don't downvote them.

dfranke00

I did a better job of phrasing my question in the edit I made to my original post than I did in my reply to Sideways that you responded to. Are you able to rephrase your response so that it answers the better version of the question? I can't figure out how to do so.

1AstroCJ
Ok, I'll give a longer response a go. You seem to me to be fundamentally confused about the separation between the (at a minimum) two levels of reality being proposed. We have a simulation, and we have a real world.

If you affect things in the simulation, such as replacing Venus with a planet twice the mass of Venus, then they are not the same; the gravitational field will be different and the simulation will follow a path different to the simulation with the original Venus. These two options are not "computationally the same".

If, on the other hand, in the real world you replace your old, badly programmed Venus Simulation Chip 2000 with the new, shiny Venus Simulation Chip XD500, which does precisely the same thing as the old chip but in fewer steps so we in the real world have to sit around waiting for fewer processor cycles to end, then the simulation will follow the same path as it would have done before. Observers in the sim won't know what Venus Chip we're running, and they won't know how many processor cycles it's taking to simulate it. These two different situations are "computationally the same".

If, in the simulation world, you replaced half of my brain with an apple, then I would be dead. If you replaced half of my brain with a computer that mimicked perfectly my old meat brain, I would be fine. If we're in the computation world then we should point out that again, the gravitational field of my brain computer will likely be different from the gravitational field of my meat brain, and so I would label these as "not computationally the same" for clarity. If we are interested in my particular experiences of the world, given that I can't detect gravitational fields very well, then I would label them as "computationally the same" if I am substrate independent, and "computationally different" if not.

I grew up in this universe, and my consciousness is embedded in a complex set of systems, my human brain, which is designed to make things make sense at any co
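AstroCJ's chip-swap distinction can be sketched concretely. The formula, constants, and the deliberately slow loop below are illustrative inventions (only the chip names come from the comment); the point is just that two implementations agreeing on every input are "computationally the same", while changing the simulated world itself is not.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def venus_chip_2000(mass_kg, distance_m):
    """Old chip: computes gravitational acceleration the slow, roundabout way."""
    acc = 0.0
    for _ in range(1000):  # many redundant processor cycles, same answer
        acc = G * mass_kg / (distance_m * distance_m)
    return acc

def venus_chip_xd500(mass_kg, distance_m):
    """New chip: the same answer in a single step."""
    return G * mass_kg / (distance_m * distance_m)

venus_mass = 4.867e24   # kg
test_distance = 1.0e8   # m, an arbitrary point in the simulation

# Swapping chips changes nothing observable inside the simulation:
assert venus_chip_2000(venus_mass, test_distance) == venus_chip_xd500(venus_mass, test_distance)

# Doubling Venus's mass, by contrast, changes the simulated world itself:
assert venus_chip_xd500(2 * venus_mass, test_distance) != venus_chip_xd500(venus_mass, test_distance)
```

Observers inside the simulation can detect the second change (a different gravitational field) but not the first (a faster chip), which is the sense in which only the first swap is "computationally the same".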
pjeby110

a computer program could make those judgements (sic) without actually experiencing any of those qualia

Just as an FYI, this is the place where your intuition is blindsiding you. Intuitively, you "know" that a computer isn't experiencing anything... and that's what your entire argument rests on.

However, this "knowing" is just an assumption, and it's assuming the very thing that is the question: does it make sense to speak of a computer experiencing something?

And there is no reason apart from that intuition/assumption, to treat this a... (read more)

2TheOtherDave
Sure, ^simulator^simulation preserves everything relevant from my pov. And thanks for the answer.

Given that, I really don't get how the fact that you can do all of the things you list here (classify stuff, talk about stuff, etc.) should count as evidence that you have non-epiphenomenal qualia, which seems to be what you are claiming there. After all, if you (presumed qualiaful) can perform those tasks, and a (presumed qualialess) simulator of you also can perform those tasks, then the (presumed) qualia can't play any necessary role in performing those tasks. It follows that those tasks can happen with or without qualia, and are therefore not evidence of qualia and not reliable qualia-comparing operations.

The situation would be different if you had listed activities, like attracting mass or orbiting around Jupiter, that my simulator does not do. For example, if you say that your qualia are not epiphenomenal because you can do things like actually taste chicken, which your simulator can't do, that's a different matter, and my concern would not apply. (Just to be clear: it's not obvious to me that your simulator can't taste chicken, but I don't think that discussion is profitable, for reasons I discuss here.)
dfranke00

Philosophical speculation regarding cognition in our present state of ignorance is just about as useful as would be disputation by medieval philosophers confronted with a 21st century TV newscast - wondering whether the disembodied talking heads appearing there experience pain.

I don't think this is quite fair. The concept that medieval philosophers were missing was analytic philosophy, not cathode rays. If the works of Quine and Popper and Wittgenstein fell through a time warp, it'd be plausible that medieval philosophers could have made legitimate headway on such a question.

5Perplexed
I sincerely don't understand what you are saying here. The most natural parsing is that a medieval philosopher could come to terms with the concept of a disembodied talking head, if only he read some Quine, Popper, and Wittgenstein first. Yet, somehow, that interpretation seems uncharitable. If you are instead suggesting that the schoolmen would be able to understand Quine, Popper, and Wittgenstein, should their works magically be transmitted back in time, then I tend to agree. But I don't think of this 'timeless quality' as a point recommending analytic philosophy.
dfranke10

Ok, I've really misunderstood you then. I didn't realize that you were taking a devil's advocate position in the other thread. I maintain the arguments I've made in both threads as a challenge to all those commenters who do claim that qualia are computations.

dfranke-10

I'm trying to understand your objection, but it seems like a quibble to me. You seem to be saying that the analogy between qualia and gensyms isn't perfect because gensyms are leaky abstractions. But I don't think it has to be to convey the essential idea. Analogies rarely are perfect.

You haven't responded to the broader part of my point. If you want to claim that qualia are computations, then you either need to specify a particular computer architecture, or you need to describe them in a way that's independent of any such choice. In the first case... (read more)
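Since the disagreement turns on Drescher's gensym analogy, a rough Python analogue may help readers who haven't met Lisp's gensyms. Here `object()` stands in for gensym as an opaque token supporting nothing but identity comparison — a sketch of the analogy under discussion, not an endorsement of it.

```python
# Each object() is a distinct, contentless token, much as each gensym
# is a fresh uninterned symbol: it can be compared for identity with
# other tokens, but it has no inspectable internal structure.
red = object()
green = object()

assert red is red          # a token is self-identical...
assert red is not green    # ...and distinguishable from other tokens,
                           # but nothing about *why* they differ is accessible

# On the analogy, that is how qualia present themselves to introspection:
# from inside the program there is no way to say what `red` "is like";
# tokens can only be compared for sameness.
```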

3[anonymous]
You seem to be mixing up two separate arguments. In one argument I am for the sake of argument assuming the unproblematic existence of qualia and arguing, under this assumption, that qualia are possible in a simulation and therefore that we could (in principle) be living in a simulation. In the other argument (the current one) I simply answered your question about what sort of qualia skeptic I am.

So, in this argument, the current one, I am continuing the discussion where, in answer to your question, I have admitted to being a qualia skeptic more or less along the lines of Drescher and Dennett. This discussion is about my skepticism about the idea of qualia. This discussion is not about whether I think qualia are computations. It is about my skepticism. Similarly, if I were admitting to skepticism about Santa Claus, it would not be an appropriate place to argue with me about whether Santa is a human or an elf.

Maybe you are basing your current focus on computations on Drescher's analogy with Lisp's gensyms. That's something for you to take up with Drescher. By now I've explained - at some length - what it is that resonated with me in Drescher's account and why. It doesn't depend on qualia being computations. It depends on there being a limit to perception.
dfranke10

But qualia are not any of those things! They are not epiphenomenal! They can be compared. I can classify them into categories like "pleasant", "unpleasant" and "indifferent". I can tell you that certain meat tastes like chicken, and you can understand what I mean by "taste", and understand the gist of "like chicken" even if the taste is not perfectly indistinguishable from that of chicken. I suppose that I would be unable to describe what it's like to have qualia to something that has no qualia whatsoever... (read more)

0TheOtherDave
I apologize if this is recapitulating earlier comments -- I haven't read this entire discussion -- and feel free to point me to a different thread if you've covered this elsewhere, but: on your view, could a simulation of me in a computer classify the things that it has (which, on your view, cannot be actual qualia) into categories like "pleasant" and "unpleasant" and "indifferent"? Could it tell me that certain (simulations of) meat tastes like chicken, and if it did, could I understand what it meant by "taste" and understand the gist of "like chicken"?

If not, then on your view, what would actually happen instead, if it tried? (Or, if trying is another thing that can't be a computation, then: if it simulated me trying?) If so, then on your view, how can any of those operations qualify as comparing qualia?
Load More