Kawoomba comments on Rationality Quotes June 2013 - Less Wrong

Post author: Thomas 03 June 2013 03:08AM

You are viewing a single comment's thread.

Comment author: Kawoomba 06 June 2013 10:06:39AM 0 points [-]

Learn to recognize that the parts of your brain that handle text generation and output are no more "you" than the parts of your brain that handle motor reflex control.

I'd certainly call them much more significant to my identity than, e.g., my deltoid muscle, or some motor-function parts of my brain.

Comment author: ialdabaoth 07 June 2013 03:44:34AM 3 points [-]

I'd certainly call them much more significant to my identity than, e.g., my deltoid muscle, or some motor-function parts of my brain.

It may be useful to recognize that this is a choice, rather than an innate principle of identity. The parts that speak are just modules, just like the parts that handle motor control. They can (and often do) run autonomously, and then the module that handles generating a coherent narrative stitches together an explanation of why you "decided" to cause whatever they happened to generate.

Comment author: RichardKennaway 07 June 2013 09:27:34AM 1 point [-]

This sounds like a theory of identity as epiphenomenal homunculus. A module whose job is to sit there weaving a narrative, but which has no effect on anything outside itself (except to make the speech module utter its narrative from time to time). "Mr Volition", as Greg Egan calls it in one of his stories. Is that your view?

Comment author: ialdabaoth 07 June 2013 09:42:40AM 1 point [-]

More or less, yes. It does have some effect on things outside itself, of course, in that its 'narrative' tends to influence our emotional investment in situations, which in turn influences our reactions.

Comment author: RichardKennaway 07 June 2013 05:47:48PM 3 points [-]

It seems to me that the Mr. Volition theory suffers from the same logical flaw as p-zombies. How would a non-conscious entity, a p-zombie, come to talk about consciousness? And how does an epiphenomenon come to think it's in charge, how does it even arrive at the very idea of "being in charge", if it was never in charge of anything?

An illusion has to be an illusion of something real. Fake gold can exist only because there is such a thing as real gold. There is no such thing as fake mithril, because there is no such thing as real mithril.

Comment author: CCC 08 June 2013 02:35:02PM 2 points [-]

How would a non-conscious entity, a p-zombie, come to talk about consciousness?

A tape recorder is a non-conscious entity. I can get a tape recorder to talk about consciousness quite easily.

Or are you asking how it would decide to talk about consciousness? It's a bit ambiguous.

Comment author: FeepingCreature 07 June 2013 11:36:21PM *  2 points [-]

I think it's not an epiphenomenon, it's just wired in more circuitously than people believe. It has effects; it just doesn't have some effects that we tend to ascribe to it, like decision-making and high-level thought.

Comment author: ialdabaoth 07 June 2013 06:55:36PM 3 points [-]

By that analogy, then, fake gods can exist only because there is such a thing as real gods; fake ghosts can only exist because there is such a thing as real ghosts; fake magic can only exist because there is such a thing as real magic.

It's perfectly possible to be ontologically mistaken about the nature of one's world.

Comment author: RichardKennaway 08 June 2013 06:24:34AM *  0 points [-]

By that analogy, then, fake gods can exist only because there is such a thing as real gods; fake ghosts can only exist because there is such a thing as real ghosts; fake magic can only exist because there is such a thing as real magic.

It's perfectly possible to be ontologically mistaken about the nature of one's world.

Indeed. There is real agency, so people have imagined really big agents that created and rule the world. People's consciousness persists, even after the interruptions of sleep, and they imagine it persists even after death. People's actions appear to happen purely by their intention, and they imagine doing arbitrary things purely by intention. These are the real things that the fakes, pretences, or errors are based on.

But how do the p-zombie and the homunculus even get to the point of having their mistaken ontology?

Comment author: ialdabaoth 08 June 2013 06:40:52AM *  2 points [-]

The p-zombie doesn't, because the p-zombie is not a logically consistent concept. Imagine if there were a word that meant "four-sided triangle" - that's the level of absurdity that the 'p-zombie' idea represents.

On the other hand, the epiphenomenal consciousness (for which I'll accept the appellation 'homunculus' until a more consistent and accurate one occurs to me) is simply mistaken in that it is drawing too large a boundary in some respects, and too small a boundary in others. It's drawing a line around certain phenomena and ascribing a causal relationship between those and its own so-called 'agency', while excluding others. The algorithm that draws those lines doesn't have a particularly strong map-territory correlation; it just happens to be one of those evo-psych things that developed and self-reinforced because it worked in the ancestral environment.

Note that I never claimed that "agency" and "volition" are nonexistent on the whole; merely that the vast majority of what people internally consider "agency" and "volition", aren't.

EDIT: And I see that you've added some to the comment I'm replying to, here. In particular, this stood out:

People's consciousness persists, even after the interruptions of sleep, and they imagine it persists even after death.

I don't believe that "my" consciousness persists after sleep. I believe that a new consciousness generates itself upon waking, and pieces itself together using the memories it has access to as a consequence of being generated by "my" brain; but I don't think that the creature that will wake up tomorrow is "me" in the same way that I am. I continue to use words like "me" and "I" for two reasons:

  1. Social convenience - it's damn hard to get along with other hominids without at least pretending to share their cultural assumptions

  2. It is, admittedly, an incredibly persistent illusion. However, it is a logically incoherent illusion, and I have upon occasion pierced it and seen others pierce it, so I'm not entirely inclined to give it ontological reality with p=1.0 anymore.

Comment author: TheOtherDave 08 June 2013 03:15:38PM 0 points [-]

Do you believe that the creature you are now (as you read this parenthetical expression) is "you" in the same way as the creature you are now (as you read this parenthetical expression)?

If so, on what basis?

Comment author: ialdabaoth 08 June 2013 08:17:00PM 1 point [-]

Yes(ish), on the basis that the change between me(expr1) and me(expr2) is small enough that assigning them a single consistent identity is more convenient than acknowledging the differences.

But if I'm operating in a more rigorous context, then no; under most circumstances that appear to require epistemological rigor, it seems better to taboo concepts like "I" and "is" altogether.

Comment author: khafra 07 June 2013 07:32:08PM 1 point [-]

There is no such thing as fake mithril, because there is no such thing as real mithril.

Mesh mail "mithril" vest, $335.

Setting aside the question of whether this is fake Iron Man armor, or a real costume of the fake Iron Man, or a fake costume designed after the fake Iron Man portrayed by special-effects artists in the movies, I think an illusion can be anything that triggers a category recognition by matching some of the features strongly enough to trigger the recognition, while failing to match on a significant number of the other features that are harder to detect at first.

Comment author: RichardKennaway 08 June 2013 06:22:33AM 2 points [-]

Mesh mail "mithril" vest, $335.

That's not fake mithril, it's pretend mithril.

I think an illusion can be anything that triggers a category recognition by matching some of the features strongly enough to trigger the recognition

To have the recognition, there must already have been a category to recognise.

Comment author: TheOtherDave 07 June 2013 10:50:53PM *  1 point [-]

An illusion has to be an illusion of something real. Fake gold can exist only because there is such a thing as real gold. There is no such thing as fake mithril, because there is no such thing as real mithril.

Suppose I am standing next to a wall so high that I am left with the subjective impression that it just goes on forever and ever, with no upper bound. Or next to a chasm so deep that I am left with the subjective impression that it's bottomless.

Would you say these subjective impressions are impossible?
If possible, would you say they aren't illusory?

My own answer would be that such subjective impressions are both illusory and possible, but that this is not evidence of the existence of such things as real bottomless pits and infinitely tall walls. Rather, they are indications that my imagination is capable of creating synthetic/composite data structures.

Comment author: Qiaochu_Yuan 07 June 2013 11:38:59PM *  0 points [-]

How would a non-conscious entity, a p-zombie, come to talk about consciousness?

I scrawl on a rock "I am conscious." Is the rock talking about consciousness?

Comment author: RichardKennaway 08 June 2013 06:18:41AM 1 point [-]

No, you are.

Comment author: Qiaochu_Yuan 08 June 2013 06:40:42AM *  0 points [-]

I run a program that randomly outputs strings. One day it outputs the string "I am conscious." Is the program talking about consciousness? Am I?
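A minimal sketch of the thought experiment (the alphabet and target sentence here are illustrative assumptions, not anything specified in the thread): the emitting process contains no semantics at all, yet it assigns the sentence a small but nonzero probability.

```python
import random
import string

# Illustrative alphabet for a process that emits uniformly random strings.
ALPHABET = string.ascii_letters + " "

def random_string(length, rng):
    """Return a uniformly random string over ALPHABET; no semantics inside."""
    return "".join(rng.choice(ALPHABET) for _ in range(length))

TARGET = "I am conscious"

# The process assigns the target sentence this nonzero probability per draw,
# without "meaning" anything by it.
p_target = (1 / len(ALPHABET)) ** len(TARGET)

rng = random.Random(0)
sample = random_string(len(TARGET), rng)
```

Any interpretation of an output as being "about consciousness" is supplied by the reader of the string, not by the generating process.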

Comment author: RichardKennaway 08 June 2013 06:43:16AM 1 point [-]
Comment author: Qiaochu_Yuan 08 June 2013 08:43:41AM *  -1 points [-]

Maybe I'm being unnecessarily cryptic. My point is that when you say that something is "talking about consciousness," you're assigning meaning to what is ultimately a particular sequence of vibrations of the air (or a particular pattern of pigment on a rock, or a particular sequence of ASCII characters on a screen). I don't need a soul to "talk about souls," and I don't need to be conscious to "talk about consciousness": it just needs to happen to be the case that my mouth emits a particular sequence of vibrations in the air that you're inclined to interpret in a particular way (but that interpretation is in your map, not the territory).

In other words, I'm trying to dissolve the question you're asking. Am I making sense?

Comment author: nshepperd 08 June 2013 01:02:34AM *  0 points [-]

In the same way that a philosophy paper does... yes. Of course, the rock is just a medium for your attempt at communication.

Comment author: Randaly 08 June 2013 01:30:08AM 0 points [-]

I write a computer program that outputs every possible sequence of 16 characters to a different monitor. Is the monitor which outputs 'I am conscious' talking about consciousness in the same way the rock is? Whose attempt at communication is it a medium for?
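Actually running this enumeration is infeasible (`len(ALPHABET)**16` monitors), but a sketch can compute directly *which* monitor would display a given string: in lexicographic order over a fixed alphabet, a 16-character string is just a base-N numeral. The alphabet and the two-space padding below are assumptions made for illustration.

```python
# Alphabet chosen for illustration; space is deliberately the first symbol.
ALPHABET = " abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def monitor_index(s):
    """Lexicographic rank of s among all len(s)-character strings over ALPHABET."""
    n = len(ALPHABET)
    idx = 0
    for ch in s:
        idx = idx * n + ALPHABET.index(ch)
    return idx

target = "I am conscious  "  # padded to 16 characters
assert len(target) == 16
which = monitor_index(target)  # the monitor showing the sentence, fixed in advance
```

The arithmetic fixes only *where* the sentence appears in the enumeration, not what it means; whether that monitor is "talking about consciousness" is exactly the question under debate.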

Comment author: nshepperd 08 June 2013 05:21:02AM 1 point [-]

Your decision to point out the particular monitor displaying this message as an example of something imparts information about your mental state in exactly the same way that your decision to pick a particular sequence of 16 characters out of platonia to engrave on a rock does.

See also: on GLUTs.

Comment author: ialdabaoth 08 June 2013 01:35:04AM 0 points [-]

Whose attempt at communication is it a medium for?

The reader's. Pareidolia is a signal-processing system's attempt to find a signal.

On a long enough timeline, all random noise generators become hidden word puzzles.

Comment author: Juno_Watt 08 June 2013 01:34:47PM -1 points [-]

How would a non-conscious entity, a p-zombie, come to talk about consciousness?

By functional equivalence. A zombie Chalmers is bound to utter sentences asserting its possession of qualia; a zombie Dennett will utter sentences denying the same.

The only get-out is to claim that it is not really talking at all.

Comment author: RichardKennaway 10 June 2013 09:15:16AM 1 point [-]

The epiphenomenal homunculus theory claims that there's nothing but p-zombies, so there are no conscious beings for them to be functionally equivalent to. After all, as the alien that has just materialised on my monitor has pointed out to me, no humans have zardlequeep (approximate transcription), and they don't go around insisting that they do. They don't even have the concept to talk about.

Comment author: Juno_Watt 16 June 2013 01:19:44PM 0 points [-]

The theory that there is nothing but zombies runs into the difficulty of explaining why many of them would believe they are non-zombies. The standard p-zombie argument, that you can have qualia-less functional duplicates of non-zombies does not have that problem.

Comment author: Locaha 16 June 2013 01:51:37PM 4 points [-]

The theory that there is nothing but zombies runs into the much bigger difficulty of explaining to myself why I'm a zombie. When I poke myself with a needle, I sure as hell have the qualia of pain.

And don't tell me it's an illusion - any illusion is a quale by itself.

Comment author: Juno_Watt 16 June 2013 05:14:08PM 1 point [-]

Don't tell me, tell Dennett.

Comment author: RichardKennaway 16 June 2013 02:07:24PM *  2 points [-]

The standard p-zombie argument still has a problem explaining why p-zombies claim to be conscious. It leaves no role for consciousness in explaining why conscious humans talk of being conscious. It's a short road (for a philosopher) to then argue that consciousness plays no role, and we're back with consciousness as either an epiphenomenon or non-existent, and the problem of why -- especially when consciousness is conceded to exist, but cause nothing -- the non-conscious system claims to be conscious.

Comment author: nshepperd 17 June 2013 01:21:56AM *  1 point [-]

Even worse is the question of how the word "conscious" can possibly even refer to this thing that is claimed to be epiphenomenal, since the word can't have been invented in response to the existence or observations of consciousness (there aren't any observations). And in fact there is nothing to allow a human to distinguish between this thing and every other thing that has never been observed, so in a way the claim that a person is "conscious" is perfectly empty.

ETA: Well, of course one can argue that it is defined intensionally, like "a unicorn is a horse with a single horn extending from its head, and [various magical properties]" which does define a meaningful predicate even if a unicorn has never been seen. But in that case any human's claim to have a consciousness is perfectly evidence-free, since there are no observations of it with which to verify that it (to the extent that you can even refer to a particular unobservable thing) has the relevant properties.

Comment author: Juno_Watt 16 June 2013 05:13:02PM 0 points [-]

The standard p-zombie argument still has a problem explaining why p-zombies claim to be conscious. It leaves no role for consciousness in explaining why conscious humans talk of being conscious.

Yes. That's the standard epiphenomenalism objection.

It's a short road (for a philosopher) to then argue that consciousness plays no role,

Often a bit too short.

Comment author: Estarlio 08 June 2013 03:14:49PM -1 points [-]

Why would we have these modules, which seem quite complex and likely to negatively affect fitness (thinking is expensive), if they don't do anything? What are the odds of this becoming prevalent without a favourable selection pressure?

Comment author: ialdabaoth 08 June 2013 08:19:10PM 1 point [-]

High, if they happen to be foundational.

Sometimes you get spandrels, and sometimes you get systems built on foundations that are no longer what we would call "adaptive", but that can't be removed without crashing systems that are adaptive.

Comment author: TheOtherDave 08 June 2013 07:04:03PM 1 point [-]

Evo-psych just-so stories are cheap.

Here's one: it turns out that ascribing consistent identity to nominal entities is a side-effect of one of the most easily constructed implementations of "predict the behavior of my environment." Predicting the behavior of my environment is enormously useful, so the first mutant to construct this implementation had a huge advantage. Pretty soon everyone was doing it, and competing for who could do it best, and we had foreclosed the evolutionary paths that allowed environmental prediction without identity-ascribing. So the selection pressure for environmental prediction also produced (as an incidental side-effect) selection pressure for identity-ascribing, despite the identity-ascribing itself being basically useless, and here we are.

I have no idea if that story is true or not; I'm not sure what I'd expect to see differentially were it true or false. My point is more that I'm skeptical of "why would our brains do this if it weren't a useful thing to do?" as a reason for believing that everything my brain does is useful.