Dennett's heterophenomenology

5 RichardKennaway 16 January 2010 08:40PM

In an earlier comment, I conflated heterophenomenology in the general sense of taking introspective accounts as data to be explained rather than direct readouts of the truth, with Dennett's particular approach to explaining those data.  So to correct myself, I say that it is Dennett, rather than heterophenomenology, that claims that there is no such thing as consciousness. Dennett denies that he does, but I disagree. I defend this view here.

I have to admit at this point that I have not read "Consciousness Explained".  Had either of the library's copies been on the shelves last Tuesday I would have done by now, but instead I found his later book (and his most recent on the topic), "Sweet Dreams: Philosophical Obstacles to a Science of Consciousness".  The subtitle suggests a drawing back from the confidence of the earlier title, as does that of the book in between.  The book confirms me in my impression that the ideas of "C.E." have been in the air so long (the air of hard SF, sciblogs, and the like, not to mention Phil Goetz's recent posts) that reading the primary source 19 years on would be nothing more than an exercise in checkbox-ticking.

I'll give a brief run-through of "Sweet Dreams" and then carry on the argument.

Consciousness

2 Mitchell_Porter 08 January 2010 12:18PM

(ETA: I've created three threads - color, computation, meaning - for the discussion of three questions posed in this article. If you are answering one of those specific questions, please answer there.)

I don't know how to make this about rationality. It's an attack on something which is a standard view, not only here, but throughout scientific culture. Someone else can do the metalevel analysis and extract the rationality lessons.

The local worldview reduces everything to some combination of physics, mathematics, and computer science, with the exact combination depending on the person. I think it is manifestly the case that this does not work for consciousness. I took this line before, but people struggled to understand my own speculations and this complicated the discussion. So the focus is going to be much more on what other people think - like you, dear reader. If you think consciousness can be reduced to some combination of the above, here's your chance to make your case.

The main exhibits will be color and computation. Then we'll talk about reference; then time; and finally the "unity of consciousness".

How to think like a quantum monadologist

-14 Mitchell_Porter 15 October 2009 09:37AM

Half the responses to my last article focused on the subject of consciousness, understandably so. Back when LW was still part of OB, I stated my views in more detail (e.g. here, here, here, and here); and I also think it's just obvious, once you allow yourself to notice, that the physics we have does not even contain the everyday phenomenon of color, so something has to change. However, it also seems that people won't change their minds until a concrete alternative to physics-as-usual and de facto property dualism actually comes along. Therefore, I have set out to explain how to think like a quantum monadologist, which is what I will call myself.

Link: PRISMs, Gom Jabbars, and Consciousness (Peter Watts)

9 JulianMorrison 11 October 2009 09:51PM

http://www.rifters.com/crawl/?p=791

Morsella has gone back to basics. Forget art, symphonies, science. Forget the step-by-step learning of complex tasks. Those may be some of the things we use consciousness for now but that doesn’t mean that’s what it evolved for, any more than the cones in our eyes evolved to give kaleidoscope makers something to do. What’s the primitive, bare-bones, nuts-and-bolts thing that consciousness does once we’ve stripped away all the self-aggrandizing bombast?

Morsella’s answer is delightfully mundane: it mediates conflicting motor commands to the skeletal muscles.

Don't Think Too Hard.

9 hegemonicon 05 October 2009 03:51AM

I find it interesting that when we're asleep - supposedly unconscious - we're frequently fully conscious, mired in a nonsensical dreamworld of our own creation. There's currently no universally accepted theory of the purpose of dreams - proposed explanations range from cleaning up mental detritus to subconscious problem solving to cognitive accidents. On the other hand, we DO know plenty about what goes on in the brain during the dream state.

Studies show that in dreams, our thought processes are largely the same as the ones we use when we're awake. The main difference seems to be that we don't notice the insane world that we're a part of. We reason perfectly normally based on our surroundings; we're just incapable of reasoning about those surroundings - we lack metacognition when we're dreaming. The culprit behind this is a brain area known as the dorsolateral prefrontal cortex (DLPFC). It's responsible for, among other things, executive function (directing other brain functions), as well as working memory and motor planning. This, combined with the fact that it's the last brain area to develop (meaning it was the last brain area to evolve), suggests that it's key in creating conscious, directed thought. And during sleep, it's shut down, cutting off our ability to question the premises we're given. So, barring entering a lucid dream state, we lack the mental hardware to recognize we're in a hallucination when we dream - it seems perfectly normal.[1]

Would Your Real Preferences Please Stand Up?

42 Yvain 08 August 2009 10:57PM

Related to: Cynicism in Ev Psych and Econ

In Finding the Source, a commenter says:

I have begun wondering whether claiming to be victim of 'akrasia' might just be a way of admitting that your real preferences, as revealed in your actions, don't match the preferences you want to signal (believing what you want to signal, even if untrue, makes the signals more effective).

I think I've seen Robin put forth something like this argument [EDIT: Something related, but very different], and TGGP points out that Bryan Caplan explicitly believes pretty much the same thing[1]:

I've previously argued that much - perhaps most - talk about "self-control" problems reflects social desirability bias rather than genuine inner conflict.

Part of the reason why people who spend a lot of time and money on socially disapproved behaviors say they "want to change" is that that's what they're supposed to say.

Think of it this way: A guy loses his wife and kids because he's a drunk. Suppose he sincerely prefers alcohol to his wife and kids. He still probably won't admit it, because people judge a sinner even more harshly if he is unrepentant. The drunk who says "I was such a fool!" gets some pity; the drunk who says "I like Jack Daniels better than my wife and kids" gets horrified looks. And either way, he can keep drinking.

I'll call this the Cynic's Theory of Akrasia, as opposed to the Naive Theory. I used to think it was plausible. Now that I think about it a little more, I find it meaningless. Here's what changed my mind.

The Zombie Preacher of Somerset

41 Yvain 28 March 2009 10:29PM

Related to: Zombies? Zombies!, Zombie Responses, Zombies: The Movie, The Apologist and the Revolutionary

All disabling accidents are tragic, but some are especially bitter. The high school sports star paralyzed in a car crash. The beautiful actress horribly disfigured in a fire. The pious preacher who loses his soul during a highway robbery.

As far as I know, this last one only happened once, but once was enough. Simon Browne was an early eighteenth century pastor of a large Dissenting church. The community loved him for his deep faith and his remarkable intelligence, and his career seemed assured.

One fateful night in 1723, he was travelling from his birthplace in Somerset to his congregation in London when a highway robber accosted the coach carrying him and his friend. With quick reflexes and the element of surprise, Browne and his friend were able to disarm the startled highway robber and throw him to the ground. Browne tried to pin him down while the friend went for help, but in the heat of the moment he used excessive force and choked the man to death. This horrified the poor preacher, who was normally the sort never to hurt a fly.

Whether it was the shock, the guilt, or some unnoticed injury taken in the fight, something strange began to happen to Simon Browne. In his own words, he gradually became:

...perfectly empty of all thought, reflection, conscience, and consideration, entirely destitute of the knowledge of God and Christ, unable to look backward or forward, or inward or outward, having no conviction of sin or duty, no capacity of reviewing his conduct, and, in a word, without any principles of religion or even of reason, and without the common sentiments or affections of human nature, insensible even to the good things of life, incapable of tasting any present enjoyments, or expecting future ones...all body, without so much as the remembrance of the ruins of that mind I was once a tenant in...and the thinking being that was in me is, by a consumption continual, now wholly perished and come to nothing.

Simon Browne had become a p-zombie.

Nonsentient Optimizers

16 Eliezer_Yudkowsky 27 December 2008 02:32AM

Followup to: Nonperson Predicates, Possibility and Could-ness

    "All our ships are sentient.  You could certainly try telling a ship what to do... but I don't think you'd get very far."
    "Your ships think they're sentient!" Hamin chuckled.
    "A common delusion shared by some of our human citizens."
            —Player of Games, Iain M. Banks

Yesterday, I suggested that, when an AI is trying to build a model of an environment that includes human beings, we want to avoid the AI constructing detailed models that are themselves people.  And that, to this end, we would like to know what is or isn't a person—or at least have a predicate that returns 1 for all people and could return 0 or 1 for anything that isn't a person, so that, if the predicate returns 0, we know we have a definite nonperson on our hands.

And as long as you're going to solve that problem anyway, why not apply the same knowledge to create a Very Powerful Optimization Process which is also definitely not a person?
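The one-sided guarantee described above — never misclassify a person, but freely punt on non-people — has the logical shape of a sound-but-incomplete test. Here is a toy Python sketch of that shape only; the certificate condition (a stateless model with a tiny state space) is entirely hypothetical and stands in for whatever real criterion such a predicate would need:

```python
def nonperson_predicate(model: dict) -> int:
    """Toy sketch of the asymmetric guarantee: return 0 (definite
    nonperson) only when a conservative certificate holds; otherwise
    return 1 (might be a person). A real person must never get 0."""
    # Hypothetical certificate: a model with no internal memory and a
    # tiny state space certainly isn't a person. Everything else is
    # left uncertain, which is the safe direction to fail in.
    if not model.get("has_memory", True) and model.get("states", float("inf")) < 100:
        return 0  # provably not a person
    return 1      # unknown -- must be treated as possibly a person
```

The only point of the sketch is the error asymmetry: a return value of 0 is a hard guarantee, while 1 carries no information either way.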

"What?  That's impossible!"

How do you know?  Have you solved the sacred mysteries of consciousness and existence?

"Um—okay, look, putting aside the obvious objection that any sufficiently powerful intelligence will be able to model itself—"

Löb's Sentence contains an exact recipe for a copy of itself, including the recipe for the recipe; it has a perfect self-model.  Does that make it sentient?
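That kind of exact self-recipe is familiar from ordinary quines — programs whose output is their own source code. A minimal Python example (the two code lines reproduce themselves exactly when run, comments aside):

```python
# A quine: the string s is a recipe for the whole program,
# including the recipe itself -- a perfect self-model.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Perfect self-reference, with (presumably) nothing it is like to be it.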

"Putting that aside—to create a powerful AI and make it not sentient—I mean, why would you want to?"

Several reasons.  Picking the simplest to explain first—I'm not ready to be a father.

Nonperson Predicates

28 Eliezer_Yudkowsky 27 December 2008 01:47AM

Followup to: Righting a Wrong Question, Zombies! Zombies?, A Premature Word on AI, On Doing the Impossible

There is a subproblem of Friendly AI which is so scary that I usually don't talk about it, because very few would-be AI designers would react to it appropriately—that is, by saying, "Wow, that does sound like an interesting problem", instead of finding one of many subtle ways to scream and run away.

This is the problem that if you create an AI and tell it to model the world around it, it may form models of people that are people themselves.  Not necessarily the same person, but people nonetheless.

If you look up at the night sky, and see the tiny dots of light that move over days and weeks—planētoi, the Greeks called them, "wanderers"—and you try to predict the movements of those planet-dots as best you can...

Historically, humans went through a journey as long and as wandering as the planets themselves, to find an accurate model.  In the beginning, the models were things of cycles and epicycles, not much resembling the true Solar System.

But eventually we found laws of gravity, and finally built models—even if they were just on paper—accurate enough that Neptune could be deduced from the unexplained perturbation of Uranus's expected orbit.  This required moment-by-moment modeling of where a simplified version of Uranus would be, along with the other known planets.  Simulation, not just abstraction.  Prediction through simplified-yet-still-detailed pointwise similarity.

Suppose you have an AI that is around human beings.  And like any Bayesian trying to explain its environment, the AI goes in quest of highly accurate models that predict what it sees of humans.

Models that predict/explain why people do the things they do, say the things they say, want the things they want, think the things they think, and even why people talk about "the mystery of subjective experience".

The model that most precisely predicts these facts, may well be a 'simulation' detailed enough to be a person in its own right.
