So I just wound up in a debate with someone over on Reddit about the value of conventional academic philosophy.  He linked me to a book review in which both the review and the book are absolutely godawful.  That is, the author (and the reviewer following him) starts with ontological monism (the universe only contains a single kind of Stuff: mass-energy), adds in the experience of consciousness, reasons deftly that emergence is a load of crap... and then arrives at the conclusion of panpsychism.

WAIT HOLD ON, DON'T FLAME YET!

Of course panpsychism is bunk.  I would be embarrassed to be caught upholding it, given the evidence I currently have, but what I want to talk about is the logic being followed.

1) The universe is a unified, consistent whole.  Good!

2) The universe contains the experience/existence of consciousness.  Easily observable.

3) If consciousness exists, something in the universe must cause or give rise to consciousness.  Good reasoning!

4) "Emergence" is a non-explanation, so that can't be it.  Good!

5) Therefore, whatever stuff the unified universe is made of must be giving rise to consciousness in a nonemergent way.

6) Therefore, the stuff must be innately "mindy".

What went wrong in steps (5) and (6)?  The man was actually reasoning more-or-less correctly!  Given the universe he lived in, and the impossibility of emergence, he reallocated his probability mass to the remaining answer.  When he had eliminated the impossible, whatever remained, however low its prior, must be true.

The problem was, he eliminated the impossible, but left open a vast space of possible hypotheses that he didn't know about (but which we do): the most common of these is the computational theory of mind and consciousness, which says that we are made of cognitive algorithms.  A Solomonoff Inducer can just go on to the next length of bit-strings describing Turing machines, but we can't.

Now, I can spot the flaw in the reasoning here.  What frightens me is: what if I'm presented with some similar argument, and I can't spot the flaw?  What if, instead, I just neatly and stupidly reallocate my belief to what seems to me to be the only available alternative, while failing to go out and look for alternatives I don't already know about?  Notably, expected evidence is conserved, but expecting to locate new hypotheses means I should be reducing my confidence in all currently available hypotheses now, reserving some probability mass to divide among the new possibilities.
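
To make that bookkeeping concrete, here is a minimal sketch; the hypothesis names and numbers are entirely hypothetical, my own illustration rather than anything from the review. The idea is to keep an explicit catchall for hypotheses you haven't located yet, and pay newly discovered hypotheses out of that catchall.

```python
# Minimal sketch; hypothesis names and numbers are hypothetical.
beliefs = {
    "emergence": 0.30,
    "panpsychism": 0.10,
    "not_yet_thought_of": 0.60,  # reserve for hypotheses I haven't located yet
}

def admit_hypothesis(beliefs, name, share_of_reserve):
    """Carve a newly located hypothesis's prior out of the catchall,
    leaving the already-named hypotheses' probabilities untouched."""
    reserve = beliefs["not_yet_thought_of"]
    beliefs[name] = reserve * share_of_reserve
    beliefs["not_yet_thought_of"] = reserve * (1.0 - share_of_reserve)
    return beliefs

# Locating, say, the computational theory of mind doesn't force a surprise
# reshuffle of the named hypotheses; total probability still sums to 1.
admit_hypothesis(beliefs, "computational_theory_of_mind", 0.8)
assert abs(sum(beliefs.values()) - 1.0) < 1e-9
```

The point of the reserve is exactly the conservation argument above: if I expect to find new hypotheses later, some of my probability mass should already be sitting in the catchall now.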

If you can notice when you're confused, how do you notice when you're ignorant?


I think the error is actually (4). "Emergence" is a non-explanation because it's way too vague; it encompasses many different possible explanations and doesn't narrow things down enough. Because it's a non-explanation in this particular way, you cannot take its inverse. Imagine Sherlock Holmes saying: "'Someone killed him' is a non-explanation, so that can't be it."

Strawson says what he means by emergence, in order to reject it, so this is a side issue.


Looking at it "retrospectively" (i.e., knowing the better answer), I disagree. "Emergence", even if it meant anything at all, does not beget computation, and even if it did, it wouldn't beget optimization, and even if it did... well, then it might beget consciousness, but not without first going through computation and optimization.

So eliminating "emergence" is actually quite correct.

But there's still the problem of noticing when you're ignorant. Given the computational theory of mind, knowing that information is very likely built into the universe just like mass-energy (and that computation is thus built-in as well), it's easy to look at someone getting the wrong answer and laugh at how wrong he is.

What's harder is looking at a problem and noticing how you kinda-sorta have the tools to tackle it, but you don't really.

I think you may be talking past one another. In Jim's view as I understand it (and also, for what it's worth, in mine) the computational physicalist theory of mind is a special case of "emergence", and what's wrong with "emergence" as an answer to the question "where does consciousness come from?" is not that it's false but that it's uninformative.

All "emergence" means on this reading is something like "consciousness is something that happens when the right sorts of physical processes do", which I think is correct (at least as regards our consciousness; perhaps there could be non-physical processes that also produce consciousness; or perhaps not) and which I think is what Fodor says Strawson strenuously denies.

not that it's false but that it's uninformative

It's like eating a tasty cake, and asking: "How did you make this cake?", and receiving an answer: "It's made of atoms."

The answer is completely useless for cooking, and doesn't explain anything about how the person actually made the cake. If someone is using this as an explanation in a cake-making course, they should be fired, because they don't provide useful knowledge.

Technically, the answer remains true.

Also, consciousness has emerged from the interaction of atoms. But without more details, this answer is useless for any practical purpose. Yeah, there are atoms everywhere, and they interact all the time. Sometimes, consciousness emerges. Most of the time, it doesn't. The important question is what makes the difference.

Luckily, with consciousness, we have a broad idea of how it emerges: the processing of information (a physically measurable property) by certain structures with many feedback loops (particularly brains) causes the emergence of consciousness. In particular, saying that it's emergent tells us that consciousness relates to the structure AND dynamics of the brain, and not to some intangible glob of soul attached to it.

So saying "emergence" plus the one sentence above actually gets us really far out of the land of mysticism.

This is a statement of materialism, not evidence or an argument that it is true.

The evidence for consciousness being a material phenomenon -- the only evidence, as far as I can see -- is the remarkable correspondence observed between physical brain phenomena and mental phenomena. But we have no knowledge of how matter produces consciousness. The hard example against which materialist woo (not all woo is mystical woo) generally fails is, "does this purported explanation predict the absence of consciousness in the cerebellum and motor control?"

ETA: Compare this with the materialist claim that the heart is a pump. Leonardo da Vinci could see that it's a pump just by dissecting it. We can see the mechanism, model it, show how electrical signals trigger the beats, understand dysfunctions like fibrillation, implant artificial pacemakers that work pretty well, and use external pumps to take over the function during heart surgery. We know how the heart works to produce the "emergent" property of pumping blood, and we can make pumps to perform the same function. All of this is so far missing from our knowledge of consciousness.

The evidence for the materialist view seems very strong to me; in particular, pretty much all of neuroscience bears it out; as you note there is:

the remarkable correspondence observed between physical brain phenomena and mental phenomena

And I disagree that it fails at

"does this purported explanation predict the absence of consciousness in the cerebellum and motor control?"

Comparative neuroscience between species or patients with certain types of brain damage really does give us a concrete idea of how "more complex" and "higher-order" cognition, at the very least part of the puzzle of consciousness, correlates with the presence of certain types of anatomical structures.

Is the brain much more complex than a pump? Sure. Does that mean that any other hypothesis comes anywhere near the purely materialist one? No. And even weird quantum effects, though there's no strong evidence for them, still fall under the umbrella of materialism.

Just to be clear, I'm not arguing against materialism, just pointing out that we have no idea how it works.

"does this purported explanation predict the absence of consciousness in the cerebellum and motor control?"

Comparative neuroscience between species or patients with certain types of brain damage really does give us a concrete idea of how "more complex" and "higher-order" cognition, at the very least part of the puzzle of consciousness, correlates with the presence of certain types of anatomical structures.

A catalogue of brain regions that do correspond to conscious experience and those that do not does not amount to an explanation of how those that do, do, and those that don't, don't.

A catalogue of brain regions that do correspond to conscious experience and those that do not does not amount to an explanation of how those that do, do, and those that don't, don't.

Not just a catalogue; an understanding of their anatomical differences at the macroscopic and microscopic level, detailed studies of their electrical activities, and soon enough a neuron-level connectome to complement ever-more-fine-grained monitoring of electrical activity. This would provide the means to match more and more experiences to specific neuronal activity (or large, complex, but still quantifiable, patterns of neuronal activity), including activities like deep introspection, meditation and creative work.

In the more distant future, a brain simulation that behaves like a person would be very strong evidence for the materialist view. If only the Chinese Room or Philosophical Zombie objections remain, then I'd consider the question of consciousness solved, or at least dissolved.

The evidence for the materialist view seems very strong to me; in particular, pretty much all of neuroscience bears it out; as you note there is:

the remarkable correspondence observed between physical brain phenomena and mental phenomena

You can make the same argument about radios or other devices that are relays for information. Without understanding how a radio works, it's really hard to know that the content that the radio plays isn't an emergent phenomenon.

You can make the same argument about radios or other devices that are relays for information. Without understanding how a radio works, it's really hard to know that the content that the radio plays isn't an emergent phenomenon.

When a radio is damaged, all that is affected is the clarity or the presence of the material that is being transmitted. There is no damage to a radio that would make spoken word material sound just the same, except that all nouns naming animals were garbled. The material coming over the radio has aspects to it that malfunctions of the radio may obscure but never manipulate.

In contrast, the correspondences found between brain damage and phenomena of consciousness suggest a very broad connection of the brain to the hypothetical soul, a connection so broad that there seems little work left for the soul to do. "Brain as the antenna of the soul" is at present looking very like "God in the gaps".

I wouldn't see the names of animals as phenomena of consciousness. I would rather label them mental phenomena.

Plenty of people meditate in an effort to raise their level of consciousness and transcend the mind that goes around and labels and judges.

I wouldn't see the names of animals as phenomena of consciousness. I would rather label them mental phenomena.

I don't know what distinction you're drawing there. I cannot find different meanings to attach to the phrases "mental phenomena" and "phenomena of consciousness".

Plenty of people meditate in an effort to raise their level of consciousness and transcend the mind that goes around and labels and judges.

I don't know what "raise their level of consciousness and transcend the mind" means either. Labelling and judging are ordinary functions of the mind. I can grok that the name is not the thing without having to regard "naming" as some sort of newage sin.

I don't know what "raise their level of consciousness and transcend the mind" means either.

That's the point. If you are not familiar with the meaning of the terms that the other side of the debate uses, it's hard to understand arguments.

Labelling and judging are ordinary functions of the mind.

The mind is generally considered to be something distinct from consciousness by those people who meditate a lot and have developed a certain kind of self-awareness in the process.

I don't think that's a gap of understanding that can be fixed easily, because it's about gathering reference experiences.

The mind is generally considered to be something distinct from consciousness by those people who meditate a lot and have developed a certain kind of self-awareness in the process.

I don't think that's a gap of understanding that can be fixed easily, because it's about gathering reference experiences.

Well, I have tried. None of the descriptions that I have read of the results of meditation match up to anything I have experienced. The various things I've read do not seem to agree with each other either. Do these people who meditate a lot even know what the others are talking about? Or am I looking at the equivalent of cryptozoologists describing the characteristics of the Loch Ness Monster?

Another point: If you ask a bunch of people on this forum to describe what they mean by rationality, utility and uncertainty, the descriptions that you will get are not identical. That doesn't mean that those words have no meaning.

Talk about meditative experiences faces a difficulty not faced by those topics. We can all agree on what Bayes theorem and the VNM theorem are and that they are theorems, that the conjunction fallacy is a fallacy, that entanglement with reality is a necessary condition of acquiring knowledge about reality, and so on. There are open issues, such as whether utilitarianism, and if so what sort, is either descriptively or normatively sensible for humans or AIs, but it is easy to discuss such things and agree on what we are talking about, even if we do not agree about what is true about them. Even if we are drawing lines on our maps differently, we can discover that fact, and align them for the purposes of any particular discussion. LessWrong could not exist in the form it does if this were not so. Instead, it would be nothing more than Eliezer's personal gurublog, and the bragging threads could not exist.

None of this is true of meditation.

Meditation explores the inside of one's own mind. This is also something that objectively exists, but each person's is private to them, and they cannot exhibit to anyone else what they find there, only talk about it in terms that may not map well to anyone else's experience. There are no theorems, and few empirical observations to agree on, which makes it rich terrain for cultivating woo. As an anti-woo touchstone, "How does this putative guru lead his everyday life?" is a start, but doesn't go beyond eliminating some of the junk. Is there any meditation forum that has a regular bragging thread, for people to announce the awesome things they did recently as a result of their practice? The mind boggles (but does so in a place where no-one else can see it).

One example of apparently differing experiences. A frequent observation made in what I have read is that the self is an illusion and with practice one can penetrate this illusion. That is the direct opposite of what I experience when I meditate. So, which of us is doing it wrong and becoming more mired in illusion, and which is doing it right and perceiving more accurately? This is not something I am willing to take an "outside view" on, i.e. to reject both my own experience, and the very idea of discovering the truth of the matter, in favour of going along with what other people say about theirs.

While I find the subject interesting, I have never yet found anything in other people's material to repay that interest, even from the intersection of the meditative and rationalist communities.

Yes, you are right that talking about meditation is hard and might be harder than what we are doing here.

On the other hand imagine someone without any math background at all reading our discussions about Bayes theorem and the VNM theorem. Do you think that person would get the impression that we all basically agree?

On the other hand imagine someone without any math background at all reading our discussions about Bayes theorem and the VNM theorem. Do you think that person would get the impression that we all basically agree?

There's plenty we do all agree on, such as what the VNM theorem says. And there are things we don't, such as whether VNM implies we all really have utility functions. If someone is reading LessWrong without the background to meaningfully participate, that's their problem, not ours. But they can solve that problem simply by reading up on the background, just as you or I can if some empirical subject comes up here that we aren't familiar with.

But how would one get "up to speed" on the subject of meditation? I have read, I have practiced, I have meditated with others. But still, my experience does not join up with anyone else's that I know of. I might as well be exploring a different continent. How many different continents are there in this space? Does anyone even know?

Imagine that human colour vision was highly polymorphic, with different people having different sets of colour receptors, sensitive to different wavelengths anywhere in the range from infrared to ultraviolet, and no one version being preponderant. Communicating what it is like to experience different colours would be difficult, but even there it would be easy to objectively demonstrate differences. Some people would, and some would not, be able to distinguish various pairs of objects. In the real world, how would one go about testing a hypothesis of mental polymorphism?

I personally started with meditation 10 years ago by reading a book by the Aikido master Tohei. It was good enough that I continued the practice from time to time.

Two and a half years ago I started attending a group for somatic-psychoeducation regularly. It's a framework developed by a Frenchman called Danis Bois. The interesting thing about Danis is that, despite being accomplished in teaching meditation and bodywork, he thought that a lot of the esoteric crowd was too dogmatic and closed-minded, so he went to university to study academic pedagogy. He's now a professor at a Portuguese university.

I learned a lot in those 2 1/2 years. When I read the book that is supposed to be an introduction to the method half a year into it, I couldn't do much with it. Now the book makes more sense. I do know from experience that the process isn't easy. This year I think I got a grasp of what Buddhists might mean when they say Karma, and how Karma fits into a framework where everything is supposed to be accessible through direct experience.

If you can find someone doing somatic-psychoeducation, I recommend it, but a quick Googling shows nobody in Norwich.

As far as the Indian tradition goes, they do something called Satsang. Good Satsang teachers usually have a kind of charisma that the average person can perceive and that's impressive to some people who do feel emotions naturally. If you aren't neurotypical, get a neurotypical person to come along to see whether they feel the charisma. A teacher without his own spiritual experience, who just reiterates what he read somewhere, won't have that charisma.

If you can find a good Satsang session, sitting in and asking questions with the goal of trying to predict the answers might be a good way to learn the framework, even if you don't completely take it for yourself.

I don't subscribe to perennialism, according to which all spiritual traditions say the same thing. At the same time, there are things that are common across multiple traditions.

As far as written descriptions go, a written description of the nature of the color red doesn't give a blind person a real idea of what red looks like, even when it's written in braille. I don't think that any decent spiritual tradition works simply through reading descriptions. Most have at least some instance of teaching via questions & answers.

The comment was about emergentism, but your reply was about soul theory, which is quite different.

Strong emergentism is notoriously badly defined, but a typical version might include:

1 mental phenomena are irreducible, or have an irreducible component

2 mental phenomena are not predictable from neural activity by standard physical laws

3 mental phenomena are related to neural activity by special psychophysical laws

Note that 3 guarantees a close relationship between neural activity and consciousness.

Strong emergentism is notoriously badly defined, but a typical version might include:

...

3 mental phenomena are related to neural activity by special psychophysical laws

Is anyone claiming to have found any yet?

No, but that's another issue, again.

You can make the same argument about radios or other devices that are relays for information. Without understanding how a radio works, it's really hard to know that the content that the radio plays isn't an emergent phenomenon.

I do not get this analogy. We know quite a bit about how the brain works at the neuronal level. A rigorous program of research exists that should give us an understanding of increasingly coarse modules over time. Simulating a brain in silico is an eventually-achievable method to extensively test almost any hypothesis we could develop.

When I say consciousness is emergent I'm saying that I believe neuroscience will eventually be able to pinpoint the mechanisms of almost any type of higher-order thought, and come up with as-useful-as-is-possible a definition of things like qualia and self-awareness, and that these mechanisms will all relate to complex, dynamic neuronal and chemical behavior in the brain.

An example non-emergent explanation of consciousness would be "the brain is an antenna for ethereal souls", which would be hard to test but would have to be given consideration if the program I outline above completely fails to fully account for thoughts and experiences above a certain complexity.

Simulating a brain in silico is an eventually-achievable method to extensively test almost any hypothesis we could develop.

You just assume that's true. Before we actually run that simulation in practice, we don't know whether that's true.

When I say consciousness is emergent I'm saying that I believe neuroscience will eventually be able to pinpoint the mechanisms of almost any type of higher-order thought

Yes, and other people do believe in souls and God. We don't have evidence that proves either hypothesis.

An example non-emergent explanation of consciousness would be "the brain is an antenna for ethereal souls", which would be hard to test but would have to be given consideration if the program I outline above completely fails to fully account for thoughts and experiences above a certain complexity.

Yes, and the brain-as-an-antenna hypothesis is basically what parapsychologists like Dean Radin advocate these days. We don't yet have evidence to prove it wrong.

Saying that we could in theory run experiments which, if they turn out a certain way, would prove our theory right is not the same thing as arguing that there is evidence for your theory.

Science lives by distinguishing what you know from what you don't know.

I am making predictions, but they are predictions that a concrete, existing program of research (the field of neuroscience) is trying to test.

I obviously can't conjure this evidence out of thin air, because it doesn't yet exist (and, sure, may never exist). But I am outlining why I believe that calling consciousness emergent is a perfectly valid, predictive hypothesis in the context of neuroscience. Saying "phenomenon X is emergent" is, I believe, not an empty statement at all, but more-or-less equivalent to saying "the question 'what singular external thing causes phenomenon X?' should be dissolved", with panpsychism being the anti-emergent hypothesis in this case.

And I also believe that emergent consciousness is more likely to be the correct view, and I hope I've given clear reasons why that's so.

You're using "emergent" to mean "reductiomistic", which is pretty much the opposite

I think you don't understand what emergent means. Traffic jams emerge from individual drivers' behavior for instance.

Emergent has more than one meaning.

Are you actually confused by my terminology (in which case I'll clarify) or are you just being pedantic?

I am pointing out something which may stop you getting into pointless discussions with people who use the word differently.

Cognition is the Easy Problem.

If I made an in silico simulation of a human brain that could convincingly match human cognition, what would stop you from believing that it was also conscious?

I wouldn't say it wasn't and I wouldn't say it was.

A functional duplicate of a qualiaphile would report qualia, even if it didn't have them, and a functional duplicate of a qualiaphobe would deny it had qualia even if it did.

ETA:

In other words, everything is predictable from whose brain is emulated. We need some other test.

Still, isn't "emerges" even there a shortcut for "then it somehow happens... but I don't know how specifically"?

To rephrase what you wrote:

If information is processed by certain structures with many feedback loops (like brains) then... sometimes... I am not sure what specific conditions are necessary... consciousness happens.

Of course it feels much less convincing when written this way. As it should. Because it honestly admits that I actually don't know the details, and maybe some critical part is still missing.

It's incomplete, but that's okay.

Concretely, this hypothesis tells us to look at physical structures and neuronal activity in the brain and compare them across individuals and species.

The field of neuroscience seems to bear this out. Our perceptions and emotions are accounted for by brain activity. Seemingly deeper issues, like memory formation and temporal perception, have been successfully localized and understood to a great degree.

In particular, even though we don't understand everything, there are no obvious gaps. The phenomena we still don't understand seem hard to understand because they are high-level or occur very diffusely, not because they aren't generated by the activity of neurons in the brain (hypothetical contradictory evidence would be people reporting some type of highly distinctive experience [say out-of-body, or less mystically, deja vu] while an MRI shows no deviation from normal resting activity).

"Accounted for" is ambiguous between "correlated with", and "explained by".

By perceptions I mean our senses and by emotions I mean broad emotions like sadness, anger and excitement.

These sorts of lower-level experiences, which are also present in animals, are fully accounted for, correlated with AND explained by neural and chemical activity in the brain. By reading the electrical activity of your neurons, I could figure out what you were seeing. By electrically stimulating a certain part of the brain, I could make you feel angry or happy or sad.

This level of deep mechanistic understanding seems to be coming for other phenomena, but yes that is a prediction of the future so no I can't prove it right this second.

Train a person who has been blind from birth in your technique.

Hand them a braille readout of the neural activity of someone looking at a tomato.

Would they now know how red the things look to a sighted person?

Yes, they could easily tell the distribution of color receptor activation.

Not what I asked.

Then what are you asking? Please, precisely define what it would mean to "know how red the things look".

Look at a tomato.

That's how a red thing looks.

The problem was, he eliminated the impossible, but left open a vast space of possible hypotheses that he didn't know about (but which we do): the most common of these is the computational theory of mind and consciousness, which says that we are made of cognitive algorithms.

Wait, are you suggesting that the reviewer (Jerry Fodor) is unaware of the computational theory of mind? Unlikely, given that he is one of its progenitors. From the wikipedia article on the computational theory:

The theory was proposed in its modern form by Hilary Putnam in 1961, and developed by the MIT philosopher and cognitive scientist (and Putnam's PhD student) Jerry Fodor in the 1960s, 1970s and 1980s

Yep. Eli failed to consider the hypothesis that the philosophers who reject CTM do so because they have objections to it, rather than because they have never heard of it.

If you can notice when you're confused, how do you notice when you're ignorant?

Have you noticed when YOU are confused?

1) The universe is a unified, consistent whole. Good!

"Good"? What does the statement even mean? What would be an alternative? non-unified whole? unified parts? non-unified bits and pieces? How would you tell?

2) The universe contains the experience/existence of consciousness. Easily observable.

Depends on your definition of consciousness. Is it one of the qualia? An outcome on the mirror test? Something else? If it's a quale, do qualia exist in the same way physical things do? The statement above is meaningless without specifying the details.

3) If consciousness exists, something in the universe must cause or give rise to consciousness. Good reasoning!

Eh, bad reasoning. Depends on the definition of "cause", which is more logic than physics. Causality in physics is merely a property of a certain set of the equations of motion, which is probably not what is meant in the above quote.

4) "Emergence" is a non-explanation, so that can't be it. Good!

Bad. Emergence "as a process whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties" does not necessarily imply irreducibility, so even if we can reduce humans to quarks, humans can have properties which quarks don't. Anyway, I grant this one if it means "everything is reducible" and nothing more. Of course, the reduced constituents are not required to have all the properties of the whole.

5) Therefore, whatever stuff the unified universe is made of must be giving rise to consciousness in a nonemergent way.

Presumably this means "in a non-dualist way", i.e. a complex enough optimizer is not granted consciousness by some irreducible entity.

6) Therefore, the stuff must be innately "mindy".

To argue against it, you don't need "the computational theory of mind and consciousness", just note that, say, atoms are not innately solid or liquid, so some properties of complex systems are meaningless when applied to its constituents.

Of course, maybe I am the one who is confused and not noticing it...

"Emergence" means different things to different people. Yep, this is an argument about definitions...

We can explain how to build solidity and liquidity out of atoms.

If you want to argue for CTM, it would help to explain how Red and Painful and Itchy are built out of bits and bytes.

That seems to be within the domain of neuroscience (for what physiologically is going on in "itchy" and how "itchy" sensations are distinguished in the nervous system from "painful" or "red" ones) and possibly neurolinguistics (for how we acquire the category "itchy" and learn to refer to it when we describe our sensations to ourselves or others).

There might be a sideline of some other branch of psychology for why people get so damn defensive about the idea that their ego is a Real Thing that Really Has Real Experiences as opposed to a cogno-intellectual process running on a symbiotic ape brain.

Neuroscience can match off known neural activity to known sensations on a posteriori evidence, but it cannot provide a principled and predictive explanation of why a particular neural event should feel a particular way.

How we verbally categorise phenomenal feels is also not the hard problem.

The ego is also not the hard problem. You might want to say that egos don't exist, but it seems to us that we have them, or we feel we have them. That is a dissolution of the ego, not of qualia.

I don't think you should so much notice that you're ignorant as assume you're ignorant. You always assign some probability to "something I haven't thought of". You do need to notice when you're making an implicit assumption that you've thought of everything. And you need to figure out how much probability to assign to things you haven't thought of.

I don't think there's any good theoretical way to figure out how likely it is that the answer is something that you haven't thought of. You just have to practice. I'm not sure how you can practice.

You should probably be skeptical when presented with binary hypotheses (either by someone else or by default). Say in this example that H1 is "emergence". The alternative for H1 isn't "mind-stuff" but simply ~H1. This includes the possibility of "mind-stuff" but also any alternatives to both emergence and mindstuff. Maybe a good rule to follow would be to assume and account for your ignorance from the beginning instead of trying to notice it.

One way to make this explicit might be to always have at least three hypotheses: one in favor, one for an alternative, and a catchall for ignorance, with the catchall reflecting how little you know about the subject. The less you know about the subject, the larger your catchall bucket.

Maybe in this case, your ignorance allocation (i.e. prior probability for ignorance) is 50%. This would leave 50% to share between the emergence hypothesis and the mindstuff hypothesis. I personally think that the mindstuff hypothesis is pretty close to zero, so the remainder would be in favor of emergence, even if it's wrong. In this case, "emergence" is asserted to be a non-explanation, but this could probably be demonstrated in some way, like sharing likelihood ratios; that might even show that "mindstuff" is an equally vapid explanation for consciousness.
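
A minimal sketch of that three-bucket scheme with a toy update; every name and number here is hypothetical, not an actual analysis:

```python
# Minimal sketch; hypothesis names and numbers are hypothetical.
priors = {
    "emergence": 0.45,
    "mindstuff": 0.05,
    "something_else": 0.50,  # the ignorance allocation
}

def update(priors, likelihoods):
    """One Bayes step: posterior is proportional to prior times likelihood."""
    unnorm = {h: p * likelihoods.get(h, 1.0) for h, p in priors.items()}
    total = sum(unnorm.values())
    return {h: u / total for h, u in unnorm.items()}

# A vapid "explanation" assigns roughly the same likelihood to any
# observation, so it generates no likelihood ratio and its share can't move.
posterior = update(priors, {"emergence": 1.0, "mindstuff": 1.0})
assert all(abs(posterior[h] - priors[h]) < 1e-12 for h in priors)
```

If both "emergence" and "mindstuff" turn out to be non-explanations in this sense, evidence can never move mass between them, which would be one way of demonstrating that both are equally vapid.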

Once you come to a conclusion, try to apply it to make a prediction or even just see whether it could've been used to predict some previously known things. If not, you're still ignorant.

Of course this is hard in a field where almost none of your interlocutors consider making predictions to be a useful thing.


If you can notice when you're confused, how do you notice when you're ignorant?

I think one tricky thing about this question is there are cases where I am ALWAYS ignorant, and the question to ask instead is, is my ignorance relevant? I mean, I tried to give a short example of this with a simple question, below, but ironically, I was ignorant about how many different ways you could be ignorant about something until I started trying to count them, and I'm likely still ignorant about it now.


For instance, take the question: What is my spouse's hair color?

Presumably, a good deal of people reading this are somewhat ignorant about that.

On the other hand, they probably aren't as ignorant as a blind visiting interstellar Alien, Samplix, who understands English but nothing about color, although Samplix has also been given an explanation of a hexadecimal color chart and has decided to guess that the RGB value of my spouse's hair is #66FF00.

But you could also have another blind alien, Sampliy, who wasn't even given a color chart, doesn't understand what words are colors and what words aren't, and so goes to roughly the middle of a computer English/Interstellar dictionary and guesses "Mouse?"

Or another visiting Alien, Sampliz, who doesn't understand English and so responds with '%@%$^!'

And even if you know my spouse has black hair, you could get more specific than that:

For instance, a Hair-analyzing computer might be able to determine that my spouse has approximately 300,000 hairs, and 99% of them happen to be the Hexadecimal shade #001010, but another, more specific Hair-analyzing computer might say that my spouse has 314,453 hairs, and 296,415 of them are Hexadecimal shade #001010, and 10,844 of them are Hexadecimal shade #001011, and...

And even if you were standing with that report from the second computer, saying "Okay, it finished its report, and I have this printout from an hour ago, so I am DEFINITELY not ignorant about your spouse's hair color."

Well, what if I told you my spouse just came back from a Hair salon?


The above list isn't exhaustive, but I think it establishes the general point. My spouse's hair color seems like the kind of question which someone could be ignorant about in fewer ways than something as confusing as consciousness, and yet... even spousal hair color is complicated.

I think there's a relevant difference here between being ignorant of actual data that you are aware exists (e.g. the color of hair), and being ignorant of the existence of alternative theories or models (e.g. possible alternative meanings of the word "color").


That seemed to make sense to me at first, but I'm having a hard time actually finding a good dividing line to show the relevant difference, particularly since what seems like it can be model ignorance for one question can be data ignorance for another question.

For instance, here are possible statements about being ignorant about the question: "What is my spouse's hair color?"

1: "I don't know your spouse's hair color."

2: "I don't know if your spouse has hair."

In this context, 1 seems like data ignorance, and 2 would seem like model ignorance.

But given a different question "Does my spouse have hair?"

2 is data ignorance, and 1 doesn't seem to be a well phrased response.

And there appear to be multiple levels of this as well: For instance, someone might not know whether or not I have a spouse.

What is the best way to handle this? Is it to simply try to keep track of the number of assumptions you are making at any given time? That seems like it might help, since in general, models are defined by certain assumptions.

I've sometimes found it productive to explicitly add "the hypothesis that hasn't occurred to me" to the list. To remind me there is (at least) one.


I sort of just always assume that my current hypotheses are a waypoint on the road to greater understanding. I'm not confident in the things I do know, as there's always the possibility of unknown unknowns.

What frightens me is: what if I'm presented with some similar argument, and I can't spot the flaw?

Having recognized this danger, you should probably be more skeptical of verbal arguments.


Of course panpsychism is bunk.

Is it?

(but what jimrandomh says is still correct)

I would like an explication of the reasons why it seems to be false. In particular, I fail to see how a computational account would count against it. You can compute with levers, transistors and a large array of different things. And actually, there is nothing you can't compute with. Thus you can compute with anything. So anything is "computy", which is another way of saying it's "mindy". But of course, the fact that everything can be used in computing doesn't mean the computations are of equal value/complexity. Thus there is a genuine difference between rocks and people. But it still allows that there is "what it feels like to be a rock inside". Granted, it probably isn't anything grand or interesting. However, it would be really weird if there was a clear division where "feeling" began and "cold" motion stopped.

I would like to note that an abstraction where we disregard "feelings" and focus on technical public impact with the environment can lead to a "cold" conception of the world. However, when used as a worldview (that is, outside of tracking positions and mechanics), it is quite erroneous. In an extreme extrapolation, you are just a robot and should be "cold". This kind of non-psychism has the loudest counterevidence there is available: you do feel (crossing fingers that you are not a zombie). Whether psychism extends beyond you is an open question. If you can get around the problem of other minds, that is, the existence of psyches like you, why would you assume that there are only feelers like you? I.e., there is an analogous problem of the mindedness of others: given that you could not directly experience the feelings of rocks, why would you assume they don't have them?

If the answer is purely that you are used to abstracting that facet of them away because of practical needs, that doesn't answer the theoretical question. It is the same way a psychopath would treat fully fledged people: to him it doesn't matter what people are on the inside, only what he can do with them. In that way the "cold" and "feely" ways of relating to your surroundings don't disagree about what the mechanics are. But why insist that the "feely" way is false or inferior?

Anything is potentially computy, which is analogous to panPROTOpsychism.

If it isn't computing for me it isn't computing?

Rejecting hypotheses can only bring you to a state where you don't know what's going on. It's not constructive in a way where it brings you to the conclusion that one of the alternatives is true.

It would probably make sense to say "I don't know" over a wider array of questions.

If you can notice when you're confused, how do you notice when you're ignorant?

I actually have a specific feeling associated with everything clicking together. If I don't have that feeling, my model does not perfectly explain everything which means there's something I'm not considering. In that case, I go looking for alternative hypotheses.

If you can notice when you're confused, how do you notice when you're ignorant?

You don't need to notice that you're ignorant if you already know that you are.

One of the structural commitments of Korzybski (of "the map is not the territory" fame) is that abstractions always leave out some facts. My concept of a thing is not the thing itself: the map is not the territory. This consciousness of abstraction entails a consciousness of ignorance.

When he had eliminated the impossible, whatever remained, however low its prior, must be true.

Eliminated by his calculations, with his priors, with his abstractions. What's the probability that those are wrong? What's the probability that he hadn't taken everything into account? And then, what's the chance that he hadn't been thorough enough in his enumeration of "whatever remained"?

Jaynes has a nice example of rejecting "whatever remained": putting a "something else" theory into the analysis and assigning some small probability to it.
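
In symbols (generic notation of my own, not Jaynes's specific example): with named hypotheses $H_1, \dots, H_n$, data $D$, and a residual "something else" hypothesis $H_{SE}$ given some small prior $\epsilon$,

$$P(H_k \mid D, I) = \frac{P(D \mid H_k, I)\, P(H_k \mid I)}{\sum_{i=1}^{n} P(D \mid H_i, I)\, P(H_i \mid I) + P(D \mid H_{SE}, I)\, \epsilon}$$

so as long as $H_{SE}$ keeps any mass, no named hypothesis among "whatever remained" can be driven all the way to certainty.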

Also, like Korzybski, Jaynes encourages a consciousness of abstraction by conditioning all probabilities on background knowledge I, as in P(X | a_1, a_2, ..., I). There's my background knowledge I, staring back at me. What if it's incorrect?

So there are two main failures in these proof by contradiction scenarios. The first is to fail to include a valid alternative. The second is that your I, your model and assumptions, suck. They are wrong, or worse, not even wrong.

Philosophers aren't actually ignorant of computational theories of mind. Some of them reject CTM because it seems to have no more ability to address qualia/hard-problem issues than materialism (in fact, one can robustly argue that computationalism doesn't add anything to materialism in terms of powers or properties, and that CTM is therefore less able to explain qualia than straight materialism).

So, before LW starts shouting about the stupidity of philosophers, LW needs to say something about the Hard Problem.

At the moment there isn't even a consensus.

ETA: having re-read Fodor's review, I notice there are frequent references to hard-problem issues, qualia, conscious experience, etc. I am not sure whether Eli thinks they're unimportant, or thinks the CTM explains them, or what.

panpsychism is bunk.

Panpsychism is the least defensible of a set of related concepts.