
How do you notice when you are ignorant of necessary alternative hypotheses?

16 [deleted] 24 June 2014 06:12PM

So I just wound up in a debate with someone over on Reddit about the value of conventional academic philosophy.  He linked me to a book review; both the review and the book are absolutely godawful.  That is, the author (and the reviewer following him) starts with ontological monism (the universe only contains a single kind of Stuff: mass-energy), adds in the experience of consciousness, reasons deftly that emergence is a load of crap... and then arrives at the conclusion of panpsychism.

WAIT HOLD ON, DON'T FLAME YET!

Of course panpsychism is bunk.  I would be embarrassed to be caught upholding it, given the evidence I currently have, but what I want to talk about is the logic being followed.

1) The universe is a unified, consistent whole.  Good!

2) The universe contains the experience/existence of consciousness.  Easily observable.

3) If consciousness exists, something in the universe must cause or give rise to consciousness.  Good reasoning!

4) "Emergence" is a non-explanation, so that can't be it.  Good!

5) Therefore, whatever stuff the unified universe is made of must be giving rise to consciousness in a nonemergent way.

6) Therefore, the stuff must be innately "mindy".

What went wrong in steps (5) and (6)?  The man was actually reasoning more-or-less correctly!  Given the universe he lived in, and the impossibility of emergence, he reallocated his probability mass to the remaining answer.  When he had eliminated the impossible, whatever remained, however low its prior, must be true.

The problem was, he eliminated the impossible, but left open a huge vast space of possible hypotheses that he didn't know about (but which we do): the most common of these is the computational theory of mind and consciousness, which says that we are made of cognitive algorithms.  A Solomonoff Inducer can just go on to the next length of bit-strings describing Turing machines, but we can't.
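The "next length of bit-strings" step can be sketched as a toy enumeration (this is only an illustration of the ordering, not an actual Solomonoff inducer; the function name is mine):

```python
from itertools import count, product

def program_codes():
    """Yield all bit-strings in order of increasing length -- the
    order in which an idealized inducer would enumerate candidate
    Turing-machine descriptions."""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

gen = program_codes()
first_six = [next(gen) for _ in range(6)]
print(first_six)  # ['0', '1', '00', '01', '10', '11']
```

The enumeration never runs out: when the hypotheses of one length are exhausted, there is always a next length. A human reasoner has no such guarantee of reaching the hypothesis that isn't yet in his catalogue.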

Now, I can spot the flaw in the reasoning here.  What frightens me is: what if I'm presented with some similar argument, and I can't spot the flaw?  What if, instead, I just neatly and stupidly reallocate my belief to what seems to me to be the only available alternative, while failing to go out and look for alternatives I don't already know about?  Notably, since expected evidence is conserved, expecting to locate new hypotheses later means I should reduce my certainty in all currently-available hypotheses now, reserving some probability mass to divide among the possibilities I haven't found yet.
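One way to make that reservation of probability mass concrete is a minimal bookkeeping sketch (all the numbers below are made up for illustration):

```python
def renormalize(probs):
    """Scale a dict of probabilities so they sum to 1."""
    total = sum(probs.values())
    return {h: p / total for h, p in probs.items()}

# Beliefs over the hypotheses I currently know about, plus a bucket
# reserved for hypotheses I haven't located yet.
beliefs = {"emergence": 0.05, "mindy_stuff": 0.15, "unknown": 0.80}

# A new hypothesis turns up (say, the computational theory of mind).
# Its probability mass is carved out of the "unknown" bucket rather
# than stolen arbitrarily from the named hypotheses.
new_mass = 0.6 * beliefs["unknown"]
beliefs["computational"] = new_mass
beliefs["unknown"] -= new_mass
beliefs = renormalize(beliefs)
```

The point of the sketch is only the bookkeeping: if the "unknown" bucket starts at zero, no later renormalizing can give a newly-discovered hypothesis positive probability.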

If you can notice when you're confused, how do you notice when you're ignorant?

Comments (69)

Comment author: jimrandomh 24 June 2014 06:34:04PM 30 points [-]

I think the error is actually (4). "Emergence" is a non-explanation because it's way too vague; it encompasses many different possible explanations and doesn't narrow things down enough. Because it's a non-explanation in this particular way, you cannot take its inverse. Imagine Sherlock Holmes saying: "'Someone killed him' is a non-explanation, so that can't be it."

Comment author: TheAncientGeek 25 June 2014 03:23:44PM *  1 point [-]

Strawson says what he means by emergence, in order to reject it, so this is a side issue.

Comment author: [deleted] 24 June 2014 06:38:25PM -2 points [-]

Looking at it "retrospectively" (ie: knowing the better answer), I disagree. "Emergence", even if it meant anything at all, does not beget computation, and even if it did, it wouldn't beget optimization, and even if it did... well then it might beget consciousness, but not without first going through computation and optimization.

So eliminating "emergence" is actually quite correct.

But there's still the problem of noticing when you're ignorant. Given the computational theory of mind, knowing that information is very likely built into the universe just like mass-energy (and that computation is thus built-in as well), it's easy to look at someone getting the wrong answer and laugh at how wrong he is.

What's harder is looking at a problem and noticing how you kinda-sorta have the tools to tackle it, but you don't really.

Comment author: gjm 24 June 2014 07:04:21PM 16 points [-]

I think you may be talking past one another. In Jim's view as I understand it (and also, for what it's worth, in mine) the computational physicalist theory of mind is a special case of "emergence", and what's wrong with "emergence" as an answer to the question "where does consciousness come from?" is not that it's false but that it's uninformative.

All "emergence" means on this reading is something like "consciousness is something that happens when the right sorts of physical processes do", which I think is correct (at least as regards our consciousness; perhaps there could be non-physical processes that also produce consciousness; or perhaps not) and which I think is what Fodor says Strawson strenuously denies.

Comment author: Viliam_Bur 25 June 2014 09:46:27AM *  2 points [-]

not that it's false but that it's uninformative

It's like eating a tasty cake, and asking: "How did you make this cake?", and receiving an answer: "It's made of atoms."

The answer is completely useless for cooking, and doesn't explain anything about how the person actually made the cake. If someone is using this as an explanation in a cake-making course, they should be fired, because they don't provide useful knowledge.

Technically, the answer remains true.

Also, consciousness has emerged from the interaction of atoms. But without more details, this answer is useless for any practical purpose. Yeah, there are atoms everywhere, and they interact all the time. Sometimes, consciousness emerges. Most of the time, it doesn't. The important question is what makes the difference.

Comment author: Punoxysm 25 June 2014 10:02:59PM *  1 point [-]

Luckily with consciousness, we have a broad idea of how it emerges: The processing of information (a physically measurable property) by certain structures with many feedback loops (particularly brains) causes the emergence of consciousness; and in particular saying that it's emergent tells us that consciousness relates to the structure AND dynamics of the brain, and not to some intangible glob of soul attached to it.

So saying "emergence" plus the one sentence above actually gets us really far out of the land of mysticism.

Comment author: Viliam_Bur 26 June 2014 08:23:27AM *  4 points [-]

Still, isn't "emerges" even there a shortcut for "then it somehow happens... but I don't know how specifically"?

To rephrase what you wrote:

If information is processed by certain structures with many feedback loops (like brains) then... sometimes... I am not sure what specific conditions are necessary... consciousness happens.

Of course it feels much less convincing when written this way. As it should. Because it honestly admits that I actually don't know the details, and maybe some critical part is still missing.

Comment author: Punoxysm 26 June 2014 07:54:23PM -1 points [-]

It's incomplete, but that's okay.

Concretely, this hypothesis tells us to look at physical structures and neuronal activity in the brain and compare them across individuals and species.

The field of neuroscience seems to bear this out. Our perceptions and emotions are accounted for by brain activity. Seemingly deeper issues, like memory formation and temporal perception, have been successfully localized and understood to a great degree.

In particular, even though we don't understand everything, there are no obvious gaps. The phenomena we still don't understand seem hard to understand because they are high-level or occur very diffusely, not because they aren't generated by the activity of neurons in the brain (hypothetical contradictory evidence would be people reporting some type of highly distinctive experience [say out-of-body, or less mystically deja vu] while an MRI shows no deviation from normal resting activity).

Comment author: TheAncientGeek 27 June 2014 12:14:08PM 2 points [-]

"Accounted for" is ambiguous between "correlated with", and "explained by".

Comment author: Punoxysm 27 June 2014 05:06:45PM 0 points [-]

By perceptions I mean our senses and by emotions I mean broad emotions like sadness, anger and excitement.

These sorts of lower-level experiences, which are also present in animals, are fully accounted for, correlated with AND explained by neural and chemical activity in the brain. By reading the electrical activity of your neurons, I could figure out what you were seeing. By electrically stimulating a certain part of the brain, I could make you feel angry or happy or sad.

This level of deep mechanistic understanding seems to be coming for other phenomena, but yes that is a prediction of the future so no I can't prove it right this second.

Comment author: TheAncientGeek 27 June 2014 05:23:24PM *  1 point [-]

Train a person who has been blind from birth in your technique.

Hand them a braille readout of the neural activity of someone looking at a tomato.

Would they now know how red the things look to a sighted person?

Comment author: RichardKennaway 26 June 2014 10:23:50AM *  3 points [-]

This is a statement of materialism, not evidence or an argument that it is true.

The evidence for consciousness being a material phenomenon -- the only evidence, as far as I can see -- is the remarkable correspondence observed between physical brain phenomena and mental phenomena. But we have no knowledge of how matter produces consciousness. The hard example against which materialist woo (not all woo is mystical woo) generally fails is, "does this purported explanation predict the absence of consciousness in the cerebellum and motor control?"

ETA: Compare this with the materialist claim that the heart is a pump. Leonardo da Vinci could see that it's a pump just by dissecting it. We can see the mechanism, model it, show how electrical signals trigger the beats, understand dysfunctions like fibrillation, implant artificial pacemakers that work pretty well, and use external pumps to take over the function during heart surgery. We know how the heart works to produce the "emergent" property of pumping blood, and we can make pumps to perform the same function. All of this is so far missing from our knowledge of consciousness.

Comment author: Punoxysm 26 June 2014 07:59:57PM 2 points [-]

The evidence of the materialist view seems very strong to me; in particular, pretty much all of neuroscience bears it out; as you note there is:

the remarkable correspondence observed between physical brain phenomena and mental phenomena

And I disagree that it fails at

"does this purported explanation predict the absence of consciousness in the cerebellum and motor control?"

Comparative neuroscience between species or patients with certain types of brain damage really does give us a concrete idea of how "more complex" and "higher-order" cognition, at the very least part of the puzzle of consciousness, correlates with the presence of certain types of anatomical structures.

Is the brain much more complex than a pump? Sure. Does that mean that any other hypothesis comes anywhere near the purely materialist one? No. And even weird quantum effects, though there's no strong evidence for them, still fall under the umbrella of materialism.

Comment author: RichardKennaway 26 June 2014 08:14:03PM 3 points [-]

Just to be clear, I'm not arguing against materialism, just pointing out that we have no idea how it works.

"does this purported explanation predict the absence of consciousness in the cerebellum and motor control?"

Comparative neuroscience between species or patients with certain types of brain damage really does give us a concrete idea of how "more complex" and "higher-order" cognition, at the very least part of the puzzle of consciousness, correlates with the presence of certain types of anatomical structures.

A catalogue of brain regions that do correspond to conscious experience and those that do not does not amount to an explanation of how those that do, do, and those that don't, don't.

Comment author: Punoxysm 26 June 2014 08:41:18PM 0 points [-]

A catalogue of brain regions that do correspond to conscious experience and those that do not does not amount to an explanation of how those that do, do, and those that don't, don't.

Not just a catalogue; an understanding of their anatomical differences at the macroscopic and microscopic level, detailed studies of their electrical activities, and soon enough a neuron-level connectome to complement ever-more-fine-grained monitoring of electrical activity. This would provide the means to match more and more experiences to specific neuronal activity (or large complex - but still quantifiable - patterns of neuronal activity), including activities like deep introspection, meditation and creative work.

In the more distant future, a brain simulation that behaves like a person would be very strong evidence of the materialist view. If only the Chinese Room or Philosophical Zombie objections remain, then I'd consider the question of consciousness solved or at least dissolved.

Comment author: ChristianKl 27 June 2014 04:28:09PM 1 point [-]

The evidence of the materialist view seems very strong to me; in particular, pretty much all of neuroscience bears it out; as you note there is:

the remarkable correspondence observed between physical brain phenomena and mental phenomena

You can make the same argument about radios or other devices that are relays for information. Without understanding how a radio works it's really hard to know that the content that the radio plays isn't an emergent phenomenon.

Comment author: RichardKennaway 28 June 2014 12:24:29AM 4 points [-]

You can make the same argument about radios or other devices that are relays for information. Without understanding how a radio works it's really hard to know that the content that the radio plays isn't an emergent phenomenon.

When a radio is damaged, all that is affected is the clarity or the presence of the material that is being transmitted. There is no damage to a radio that would make spoken word material sound just the same, except that all nouns naming animals were garbled. The material coming over the radio has aspects to it that malfunctions of the radio may obscure but never manipulate.

In contrast, the correspondences found between brain damage and phenomena of consciousness suggest a very broad connection of the brain to the hypothetical soul, a connection so broad that there seems little work left for the soul to do. "Brain as the antenna of the soul" is at present looking very like "God in the gaps".

Comment author: ChristianKl 28 June 2014 06:01:00PM 0 points [-]

I wouldn't see the names of animals as phenomena of consciousness. I would rather label them mental phenomena.

Plenty of people meditate in an effort to raise their level of consciousness and transcend the mind that goes around and labels and judges.

Comment author: TheAncientGeek 28 June 2014 12:32:45PM 0 points [-]

The comment was about emergentism, but your reply was about soul theory, which is quite different.

Strong emergentism is notoriously badly defined, but a typical version might include:

1 mental phenomena are irreducible, or have an irreducible component

2 mental phenomena are not predictable from neural activity by standard physical laws

3 mental phenomena are related to neural activity by special psychophysical laws

Note that 3 guarantees a close relationship between neural activity and consciousness.

Comment author: Punoxysm 27 June 2014 04:51:14PM 0 points [-]

You can make the same argument about radios or other devices that are relays for information. Without understanding how a radio works it's really hard to know that the content that the radio plays isn't an emergent phenomenon.

I do not get this analogy. We know quite a bit about how the brain works at the neuronal level. A rigorous program of research exists that should give us an understanding of increasingly coarse modules over time. Simulating a brain in silico is an eventually-achievable method to extensively test almost any hypothesis we could develop.

When I say consciousness is emergent I'm saying that I believe neuroscience will eventually be able to pinpoint the mechanisms of almost any type of higher-order thought, and come up with as-useful-as-is-possible a definition of things like qualia and self-awareness, and that these mechanisms will all relate to complex, dynamic neuronal and chemical behavior in the brain.

An example non-emergent explanation of consciousness would be "the brain is an antenna for ethereal souls", which would be hard to test but would have to be given consideration if the program I outline above completely fails to fully account for thoughts and experiences above a certain complexity.

Comment author: ChristianKl 27 June 2014 05:10:40PM 2 points [-]

Simulating a brain in silico is an eventually-achievable method to extensively test almost any hypothesis we could develop.

You just assume that's true. Before we actually do run that simulation in practice we don't know whether that's true.

When I say consciousness is emergent I'm saying that I believe neuroscience will eventually be able to pinpoint the mechanisms of almost any type of higher-order thought

Yes, and other people do believe in souls and God. We don't have evidence that proves either hypothesis.

An example non-emergent explanation of consciousness would be "the brain is an antenna for ethereal souls", which would be hard to test but would have to be given consideration if the program I outline above completely fails to fully account for thoughts and experiences above a certain complexity.

Yes, and the brain as an antenna hypothesis is basically what parapsychologists like Dean Radin advocate these days. We don't yet have evidence to prove it wrong.

Saying we could in theory run experiments that, if they turn out a certain way, would prove our theory right is not the same thing as arguing that there is evidence for your theory.

Science lives from distinguishing what you know and what you don't know.

Comment author: TheAncientGeek 28 June 2014 12:38:03PM 0 points [-]

You're using "emergent" to mean "reductionistic", which is pretty much the opposite.

Comment author: TheAncientGeek 27 June 2014 12:03:24PM 1 point [-]

Cognition is the Easy Problem.

Comment author: Punoxysm 27 June 2014 05:11:49PM 0 points [-]

If I made an in silico simulation of a human brain that could convincingly match human cognition, what would stop you from believing that it was also conscious?

Comment author: TheAncientGeek 27 June 2014 05:19:12PM *  1 point [-]

I wouldn't say it wasn't and I wouldn't say it was.

A functional duplicate of a qualiaphile would report qualia, even if it didn't have them, and a functional duplicate of a qualiaphobe would deny it had qualia even if it did.

Eta:

In other words, everything is predictable from whose brain is emulated. We need some other test.

Comment author: pragmatist 25 June 2014 10:50:19AM *  13 points [-]

The problem was, he eliminated the impossible, but left open a huge vast space of possible hypotheses that he didn't know about (but which we do): the most common of these is the computational theory of mind and consciousness, which says that we are made of cognitive algorithms.

Wait, are you suggesting that the reviewer (Jerry Fodor) is unaware of the computational theory of mind? Unlikely, given that he is one of its progenitors. From the wikipedia article on the computational theory:

The theory was proposed in its modern form by Hilary Putnam in 1961, and developed by the MIT philosopher and cognitive scientist (and Putnam's PhD student) Jerry Fodor in the 1960s, 1970s and 1980s

Comment author: TheAncientGeek 25 June 2014 03:33:32PM *  10 points [-]

Yep. Eli failed to consider the hypothesis that the philosophers who reject CTM do so because they have objections to it, rather than because they have never heard of it.

Comment author: DanielLC 24 June 2014 11:19:01PM 8 points [-]

I don't think you should so much notice that you're ignorant as assume you're ignorant. You always assign some probability to "something I haven't thought of". You do need to notice when you're making an implicit assumption that you've thought of everything. And you need to figure out how much probability to assign to things you haven't thought of.

I don't think there's any good theoretical way to figure out how likely it is that the answer is something that you haven't thought of. You just have to practice. I'm not sure how you can practice.

Comment author: drethelin 24 June 2014 09:03:08PM 4 points [-]

Once you come to a conclusion, try to apply it to make a prediction or even just see whether it could've been used to predict some previously known things. If not, you're still ignorant.

Of course this is hard in a field where almost none of your interlocutors consider making predictions to be a useful thing.

Comment author: shminux 24 June 2014 11:21:14PM 8 points [-]

If you can notice when you're confused, how do you notice when you're ignorant?

Have you noticed when YOU are confused?

1) The universe is a unified, consistent whole. Good!

"Good"? What does the statement even mean? What would be an alternative? non-unified whole? unified parts? non-unified bits and pieces? How would you tell?

2) The universe contains the experience/existence of consciousness. Easily observable.

Depends on your definition of consciousness. Is it one of the qualia? An outcome on the mirror test? Something else? If it's a quale, do qualia exist in the same way physical things do? The statement above is meaningless without specifying the details.

3) If consciousness exists, something in the universe must cause or give rise to consciousness. Good reasoning!

Eh, bad reasoning. Depends on the definition of "cause", which is more logic than physics. Causality in physics is merely a property of a certain set of the equations of motion, which is probably not what is meant in the above quote.

4) "Emergence" is a non-explanation, so that can't be it. Good!

Bad. Emergence "as a process whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties" does not necessarily imply irreducibility, so even if we can reduce humans to quarks, humans can have properties which quarks don't. Anyway, I grant this one if it means "everything is reducible" and nothing more. Of course, the reduced constituents are not required to have all the properties of the whole.

5) Therefore, whatever stuff the unified universe is made of must be giving rise to consciousness in a nonemergent way.

Presumably this means "in a non-dualist way", i.e. a complex enough optimizer is not granted consciousness by some irreducible entity.

6) Therefore, the stuff must be innately "mindy".

To argue against it, you don't need "the computational theory of mind and consciousness", just note that, say, atoms are not innately solid or liquid, so some properties of complex systems are meaningless when applied to its constituents.

Of course, maybe I am the one who is confused and not noticing it...

Comment author: TheAncientGeek 25 June 2014 03:21:11PM 0 points [-]

We can explain how to build solidity and liquidity out of atoms.

If you want to argue for CTM, it would help explain how Red and Painful and Itchy are built out of bits and bytes.

Comment author: fubarobfusco 25 June 2014 09:11:39PM 2 points [-]

That seems to be within the domain of neuroscience (for what physiologically is going on in "itchy" and how "itchy" sensations are distinguished in the nervous system from "painful" or "red" ones) and possibly neurolinguistics (for how we acquire the category "itchy" and learn to refer to it when we describe our sensations to ourselves or others).

There might be a sideline of some other branch of psychology for why people get so damn defensive about the idea that their ego is a Real Thing that Really Has Real Experiences as opposed to a cogno-intellectual process running on a symbiotic ape brain.

Comment author: TheAncientGeek 26 June 2014 10:34:33AM *  0 points [-]

Neuroscience can match off known neural activity to known sensations on a posteriori evidence, but it cannot provide a principled and predictive explanation of why a particular neural event should feel a particular way.

How we verbally categorise phenomenal feels is also not the hard problem.

The ego is also not the hard problem. You might want to say that egos don't exist, but it seems to us that we have them, or we feel we have them. That is a dissolution of the ego, not of qualia.

Comment author: TheAncientGeek 25 June 2014 05:48:32PM 0 points [-]

"Emergence" means different things to different people. Yep, this is an argument about definitions...

Comment author: [deleted] 25 June 2014 06:31:09AM 3 points [-]

I sort of just always assume that my current hypotheses are a waypoint on the road to greater understanding. I'm not confident in the things I do know, as there's always the possibility of unknown unknowns.

Comment author: [deleted] 24 June 2014 11:04:21PM *  3 points [-]

Of course panpsychism is bunk.

Is it?

(but what jimrandomh says is still correct)

Comment author: Slider 25 June 2014 07:18:24AM 1 point [-]

I would like an explication of the reasons why it seems to be false. In particular I fail to see how a computational account would count against it. You can compute with levers, transistors, and a large array of different things. And actually there are no things you can't compute with. Thus you can compute with anything. So anything is "computy", which is another way of saying it's "mindy". But of course the fact that everything can be used in computing doesn't mean the computations are of equal value/complexity. Thus a genuine difference between rocks and people. But it still allows that there is "what it feels like to be a rock inside". Granted, it probably isn't anything grand or interesting. However it would be really weird if there was a clear division where "feeling" began and "cold" motion stopped.

I would like to note that an abstraction where we disregard "feelings" and focus on technical public impact with the environment can lead to a "cold" conception of the world. However when used as a worldview (that is, outside of tracking positions and mechanics) it is quite erroneous. In an extreme extrapolation you are just a robot and should be "cold". This kind of non-psychism has the loudest counterevidence there is available: you do feel (crossing fingers that you are not a zombie). Whether the psychism extends beyond you is an open question. If you can get around the problem of other minds, that is, the existence of psychisms like you, why would you assume that there are only feelers like you? I.e. there is an analogous problem of the mindness of the other: given that you could not directly experience the feelings of rocks, why would you assume they don't have them?

If the answer is purely that you are used to abstracting that facet of them away because of practical needs, that doesn't answer the theoretical question. It is the same way a psychopath would treat fully fledged people: to him it doesn't matter what people are on the inside, only what he can do with them. In that way the "cold" and "feely" ways of relating to your surroundings don't disagree about what the mechanics are. But why insist that the "feely" way is false or inferior?

Comment author: TheAncientGeek 25 June 2014 03:30:50PM *  1 point [-]

Anything is potentially computy, which is analogous to panPROTOpsychism.

Comment author: Slider 25 June 2014 08:40:03PM 0 points [-]

If it isn't computing for me it isn't computing?

Comment author: JQuinton 24 June 2014 09:43:38PM *  3 points [-]

You should probably be skeptical when presented with binary hypotheses (either by someone else or by default). Say in this example that H1 is "emergence". The alternative for H1 isn't "mind-stuff" but simply ~H1. This includes the possibility of "mind-stuff" but also any alternatives to both emergence and mindstuff. Maybe a good rule to follow would be to assume and account for your ignorance from the beginning instead of trying to notice it.

One way to make this explicit might be to always have at least three hypotheses: One in favor, one for an alternative, and a catchall for ignorance; the catchall reflecting the little that you know about the subject. The less you know about the subject, the larger your bucket.

Maybe in this case, your ignorance allocation (i.e. prior probability for ignorance) is 50%. This would leave 50% to share between the emergence hypothesis and the mindstuff hypothesis. I personally think that the mindstuff hypothesis is pretty close to zero, so the remainder would be in favor of emergence, even if it's wrong. In this case, "emergence" is asserted to be a non-explanation, but this could probably be demonstrated in some way, like sharing likelihood ratios; that might even show that "mindstuff" is an equally vapid explanation for consciousness.
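The bucket arithmetic above can be sketched directly (all priors and likelihoods here are hypothetical numbers, chosen only to mirror the comment):

```python
# Three buckets: one named hypothesis, one alternative, and a
# catchall for "something I haven't thought of".
priors = {"emergence": 0.49, "mindstuff": 0.01, "catchall": 0.50}

# Made-up likelihoods of some observation under each hypothesis.
likelihood = {"emergence": 0.6, "mindstuff": 0.1, "catchall": 0.3}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = {h: priors[h] * likelihood[h] for h in priors}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}
```

Sharing likelihood ratios, as suggested above, amounts to comparing the `likelihood` entries directly; a vapid explanation is one whose likelihood barely varies across possible observations.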

Comment author: The_Duck 26 June 2014 12:16:55AM *  2 points [-]

What frightens me is: what if I'm presented with some similar argument, and I can't spot the flaw?

Having recognized this danger, you should probably be more skeptical of verbal arguments.

Comment author: ChristianKl 25 June 2014 05:30:14AM 2 points [-]

Rejecting hypotheses can only bring you to a state where you don't know what's going on. It's not constructive in a way that brings you to the conclusion that one of the alternatives is true.

It would probably make sense to say "I don't know" over a wider array of questions.

Comment author: [deleted] 24 June 2014 08:24:58PM 3 points [-]

If you can notice when you're confused, how do you notice when you're ignorant?

I think one tricky thing about this question is there are cases where I am ALWAYS ignorant, and the question to ask instead is, is my ignorance relevant? I mean, I tried to give a short example of this with a simple question, below, but ironically, I was ignorant about how many different ways you could be ignorant about something until I started trying to count them, and I'm likely still ignorant about it now.


For instance, take the question: What is my spouse's hair color?

Presumably, a good deal of people reading this are somewhat ignorant about that.

On the other hand, they probably aren't as ignorant as a blind visiting interstellar Alien, Samplix, who understands English but nothing about color, although Samplix has also been given an explanation of a hexadecimal color chart and has decided to guess that the RGB value of my spouse's hair is #66FF00.

But you could also have another blind alien, Sampliy, who wasn't even given a color chart, doesn't understand what words are colors and what words aren't, and so goes to roughly the middle of a computer English/Interstellar dictionary and guesses "Mouse?"

Or another visiting Alien, Sampliz, who doesn't understand English and so responds with '%@%$^!'

And even if you know my spouse has black hair, you could get more specific than that:

For instance, a Hair analyzing computer might be able to determine that my spouse has approximately 300,000 hairs, and 99% of them happen to be the Hexadecimal shade #001010, but another, more specific Hair Analyzing computer might say that my spouse has 314,453 hairs, and 296,415 of them are Hexadecimal shade #001010, and 10,844 of them are Hexadecimal shade #001011, and...

And even if you were standing with that report from the second computer saying "Okay, it finished its report, and I have this printout from an hour ago, so I am DEFINITELY not ignorant about your spouse's hair color."

Well, what if I told you my spouse just came back from a Hair salon?


The above list isn't exhaustive, but I think it establishes the general point. My spouse's hair color seems like the kind of question which someone could be ignorant about in fewer ways than something as confusing as consciousness, and yet... even spousal hair color is complicated.

Comment author: DanArmak 24 June 2014 08:41:26PM 5 points

I think there's a relevant difference here between being ignorant of actual data that you are aware exists (e.g. the color of hair), and being ignorant of the existence of alternative theories or models (e.g. possible alternative meanings of the word "color").

Comment author: [deleted] 25 June 2014 02:35:47PM 3 points

That seemed to make sense to me at first, but I'm having a hard time actually finding a good dividing line to show the relevant difference, particularly since what seems like model ignorance for one question can be data ignorance for another question.

For instance, here are possible statements about being ignorant about the question. "What is my spouse's hair color?"

1: "I don't know your spouse's hair color."

2: "I don't know if your spouse has hair."

In this context, 1 seems like data ignorance, and 2 would seem like model ignorance.

But given a different question "Does my spouse have hair?"

2 is data ignorance, and 1 doesn't seem to be a well-phrased response.

And there appear to be multiple levels of this as well: For instance, someone might not know whether or not I have a spouse.

What is the best way to handle this? Is it to simply try to keep track of the number of assumptions you are making at any given time? That seems like it might help, since in general, models are defined by certain assumptions.

Comment author: David_Gerard 25 June 2014 12:55:15PM *  1 point

I've sometimes found it productive to explicitly add "the hypothesis that hasn't occurred to me" to the list. To remind me there is (at least) one.
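To make that reminder concrete, here is a minimal, hypothetical sketch (the function name, the hypothesis labels, and the 5% figure are all invented for illustration, not anything from the comment) of holding back a fixed slice of probability for the hypothesis you haven't listed yet:

```python
# Reserve an explicit slice of probability mass for "the hypothesis
# that hasn't occurred to me", then renormalize the named hypotheses
# to share what's left.

def with_catchall(named, reserve=0.05):
    """Scale the named hypotheses to sum to (1 - reserve); the
    remainder goes to an explicit placeholder entry."""
    z = sum(named.values())
    scaled = {h: (1 - reserve) * p / z for h, p in named.items()}
    scaled["not_yet_thought_of"] = reserve
    return scaled

beliefs = with_catchall({"emergence": 0.6, "panpsychism": 0.4})
# beliefs still sums to 1.0, with 5% visibly held back for the
# alternative nobody has named yet.
```

Nothing says 5% is the right reserve; the point is only that the placeholder now appears explicitly in the list, so it can't be silently forgotten.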

Comment author: buybuydandavis 26 June 2014 01:59:36AM *  1 point

If you can notice when you're confused, how do you notice when you're ignorant?

You don't need to notice that you're ignorant if you already know that you are.

One of the structural commitments of Korzybski (of "the map is not the territory" fame) is that abstractions always leave out some facts. My concept of a thing is not the thing itself - the map is not the territory. That consciousness of abstraction entails a consciousness of ignorance.

When he had eliminated the impossible, whatever remained, however low its prior, must be true.

Eliminated by his calculations, with his priors, with his abstractions. What's the probability that those are wrong? What's the probability that he hadn't taken everything into account? And then, what's the chance that he hadn't been thorough enough in his enumeration of "whatever remained"?

Jaynes has a nice example of rejecting "whatever remained", by putting a something else theory into the analysis, and assigning some small probability to it.

Also, like Korzybski, Jaynes encourages a consciousness of abstraction by conditioning all probabilities on background knowledge I, as in P(X | a1,a2,......, I). There's my background knowledge I, staring back at me. What if it's incorrect?

So there are two main failures in these proof by contradiction scenarios. The first is to fail to include a valid alternative. The second is that your I, your model and assumptions, suck. They are wrong, or worse, not even wrong.
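The Jaynes move might be sketched like this (a made-up illustration, not his actual worked example; the hypothesis names, priors, and likelihoods are invented): when the data turn out to be nearly impossible under every named hypothesis, a small "something else" prior absorbs the posterior mass instead of forcing a confident wrong conclusion.

```python
# Bayes' rule over a dict of hypotheses, including an explicit
# "something else" catch-all with a small prior.

def posterior(priors, likelihoods):
    """Posterior probabilities from priors and likelihoods."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(joint.values())
    return {h: joint[h] / z for h in joint}

priors = {"emergence": 0.50, "panpsychism": 0.49, "something_else": 0.01}

# Suppose the evidence is nearly impossible under both named
# hypotheses; under the vague catch-all we can only assign a
# middling likelihood.
likelihoods = {"emergence": 1e-6, "panpsychism": 1e-6,
               "something_else": 0.1}

post = posterior(priors, likelihoods)
# The 1% catch-all now dominates: "whatever remained" was not the
# last named hypothesis but the space we hadn't enumerated.
```

The exact numbers are arbitrary; the structural point is that without the catch-all, the normalization would have forced nearly all the mass onto whichever named hypothesis was merely the least impossible.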

Comment author: TheAncientGeek 25 June 2014 02:25:07PM *  1 point

Philosophers aren't actually ignorant of computational theories of mind. Some of them reject CTM because it seems to have no more ability to address qualia/hard-problem issues than materialism (in fact, one can robustly argue that computationalism doesn't add anything to materialism in terms of powers or properties, and that CTM is therefore less able to explain qualia than straight materialism).

So, before LW starts shouting about the stupidity of philosophers, LW needs to say something about the Hard Problem.

At the moment there isn't even a consensus.

Eta: having re-read Fodor's review, I notice there are frequent references to hard-problem issues, qualia, conscious experience, etc. I am not sure whether Eli thinks they're unimportant, or thinks the CTM explains them, or what.

panpsychism is bunk.

Panpsychism is the least defensible of a set of related concepts.

Comment author: Sophronius 05 July 2014 10:34:14PM 0 points

If you can notice when you're confused, how do you notice when you're ignorant?

I actually have a specific feeling associated with everything clicking together. If I don't have that feeling, my model does not perfectly explain everything, which means there's something I'm not considering. In that case, I go looking for alternative hypotheses.