
What is Evidence?

55 Post author: Eliezer_Yudkowsky 22 September 2007 06:43AM

"The sentence 'snow is white' is true if and only if snow is white."
        —Alfred Tarski
"To say of what is, that it is, or of what is not, that it is not, is true."
        —Aristotle, Metaphysics IV

If these two quotes don't seem like a sufficient definition of "truth", read this.  Today I'm going to talk about "evidence".  (I also intend to discuss beliefs-of-fact, not emotions or morality, as distinguished here.)

Walking along the street, your shoelaces come untied.  Shortly thereafter, for some odd reason, you start believing your shoelaces are untied.  Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace.  There is a sequence of events, a chain of cause and effect, within the world and your brain, by which you end up believing what you believe.  The final outcome of the process is a state of mind which mirrors the state of your actual shoelaces.

What is evidence?  It is an event entangled, by links of cause and effect, with whatever you want to know about.  If the target of your inquiry is your shoelaces, for example, then the light entering your pupils is evidence entangled with your shoelaces.  This should not be confused with the technical sense of "entanglement" used in physics—here I'm just talking about "entanglement" in the sense of two things that end up in correlated states because of the links of cause and effect between them.

Not every influence creates the kind of "entanglement" required for evidence.  It's no help to have a machine that beeps when you enter winning lottery numbers, if the machine also beeps when you enter losing lottery numbers.  The light reflected from your shoes would not be useful evidence about your shoelaces, if the photons ended up in the same physical state whether your shoelaces were tied or untied.

To say it abstractly:  For an event to be evidence about a target of inquiry, it has to happen differently in a way that's entangled with the different possible states of the target.  (To say it technically:  There has to be Shannon mutual information between the evidential event and the target of inquiry, relative to your current state of uncertainty about both of them.)
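The mutual-information condition can be sketched in a few lines of code. The joint distributions below describe a hypothetical beeping lottery machine with made-up probabilities; they are illustrative numbers, not anything from the post:

```python
import math

def mutual_information(joint):
    """Shannon mutual information I(E;T) in bits, computed from a joint
    distribution given as a dict {(event, target): probability}."""
    # Marginal distributions of the evidential event E and the target T.
    p_e, p_t = {}, {}
    for (e, t), p in joint.items():
        p_e[e] = p_e.get(e, 0.0) + p
        p_t[t] = p_t.get(t, 0.0) + p
    return sum(p * math.log2(p / (p_e[e] * p_t[t]))
               for (e, t), p in joint.items() if p > 0)

# A machine that beeps for winning AND losing numbers carries no
# information about the target "did I win?".
useless = {("beep", "win"): 0.01, ("beep", "lose"): 0.99}

# A machine that beeps only on winning numbers is perfectly entangled
# with the target.
useful = {("beep", "win"): 0.01, ("silent", "lose"): 0.99}

print(mutual_information(useless))  # 0.0 -- no entanglement, no evidence
print(mutual_information(useful))   # ~0.08 bits -- the beep is evidence
```

The always-beeping machine yields exactly zero mutual information, which is the formal version of "it's no help to have a machine that beeps for both winning and losing numbers."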

Entanglement can be contagious when processed correctly, which is why you need eyes and a brain.  If photons reflect off your shoelaces and hit a rock, the rock won't change much.  The rock won't reflect the shoelaces in any helpful way; it won't be detectably different depending on whether your shoelaces were tied or untied.  This is why rocks are not useful witnesses in court.  A photographic film will contract shoelace-entanglement from the incoming photons, so that the photo can itself act as evidence.  If your eyes and brain work correctly, you will become tangled up with your own shoelaces.

This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise.  If your retina ended up in the same state regardless of what light entered it, you would be blind.  Some belief systems, in a rather obvious trick to reinforce themselves, say that certain beliefs are only really worthwhile if you believe them unconditionally— no matter what you see, no matter what you think.  Your brain is supposed to end up in the same state regardless.  Hence the phrase, "blind faith".  If what you believe doesn't depend on what you see, you've been blinded as effectively as by poking out your eyeballs.

If your eyes and brain work correctly, your beliefs will end up entangled with the facts.  Rational thought produces beliefs which are themselves evidence.

If your tongue speaks truly, your rational beliefs, which are themselves evidence, can act as evidence for someone else.  Entanglement can be transmitted through chains of cause and effect—and if you speak, and another hears, that too is cause and effect.  When you say "My shoelaces are untied" over a cellphone, you're sharing your entanglement with your shoelaces with a friend.

Therefore rational beliefs are contagious, among honest folk who believe each other to be honest.  And it's why a claim that your beliefs are not contagious—that you believe for private reasons which are not transmissible—is so suspicious.  If your beliefs are entangled with reality, they should be contagious among honest folk.

If your model of reality suggests that the outputs of your thought processes should not be contagious to others, then your model says that your beliefs are not themselves evidence, meaning they are not entangled with reality.  You should apply a reflective correction, and stop believing.

Indeed, if you feel, on a gut level, what this all means, you will automatically stop believing.  Because "my belief is not entangled with reality" means "my belief is not accurate".  As soon as you stop believing "'snow is white' is true", you should (automatically!) stop believing "snow is white", or something is very wrong.

So go ahead and explain why the kind of thought processes you use systematically produce beliefs that mirror reality.  Explain why you think you're rational.  Why you think that, using thought processes like the ones you use, minds will end up believing "snow is white" if and only if snow is white.  If you don't believe that the outputs of your thought processes are entangled with reality, why do you believe the outputs of your thought processes?  It's the same thing, or it should be.

 

Part of the sequence Map and Territory

Next post: "How Much Evidence Does It Take?"

Previous post: "Why truth? And..."

Comments (42)

Comment author: Gray_Area 22 September 2007 09:15:37AM 1 point [-]

Why not just say e is evidence for X if P(X) is not equal to P(X|e)?

Incidentally, I don't really see the difference between probabilistic dependence (as above) and entanglement. Entanglement is dependence in the quantum setting.

Comment author: Larks 06 October 2010 07:14:49AM 6 points [-]

Trivially, because P(X|e) could be less than P(X).
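A toy numeric illustration of this point (the single-card example is mine, not from the thread):

```python
from fractions import Fraction

# Draw one card from a standard 52-card deck.
# Let X = "the card is the ace of spades" and e = "the card is red".
p_x = Fraction(1, 52)          # P(X)
p_x_given_e = Fraction(0, 26)  # P(X | e): no red card is the ace of spades

# e changes the probability of X, so it is evidence -- but it lowers
# P(X), so it is evidence *against* X, not for it.
print(p_x_given_e < p_x)  # True
```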

Comment author: Will_Sawin 09 July 2011 08:57:56PM 3 points [-]

Quantum wave amplitudes behave in some ways like probabilities and in other ways unlike probabilities. Because of this, some concepts have analogues, while others don't.

But no concepts are exactly equivalent. For example, evidence isn't integrally linked to complex numbers, while entanglement is.

Comment author: ec429 14 September 2011 06:09:45PM 0 points [-]

Nonetheless, it is instructive (imho) to consider how (assigned) probability is a property of the observer, and not an inherent property of the system. If a qubit is (|0> + |1>)/sqrt(2), and I measure it and observe 0, then I'm entangled with it so relative to me it's now |0>. But what's really happened is that I became (|observed 0> + |observed 1>)/sqrt(2), or rather, that the whole system became (|0,observed 0> + |1,observed 1>)/sqrt(2). This is closely analogous to the Law of Conservation of Probability; if you take Expectations conditional on the observation, then take Expectation of the whole thing, you get the original expectation back. This is because observing the system doesn't change the system, it just changes you. This is obvious in Bayesian probability in the classical-mechanics world; the only reason it doesn't seem obvious in the quantum realm is that we've been told over and over that "observing a quantum system changes it".

Quite honestly, I don't see how a Bayesian can possibly be a Copenhagenist. Quantum probability is Bayesian probability, because quantum entanglement is just the territory updating itself on an observation, in the same way that Bayesian 'evidence entanglement' is updating one's map on an observation.

Comment author: Will_Sawin 16 September 2011 03:49:48AM 2 points [-]

Classical probability preserves amplitude, quantum preserves |amplitude|^2.

They're different things, and they could, potentially, be even more different.

Comment author: ec429 16 September 2011 03:58:19AM 1 point [-]

Um, but isn't that just a convention? Why should we treat the "amplitude" of a classical probability as being the probability?

Does the problem have something to do with the extra directionality quantum probabilities have by virtue of the amplitude being in C? (so that |0> and (-1*|0>) can cancel each other out)

Comment author: Will_Sawin 21 September 2011 04:46:46AM 1 point [-]

Classical probability transformations preserve amplitude and quantum ones preserve |amplitude|^2. That's not a whole reason, but it's part of one.

Yes, that's part of the difference. Quantum transformations are linear in a two-dimensional wave amplitude but preserve a 1-dimensional |amplitude|^2. Classical transformations are linear in one-dimensional probability and preserve 1-dimensional probability.
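A small sketch of the distinction being drawn here (the matrices and vectors are illustrative choices of mine): a stochastic matrix preserves the plain sum of a probability vector, while a unitary such as the Hadamard gate preserves the sum of squared amplitude magnitudes.

```python
import math

def apply(matrix, vec):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

# Classical: a column-stochastic transition matrix acting on a
# probability vector preserves total probability (the 1-norm).
stochastic = [[0.9, 0.3],
              [0.1, 0.7]]
p = [0.5, 0.5]
p2 = apply(stochastic, p)
print(sum(p2))  # ~1.0 -- the plain sum of "amplitude" is preserved

# Quantum: the Hadamard gate (unitary) acting on complex wave
# amplitudes preserves the sum of |amplitude|^2 (the 2-norm).
h = 1 / math.sqrt(2)
hadamard = [[h, h],
            [h, -h]]
a = [complex(0.6, 0.0), complex(0.0, 0.8)]  # |a|^2 sums to 1
a2 = apply(hadamard, a)
print(sum(abs(x) ** 2 for x in a2))  # ~1.0 -- |amplitude|^2 is preserved
# ...but the plain sum of the quantum amplitudes is NOT preserved.
```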

Comment author: ec429 21 September 2011 12:46:19PM 0 points [-]

Ah, I get it now, thanks!

(Copenhagen is still wrong though ;)

Comment author: potato 18 July 2011 11:16:56PM 4 points [-]

"This should not be confused with the technical sense of "entanglement" used in physics - here I'm just talking about "entanglement" in the sense of two things that end up in correlated states because of the links of cause and effect between them."

That's literally in the third paragraph.

I think you mean: if P(x) < P(x|e), then e is evidence for x. That is a good definition of evidence, but it doesn't function on the same level as Yudkowsky's above. Yudkowsky is explaining not just what function evidence has in truth-finding, but also how evidence is built into a physical system, e.g., a camera, a human, or another entanglement device. The Bayesian definition of evidence you gave tells us what evidence is, but it doesn't tell us how evidence works, which Yudkowsky's does.

Comment author: alex_zag_al 16 September 2012 01:44:18AM *  0 points [-]

That definition does not always coincide with what is described in the article; something can be evidence even if P(X|e) = P(X).

Imagine that two cards from a shuffled deck are placed face-down on a table, one on the left and one on the right. Omega has promised to put a monument on the moon iff they are the same color.

Omega looks at the left card, and then the right, and then disappears in a puff of smoke.

What he does when he's out of sight is entangled with the identity of the card on the right. Change the card to one of a different color and, all else being equal, Omega's action changes.

But, if you flip over the card on the right and see that it's red, that doesn't change the degree to which you expect to see the monument when you look through your telescope. P(monument|right card is red) = P(monument) = 25/51

It does change your conditional beliefs, though, such as what the world would be like if the left card turned out to also be red: P(monument|left is red & right is red) > P(monument|left is red)
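The comment's numbers can be checked by direct enumeration over card-color counts (the `prob` helper below is my own illustration, not from the thread):

```python
from fractions import Fraction

# 26 red (R) and 26 black (B) cards; count ordered (left, right) draws
# by color, and condition on observations.
def prob(event, condition=lambda l, r: True):
    num = den = 0
    for left in "RB":
        for right in "RB":
            # Number of ordered two-card draws with these colors:
            # 26*25 if the colors match, 26*26 if they differ.
            ways = 26 * (25 if left == right else 26)
            if condition(left, right):
                den += ways
                if event(left, right):
                    num += ways
    return Fraction(num, den)

same = lambda l, r: l == r  # monument iff the colors match

print(prob(same))                                      # 25/51
print(prob(same, lambda l, r: r == "R"))               # 25/51 -- unchanged
print(prob(same, lambda l, r: l == "R" and r == "R"))  # 1
```

Seeing the right card alone leaves P(monument) at 25/51, yet seeing both cards pins it down completely, exactly as the comment says.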

Comment author: moshez 31 December 2012 11:40:50AM 0 points [-]

Of course e can be evidence even if P(X|e)=P(X) -- it just cannot be evidence for X. It can be evidence for Y if P(Y|e)>P(Y), and this is exactly the case you describe. If Y is "there is a monument and left is red or there is no monument and left is black", then e is (infinite, if Omega is truthful with probability 1) evidence for Y, even though it is 0 evidence for X.

Similarly, you watching your shoelace untied is zero evidence for my shoelaces...

Comment author: James_Bach 22 September 2007 09:47:29AM 2 points [-]

Hi Eliezer,

I like the word entanglement, because it's a messy concept. Reality, whatever else it might be, is messy. That's why statements like the preceding sentence can't ever be completely true. The messiness makes it hard to talk about anything real in any absolutely definitive sort of way.

I can be definitive about artificial constructs in an artificial world, yes. Hence, mathematics. But when you or I try to capture the real world with that comforting clarity, we are doomed. Well, mostly doomed. 85.27% doomed, plus or minus an unknown set of unknowns.

That's the problem I have with your otherwise (as usual) thought provoking post: YES, our perceptions are entangled with the state of the world and that often influences our beliefs which then may entangle our utterances and therefore eventually entangle other people's beliefs. BUT what is the nature of that entanglement? You can't know for sure. What specifically are the beliefs that you intend to refer to? You can't know for sure.

The factor I expected to see in your essay, but did not, is *interpretation based on mental models*. There are many models I might have in my mind that could influence what counts as evidence.

You wrote: "For an event to be evidence about a target of inquiry, it has to happen differently in a way that's entangled with the different possible states of the target."

If we put the missing material about interpretation in there this might read:

"For me to consider an event to be evidence about a target of inquiry, I must first possess or construct a model of that event and that target and also a model of the world that contains and relates the event and target with each other. Then, for the event to be evidence CORROBORATING a particular theory about the target, I must imagine plausible alternative events that would CONTRADICT that theory."

Unfortunately, our models can be wrong, and *are* often wrong in interesting ways. So, we can satisfy your version of the statement, or my version, and still be counting as evidence things that may be no evidence at all. Example: "I was about to go for a car ride and a black cat crossed my path, which I interpret as a portent of evil, so I went back into my house. The black cat was evidence of evil in that particular situation because a black cat crossing my path is a rare event; it is possible for the cat not to have crossed my path; and in my culture, which is the collective product of successful experience staying alive and procreating, it is considered a portent of evil for a black cat to cross one's path. Had a black cat not crossed my path, I would consider that evidence (weak evidence) that I was not about to experience misfortune."

Comment author: g 22 September 2007 08:11:06PM 4 points [-]

Seems to me that you can in principle rationally believe (1) that your beliefs are entangled with reality but (2) that you don't have any more effective way of persuading others than to say "see, I believe this". Specifically, imagine that every now and then you find yourself acquiring a belief in a particular, weird, internal way (say, you have the strong impression that God speaks to you, accompanied by a mysterious smell of apricots), and that several times this has happened and you've checked out the belief and it's turned out to be true. (And you've never checked it and found it to be false, and the instances you checked were surprising, etc.)

I think you'd be entitled, in this situation, to believe that your weirdly acquired beliefs are entangled with reality; but I can't see any way you could be very convincing to someone who didn't know the history (barring further such episodes in the future, of which there is no guarantee); and even in the best-possible case where whenever this thing happens to you you immediately tell someone else of the belief you've acquired and get them to check it, it could be very difficult for them to rule out hoaxing well enough to make them trust you.

Now, the standard case of incommunicably grounded beliefs -- which I suspect Eliezer had in mind here -- is of some sorts of religious belief; and they share at least some features with my semi-silly example. They generally lack the really important one (namely, repeated testing), and that's a big strike against them; but the big strike is the poor quality of the evidence, not its incommunicability as such.

So yes, incommunicability is suspicious, and a warning sign, but I think Eliezer goes too far when he says that a model that says your beliefs aren't evidence for others is ipso facto saying that you don't yourself have reason to believe. Unless he really means literally absolutely no evidence at all for others, but I don't think anyone really believes *that*.

Comment author: DanielLC 05 September 2011 07:19:34AM 2 points [-]

You can tell them that your impressions have previously always been correct and surprising. To the extent that they trust you, the evidence will be just as good for them as it was for you.

Comment author: gjm 05 September 2011 04:48:46PM 1 point [-]

The extent to which they trust you may not be very great, especially given that what you're telling them is that sometimes God speaks to you with an aura of apricots and reveals surprising but mundane truths. In any case, telling them this doesn't make your evidence any less incommunicable, except in so far as it makes all evidence communicable.

(Note: old "g" = newer "gjm".)

Comment author: DanielLC 05 September 2011 11:28:32PM 0 points [-]

In this case, they'll trust you less than if you told them that your shoelaces were untied, but it's not fundamentally different. Your shoelaces being untied is only communicable in the sense that you can tell someone, unless you count telling them to look at your shoes, but that doesn't seem to be what this is talking about.

Unless I misunderstood Eliezer, he seemed to be saying that all evidence is communicable in exactly this way.

Comment author: Cihan_Baran 22 September 2007 11:09:13PM 1 point [-]

I don't know if it is just semantics but it seems to me that you are conflating evidence and our perception of that evidence, since you write:

"What is evidence? It is an event entangled, by links of cause and effect... If the target of your inquiry is your shoelaces, for example, then the light entering your pupils is evidence entangled with your shoelaces." (Emphasis mine)

Take the following thought experiment. Suppose Alan has untied shoelaces that he can see. Suppose that also Alan's shoelaces produce a barely audible sound when they are untied and suppose that Barbara can and does hear this sound, while Alan can't and doesn't.

Now if I interpret you correctly, your definition of evidence amounts to saying that Barbara and Alan have different evidence with regards to Alan's untied shoelaces. However, it seems more intuitive to say that there is a single state of things, Alan's untied shoelaces, that constitutes the only evidence, which is perceived differently by Barbara and Alan.

You also think that evidence is a type of event - of course, this would be true if evidence really was someone's perception of some state of affairs that led them to form true beliefs. But I believe that there are many types of evidence that simply are not events. What about mathematical evidence for some belief? Godel's incompleteness theorem is conclusive evidence for the fact that you can't derive all the true theorems of mathematics from a formal system. (Please don't boil me too much if I am like not totally correct.) Nevertheless, that theorem is not an event in time - it doesn't cause anything. Metaphorically, we might say a certain mathematical theorem might "cause" another one - or one theorem might be the immediate "consequence" of the other - but mathematical entailment relations are different from natural causation and all this talk is just metaphorical.

Lastly, you write that:

"Some belief systems, in a rather obvious trick to reinforce themselves, say that certain beliefs are only really worthwhile if you believe them unconditionally - no matter what you see, no matter what you think. Your brain is supposed to end up in the same state regardless. Hence the phrase, "blind faith". If what you believe doesn't depend on what you see, you've been blinded as effectively as by poking out your eyeballs."

However, I can think of some instances in which perhaps "blind faith" is warranted. For instance, I can not conceive of a situation that would make 2+2 = 4 false. Perhaps for that reason, my belief in 2+2=4 is unconditional.

Comment author: Ivan_Tishchenko 28 March 2010 09:07:34AM *  1 point [-]

However, I can think of some instances in which perhaps "blind faith" is warranted. For instance, I can not conceive of a situation that would make 2+2 = 4 false. Perhaps for that reason, my belief in 2+2=4 is unconditional

Yes, it is conditional. For example, I guess, if you had put two stones next to another two, then counted and found that there are five stones in total, that would be a proof that 2+2 is not equal to 4. This is how your belief "2+2=4" could be falsified.

Comment author: Jack 28 March 2010 09:41:16AM *  3 points [-]

I know this is Eliezer's line but it still looks like nonsense to me. This experience would be evidence stones have a tendency to spontaneously appear when four stones are put next to each other.

Comment author: dylgramac 16 December 2010 06:44:01PM 1 point [-]

I have a simpler reason that the belief 2+2 = 4 is not blind. When he says he has blind faith because "I can not conceive of a situation that would make 2+2 = 4 false." it is not blind because he is trying to find an alternative rather than entirely avoiding questioning his belief.

Comment author: Doug_S. 23 September 2007 04:41:34PM 0 points [-]

Joke counterargument:

Two cups (of sugar) + two cups (of water) = 2 cups (of sugar water)

Therefore, 2 + 2 = 2. ;)

Comment author: Benevolence 08 September 2011 09:35:35AM 0 points [-]

To be very anal and nit-picky with your joke (because I feel like it):

You're mixing equal volumes with inconsistent densities (and thus mass) and trying to compute a final volume. Either way you'd get more than 2 cups.

Back on topic:

I have a very simple definition of evidence.

Anything that modifies my mental probabilities about certain beliefs I hold to be true or false is considered evidence by me.

Whether or not the evidence is weak, strong, or even reliable in the first place is irrelevant if we're trying to define what evidence is.

I disagree with evidence being an event. It is rather an attribute. The event is the observation of evidence. The event (the observation: hearing, seeing, smelling, whatever) is only useful for determining if the evidence (attribute) is reliable (true).

The evidence itself does not change. It is a static thing. If you see different evidence next time, that's different evidence (a different static).

I DO agree with the entanglement, though. Evidence is entangled with both your map and (hopefully) the territory. After all, the whole point of evidence is to modify your map to better fit the territory. The nature of its entanglement is simple, though. As stated above, it simply shifts your probabilities (confidences in beliefs).

First time poster, noob in rationality so have some mercy folks ;)

Comment author: Ben_Jones 20 January 2008 12:22:44PM 2 points [-]

Is there any decent literature on the extent to which the fact of knowing that my shoelaces are untied is a real property of the universe? Clearly it has measurable consequences - it will result in a predictable action taking place with high probability. Saying 'I predict that someone will tie his shoelaces when he sees they're undone' is based not on the shoelaces being untied, nor on the photons bouncing, but on this abstract concept of them knowing. Is there a mathematical basis for stating that the universe has measurably changed in a nonrandom way once those photons' effects are analysed? I'd love to read more on this.

Also (closely related question), I know that overall entropy would increase in the whole system, but does this entanglement represent a small local increase in order?

Comment author: beriukay 21 March 2010 12:45:21PM 6 points [-]

"belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise"

Such a great way to put it! I wish I had read this page a few years ago, when I was arguing with my dad about religion. I wasn't able to put this thought coherently, though in retrospect I believe I was trying to get there. I ended up posing a hypothetical situation: advanced aliens visit, tell him that his beliefs are wrong, and explain why. He disappointed me with his answer: that he would like to believe he is strong enough in his faith to ignore the aliens. That is when I realized it would be fruitless to attempt to persuade him away from religion.

Comment author: RickJS 24 May 2010 02:29:46AM -1 points [-]

“If you don't believe that the outputs of your thought processes are entangled with reality, why do you believe the outputs of your thought processes? ”

I don’t. Well, not capital-B Believe. Some few of them I will give 40 or even 60 decibels.

But I’m clear that my brain lies to me. Even my visual processor lies. (Have you ever been looking for your keys, looked right at them, and gone on looking?)

I hold my beliefs loosely. I’m coachable. Maybe even gullible. You can get me to believe some untruth, but I’ll let go of that easily when evidence appears.
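The decibel figures above follow the convention (used by E. T. Jaynes, among others) of measuring evidence as ten times the base-10 logarithm of the odds. A quick sketch of the conversion (the helper name is mine):

```python
import math

def decibels_to_probability(db):
    """Convert evidence in decibels -- 10 * log10 of the odds ratio --
    back into a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

print(decibels_to_probability(40))  # ~0.9999  (odds of 10,000 : 1)
print(decibels_to_probability(60))  # ~0.999999 (odds of 1,000,000 : 1)
```

So "40 or even 60 decibels" corresponds to quite strong, but still revisable, confidence.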

Comment author: simplicio 05 August 2010 12:24:37AM 4 points [-]

I think I would nominate this as the most important post on LessWrong. I keep referring people to it.

Comment author: Arandur 04 May 2011 03:36:45AM -3 points [-]

Great article, and it helps me explain to my friend that my faith is not, in fact, blind.

One problem: communication via the spirit from God to an individual is an epiphenomenon. So it can't be proven externally? That's one instance of a rational belief that isn't contagious, though I suppose that's why there are people who doubt the existence of epiphenomena altogether.

Comment author: lessdazed 27 July 2011 10:28:16PM *  1 point [-]

Wikipedia says:

In philosophy of mind, epiphenomenalism is the view that mental phenomena are epiphenomena in that they can be caused by physical phenomena, but cannot cause physical phenomena.

I'm not sure what to call non-physical things changing the physical world, but it seems the communication you describe, if possible, would be non-physical to physical, right?

Comment author: Arandur 27 July 2011 10:52:49PM 0 points [-]

You are, of course, correct. :3 I used the wrong term.

Comment author: lessdazed 27 July 2011 10:58:12PM 0 points [-]

Is that because there isn't a right term? I don't know it if there is.

Comment author: Arandur 28 July 2011 02:27:41AM 0 points [-]

Perhaps there ought to be. Let's invent one!

Comment author: p4wnc6 19 August 2011 12:54:14PM *  2 points [-]

Communication is a physical process. Unless you can put forward a coherent, testable model for non-physical communication, then talking about communication from a non-physical entity to a physical entity has no semantic meaning. If no experiment can be performed to distinguish two hypotheses (e.g. that there is or is not such a thing as an epiphenomenon) then that thing is irrelevant given that human minds are purely physical objects and human thought, as far as all evidence is concerned, obeys our best models of computation (Church-Turing thesis, etc.).

Epiphenomenal hypotheses are still required to pass Occam's razor. If there is a simpler explanation (e.g. purely physical) that accounts for the evidence, then intellectual integrity demands you take that view. Positing epiphenomena is no different than positing unicorns, unless you have quantifiable evidence for the phenomena and hence they would not be epiphenomenal.

Comment author: Ab3 15 December 2011 08:50:46PM 4 points [-]

Great article, I have only this one comment:

"If your beliefs are entangled with reality, they should be contagious among honest folk."

Haven't true and false beliefs both proven to be contagious among honest folk? Just as we should not use a machine that beeps for all numbers as evidence for winning lottery numbers, we should not use whether or not a belief is contagious as evidence of its truth.

Comment author: dlthomas 15 December 2011 09:09:42PM 0 points [-]

It depends on how likely the respective explanations are.

Comment author: Ab3 15 December 2011 09:31:30PM 1 point [-]

I think it depends on that, and only that, and should be completely disconnected from any social criteria such as "being contagious."

Also, Eliezer writes, "If your model of reality suggests that the outputs of your thought processes should not be contagious to others, then your model says that your beliefs are not themselves evidence, meaning they are not entangled with reality."

This seems false. Should LW thinkers take it as a problem that our methods are usually completely lost on, for example, fundamentalist scientologists? In fact, I don't think it's a stretch to claim that most people do not subscribe to LW methods, does that suggest a problem with LW methods? Do LW methods fail the test of being contagious and therefore fail the test of being reliable methods for acquiring evidence?

Comment author: thomblake 19 January 2012 06:17:28PM 3 points [-]

I don't think that Eliezer suggested using a belief's contagiousness as strong evidence of its truth. Rather, a belief's lack of contagiousness is strong evidence of its untruth.

Comment author: royf 28 May 2012 05:03:55AM *  3 points [-]

If your beliefs are entangled with reality, they should be contagious among honest folk.
[...]
If your model of reality suggests that the outputs of your thought processes should not be contagious to others, then your model says that your beliefs are not themselves evidence, meaning they are not entangled with reality.

No, correct beliefs should only be contagious among honest folk who believe each other to be rational and honest. If I make the claim that The FSM is dictating these words to me, you would probably think me lunatic or liar. But if I truly can correctly recognize when I have been Touched by His Noodly Appendage, then my beliefs are entangled with reality but, understandably, not contagious. Furthermore, it would be perfectly rational for me to believe this revelation and at the same time not to consider it evidence for others. The point is that some beliefs, certainly the more extraordinary of them, should not be contagious, except through evidence as raw and unprocessed as possible.

Also, entanglement is necessary but not sufficient for correct beliefs. The fact that my beliefs contain information about the world is not enough for them to be correct. For example, if I misread the photon pattern, I could think that my shoelaces are tied when they are not, and untied when they are tied. This still has the same amount of entanglement, the same amount of information, yet the beliefs are incorrect.

Comment author: aceofspades 27 June 2012 07:13:14PM *  -3 points [-]

I'm not sure that this terminology about entanglement and so forth actually helps understanding. Reading this post is unlikely to cause me to win more bets (make better predictions).

Comment author: [deleted] 06 August 2012 08:02:48PM 1 point [-]

I'm a newcomer working through the sequences for the first time, so I apologize if this has been more fully discussed or explained elsewhere, but I've hit a sticking point here. I was in agreement up until:

Therefore rational beliefs are contagious, among honest folk who believe each other to be honest. And it's why a claim that your beliefs are not contagious—that you believe for private reasons which are not transmissible—is so suspicious. If your beliefs are entangled with reality, they should be contagious among honest folk.

This works very well for claims like 'snow is white' but not so well for abstract concepts. In order for the evidence-based belief to transmit well, the listener must have definitions of 'snow' and 'white' that are compatible enough with the speaker's definitions for the belief to fit logically into their frame of reference - their map of the territory, if you will. Take out 'snow' and 'white' and plug in some more abstract concepts there and you'll see how quickly divergence can occur.

Two people may observe the same objective evidence and use it to reach different conclusions because their frames of reference, definitions, and prior understandings differ. Therefore, the section above doesn't seem to hold true for any beliefs bar the most simplistic and concrete.

That is, of course, unless the operative word in the quoted paragraph is claim, since anyone who outright states their beliefs are intransmissible is probably engaging in self-deception at one level or another. That seems something of an overly literal interpretation of the piece, though. Am I missing something?

Comment author: Nornagest 06 August 2012 08:47:34PM *  3 points [-]

It's definitely harder to reconcile two sets of conflicting beliefs when you're dealing with abstractions -- maybe even intractable in some cases -- but I don't think it's impossible in principle. In order for an abstraction to be meaningful, it has to say something about the sensory world; that is, it has to be part of a network of beliefs grounded in sensory evidence. That has straightforward consequences when you're dealing with physical evidence for an abstraction; when dealing with abstract evidence, though, you need to reconstruct what that evidence means in terms of experience in order to fit it into a new set of conceptual priors. We do similar things all the time, although we might not realize we're doing them: knowing that several languages conflate parts of the color space that English describes with "green" and "blue", for example, might help you deal with a machine translation saying that grass is blue.

This only becomes problematic when dealing with conceptually isolated abstractions. Smells are a good example: it's next to impossible to describe a scent well enough for it to then be recognizable without prior experience of it. Similarly, descriptions of high-level meditation often include experiences which aren't easily transmissible to non-practitioners -- not because of some ill-defined privileges attached to personal gnosis, but because they're grounded in very unusual mental states.

Comment author: [deleted] 06 August 2012 09:09:01PM 0 points [-]

Thank you for your reply! It's certainly helped to clarify the matter. I wonder now if a language used in a hypothetical culture where people placed a much higher value on sense of smell or meditative states might have a far broader and more detailed vocabulary to describe them, resolving the problems with reconstructing the evidence. It's almost Sapir-Whorf - regardless of whether or not language influences thought, it certainly influences the transmission of thought.

I think on reflection that most of my other objections relate to cases where the evidence isn't in dispute but the conclusions drawn from it are (see: much of politics!). Those could, in principle, be resolved with a proper discussion of priors and a focus on the actual objective evidence as opposed to simply the parts of it that fit with one's chosen argument. That people in most cases don't (and don't want to) reconcile the beliefs, and don't view the situation as more complex than 'cheering for the right team', is a fault in their thinking, not the principle itself.

Comment author: jetm 07 July 2013 10:14:53PM 0 points [-]

Um... "There has to be Shannon mutual information between the evidential event and the target of inquiry"?