MagnetoHydroDynamics comments on Rationality Quotes August 2013 - Less Wrong
If you make a numerical statement of your confidence -- P(A) = X, 0 < X < 1 -- scoring that belief is a simple matter of observing the outcome and taking the binary logarithm of the probability you assigned to what actually happened. With S the score (the log of the prediction or of its converse, depending on what came true): if A, then S = log2(X); if ¬A, then S = log2(1 - X).
The lower the magnitude of the resulting negative real, the better you fared.
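A minimal sketch of that scoring rule in Python (the function name is my own, not from the comment):

```python
import math

def log_score(p_a: float, a_happened: bool) -> float:
    """Binary log score for a stated confidence P(A) = p_a, with 0 < p_a < 1.

    Returns log2 of the probability assigned to what actually occurred;
    the result is always negative, and closer to zero is better.
    """
    assert 0.0 < p_a < 1.0
    return math.log2(p_a) if a_happened else math.log2(1.0 - p_a)

# A confident correct prediction scores close to zero...
print(log_score(0.9, True))   # ≈ -0.152
# ...while the same confidence on a miss is punished heavily.
print(log_score(0.9, False))  # ≈ -3.32
```

Note that the rule is only defined on the open interval (0, 1): a stated probability of exactly 0 or 1 would score negative infinity if it came out wrong.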
That allows a prediction/confidence/belief to be measured. How do you total a person?
Simple: under conditions of dubious ethics and physical possibility, you turn their internal world model into a formal Bayesian network, and for every possible physical and mathematical observation and outcome, do the above calculation. Sum, print, idle.
It's impossible in practice, but the formal definition is only, like, four lines.
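The "sum, print, idle" step is easy to sketch if you skip the impossible enumeration and just total a finite record of scored predictions (a toy illustration, not the full construction):

```python
import math

def log_score(p: float, happened: bool) -> float:
    """Log2 of the probability assigned to what actually happened."""
    return math.log2(p if happened else 1.0 - p)

def total_score(predictions) -> float:
    """predictions: iterable of (stated confidence in A, whether A occurred)."""
    return sum(log_score(p, outcome) for p, outcome in predictions)

record = [(0.8, True), (0.6, False), (0.99, True)]
print(total_score(record))  # sum of the three individual log scores
```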
How do you measure someone whose internal world model is not isomorphic to one formal Bayesian network (for example, someone who is completely certain of something)? Should it be the case that someone whose world model contains fewer possible observations has a major advantage in being closer to the truth?
Note also that a perfect Bayesian will score lower than some gamblers using this scheme. Betting everything on black does better than a fair distribution almost half the time.
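That claim is easy to check by simulation. Here the gambler states P = 1 (the limiting case the parent's 0 < X < 1 condition excludes), scoring log2(1) = 0 when right and log2(0) = negative infinity when wrong, while the calibrated agent always scores log2(0.5) = -1 on a fair wheel:

```python
import random

random.seed(0)
trials = 10_000
gambler_wins = 0
for _ in range(trials):
    black = random.random() < 0.5              # fair black/red outcome
    calibrated = -1.0                          # log2(0.5), whatever happens
    gambler = 0.0 if black else float('-inf')  # log2(1) or log2(0)
    if gambler > calibrated:
        gambler_wins += 1
print(gambler_wins / trials)  # ≈ 0.5
```

So the all-on-black gambler outscores the calibrated agent on roughly half of the trials, and is infinitely worse off on the rest.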
I am not very certain that humans can actually have an internal belief model that isn't isomorphic to some Bayesian network. Anyone who proclaims to be absolutely certain, I suspect, is in fact not.
How do you account for people falling prey to things like the conjunction fallacy?
I don't think people simply miscalculate conjunctions. Everyone will tell you that HFFHF is less probable than H, HF, or even HFF. Errors only appear when the strings get long, the difference is small, and the strings are quite specially crafted. And with the scenarios, a more detailed scenario looks more plausibly like a product of deliberate reasoning; plus, the existence of one detailed scenario is information about the existence of other detailed scenarios leading to the same outcome (and it must be made clear in the question that we are not asking about the outcome, but about everything happening precisely as the scenario specifies).
On top of that, the meaning of the word "probable" in everyday context is somewhat different -- a proper study should ask people to actually make bets. All around, it's not clear why people make this mistake, but it is clear that it is not some fully general failure to account for conjunctions.
edit: actually, just read the Wikipedia article on the conjunction fallacy. When the question was asked as "how many people out of 100", nobody gave a wrong answer. Which immediately implies that the understanding of "probable" was an issue, or some other cause, but not some general failure to apply conjunctions.
There have been studies that asked people to make bets. Here's an example. It makes no difference -- subjects still arrive at fallacious conclusions. That study also goes some way towards answering your concern about ambiguity in the question. The conjunction fallacy is a pretty robust phenomenon.
I've just read the example beyond its abstract. Typical psychology: the actual finding was that there were fewer errors with the bet (even though the expected winnings were tiny, and the sample sizes were small enough that the difference was only marginally significant); also, approximately half of the questions were answered correctly, and the high prevalence of the "conjunction fallacy" was attained by counting at least one error across many questions.
How is it a "robust phenomenon" if it is negated by using strings with a larger length difference in the heads-tails example, or by asking people to answer in the "N out of 100" format?
I am thinking that people have to learn reasoning to answer questions correctly, including questions about probability, and the feedback they receive from the world is fairly noisy. Consequently they learn it fairly badly, or mislearn it altogether, because more detailed accounts are more frequently the correct ones in their "training dataset" (which consists of detailed correct accounts of actual facts and fuzzy speculations).
edit: Let's say the notion that people just generally fail to account for conjunctions is something like Newtonian mechanics. In a hard science -- physics -- Newtonian mechanics was done for as a fundamental account of reality once conditions were found where it did not work. It didn't matter how "robust" it was. In a soft science -- psychology -- an approximate notion persists in spite of this, as if the question should be decided by some tug-of-war between experiments for and against the notion. If we did physics like this, we would never have moved beyond Newtonian mechanics.
Framing the problem in terms of frequencies mitigates a number of probabilistic fallacies, not just the conjunction fallacy. It also mitigates, for instance, base rate neglect. So whatever explanation you have for the difference between the probability and frequency framings shouldn't rely on peculiarities of the conjunction fallacy case. A plausible hypothesis is that presenting frequency information simply makes algorithmic calculation of the result easier, and so subjects are no longer reliant on fallible heuristics in order to arrive at the conclusion.
The claim of the heuristics and biases program is that the conjunction fallacy is a manifestation of the representativeness heuristic. One does not need to suppose that there is a misunderstanding about the word "probability" involved (if there is, how do you account for the betting experiments?). The difference in the frequency framing is not that it makes it clear what the experimenter means by "probability", it's that the ease of algorithmic reasoning in that case reduces reliance on the representativeness heuristic. Further evidence for this is that the fallacy is also mitigated if the question is framed in terms of single-case probabilities, but with a diagram clarifying the relationship between properties in the problem. If the effect were merely due to a misunderstanding about what is meant by "probability", why would there be a mitigation of the fallacy in this case? Does the diagram somehow make it clear what the experimenter means by "probability"?
In response to your Newtonian physics example, it's simply not true that scientists abandoned Newtonian mechanics as soon as they found conditions under which it appeared not to work. Rather, they tried to find alternative explanations that preserved Newtonian mechanics, such as positing the existence of Uranus to account for discrepancies in planetary orbits. It was only once there was a better theory available that Newtonian mechanics was abandoned. Is there currently a better account of probabilistic fallacies than that offered by the heuristics and biases program? And do you think that there is anything about the conjunction fallacy research that makes it impossible to fit the effect within the framework of the heuristics and biases program?
I'm not familiar with the effect of variable string length difference, and quick Googling isn't helping. If you could direct me to some research on this, I'd appreciate it.
Poor brain design.
Honestly, I could do way better if you gave me a millennium.
You know, at some point, whoever's still alive when that becomes not-a-joke needs to actually test this.
Because I'm just curious what a human-designed human would look like.
How likely do you believe it is that there exists a human who is absolutely certain of something?
Is this a testable assertion? How do you determine whether someone is, in fact, absolutely certain?
It's not unheard of for people to bet their lives on some belief of theirs.
That doesn't show that they're absolutely certain; it just shows that the expected value of the payoff outweighs the chance of them dying.
The real issue with this claim is that people don't actually model everything using probabilities, nor do they actually use Bayesian belief updating. However, the closest analogue would be people who will not change their beliefs in literally any circumstances, which is clearly false. (Definitely false if you're considering, e.g. surgery or cosmic rays; almost certainly false if you only include hypotheticals like cult leaders disbanding the cult or personally attacking the individual.)
Nope. "I'm certain that X is true now" is different from "I am certain that X is true and will be true forever and ever".
I am absolutely certain today is Friday. Ask me tomorrow whether my belief has changed.
In fact, unless you're insane, you probably already believe that tomorrow will not be Friday!
(That belief is underspecified -- "today" is a notion that varies independently; it doesn't point to a specific date. Today you believe that August 16th, 2013 is a Friday; tomorrow, you will presumably continue to believe that August 16th, 2013 was a Friday.)
Not exactly that but yes, there is the reference issue which makes this example less than totally convincing.
The main point still stands, though -- certainty of a belief and its time-invariance are different things.
I very much doubt that you are absolutely certain. There are a number of outlandish but not impossible worlds in which you could believe that it is Friday, yet it might not be Friday; something akin to the world of The Truman Show comes to mind.
Unless you believe that all such alternatives are impossible, in which case you may be absolutely certain, but incorrectly so.
Define "absolute certainty".
In the brain-in-the-vat scenario, which is not impossible, I cannot be certain of anything at all. So what?
I don't have to believe that the alternatives are impossible; I just have to be certain that the alternatives are not exemplified.
Is someone absolutely certain if they say that they cannot imagine any circumstances under which they might change their beliefs (or, alternately, can imagine only circumstances which they are absolutely certain will not happen)? It would seem to be a better definition, as it defines probability (and certainty) as a thing in the mind, rather than outside.
In this case, I would see no contradiction in declaring someone to be absolutely certain of their beliefs, though I would say (with non-absolute certainty) that they are incorrect. Someone who believes that the Earth is 6000 years old, for example, may not be swayed by any evidence short of the Christian god coming down and telling them otherwise, an event to which they may assign 0.0 probability (because they believe that it's impossible for their god to contradict himself, or something like that).
Further, I would exclude methods of changing someone's mind without using evidence (surgery or cosmic rays). I can't quite put it into words, but it seems like the fact that it isn't evidence and instead changes probabilities directly means that it doesn't so much affect beliefs as it replaces them.
Disagree. This would be a statement about their imagination, not about reality.
Also, people are not well calibrated on this sort of thing. People are especially poorly calibrated on this sort of thing in a social context, where others are considering their beliefs.
ETA: An example: While I haven't actually done this, I would expect that a significant fraction of religious people would reply to such a question by saying that they would never change their beliefs because of their absolute faith. I can't be bothered to do enough googling to find a specific interviewee about faith who then became an atheist, but I strongly suspect that some such people actually exist.
Yeah, fair enough.
You are correct. I am making my statements on the basis that probability is in the mind, and as such it is perfectly possible for someone to have a probability which is incorrect. I would distinguish between a belief which it is impossible to disprove, and one which someone believes it is impossible to disprove, and as "absolutely certain" seems to refer to a mental state, I would give it the definition of the latter.
First, fundamentalism is a matter of theology, not of intensity of faith.
Second, what would these people do if their God appeared before them and flat out told them they're wrong? :-D
I cannot imagine circumstances under which I would come to believe that the Christian God exists. All of the evidence I can imagine encountering which could push me in that direction if I found it seems even better explained by various deceptive possibilities, e.g. that I'm a simulation or I've gone insane or what have you. But I suspect that there is some sequence of experience such that if I had it I would be convinced; it's just too complicated for me to work out in advance what it would be. Which perhaps means I can imagine it in an abstract, meta sort of way, just not in a concrete way? Am I certain that the Christian God doesn't exist? I admit that I'm not certain about that (heh!), which is part of the reason I'm curious about your test.
If imagination fails, consult reality for inspiration. You could look into the conversion experiences of materialist, rationalist atheists. John C Wright, for example.
So you're effectively saying that your prior is zero and will not be budged by ANY evidence.
Hmm... smells of heresy to me... :-D
I would argue that this definition of absolute certainty is completely useless as nothing could possibly satisfy it. It results in an empty set.
If you "cannot imagine under any circumstances" your imagination is deficient.
I am not arguing that it is not an empty set. Consider it akin to the intersection of the set of natural numbers and the set of infinities: the fact that it is the empty set is meaningful. It means that by following the rules of simple, additive arithmetic, one cannot reach infinity; and if one does reach infinity, that is a good sign of an error somewhere in the calculation.
Similarly, one should not be absolutely certain if they are updating from finite evidence. Barring omniscience (infinite evidence), one cannot become absolutely/infinitely certain.
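That claim can be made concrete with the odds form of Bayes' theorem: any finite likelihood ratio multiplies the prior odds by a finite factor, so it can never make them infinite, and a probability strictly between 0 and 1 stays strictly between 0 and 1. A toy sketch using exact rationals (the 1000:1 likelihood ratio is an arbitrary choice for illustration):

```python
from fractions import Fraction

def update(prior, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p = Fraction(1, 2)
for _ in range(100):                # a hundred pieces of strong evidence
    p = update(p, Fraction(1000))   # each piece favours the belief 1000:1
print(p < 1)  # True -- the posterior approaches 1 but never reaches it
```

Exact rationals are used because ordinary floats would round the posterior up to 1.0 long before a hundred updates, which is precisely the distinction at issue.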
What definition of absolute certainty would you propose?
Well, yes.
That is the point.
Nothing is absolutely certain.
Why does a deficient imagination disqualify a brain from being certain?
Tangent: Does that work?