linkhyrule5 comments on Rationality Quotes August 2013 - Less Wrong
Bits of Shannon entropy.
That's half of the answer. In what manner does one measure the number of bits of Shannon entropy that a person has?
If you state your confidence numerically -- P(A) = X, 0 < X < 1 -- then scoring that belief is a simple matter of observing the outcome and taking the binary logarithm of either your stated probability or its complement, depending on what came true. Writing S for the Shannon score: if A, then S = log2(X); if ¬A, then S = log2(1 - X).
The smaller the magnitude of the resulting negative number, the better you fared.
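The scoring rule above can be sketched in a few lines (the function name `log_score` is mine, not from the thread):

```python
import math

def log_score(confidence: float, outcome: bool) -> float:
    """Log score in bits for a single prediction P(A) = confidence.

    Returns log2(confidence) if A occurred, log2(1 - confidence) otherwise.
    Always negative for 0 < confidence < 1; closer to zero is better.
    """
    return math.log2(confidence if outcome else 1 - confidence)

# A 90% prediction that comes true loses only about 0.15 bits:
print(log_score(0.9, True))
# The same prediction, when wrong, costs about 3.32 bits:
print(log_score(0.9, False))
```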
That allows a prediction/confidence/belief to be measured. How do you total a person?
Simple: under conditions of dubious ethics and dubious physical possibility, you turn their internal world model into a formal Bayesian network, and for every possible physical and mathematical observation and outcome, perform the calculation above. Sum, print, idle.
It's impossible in practice, but the formal definition is only about four lines long.
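A minimal sketch of that short formal definition, assuming the person's beliefs can be enumerated as (confidence, outcome) pairs -- the belief record here is invented for illustration:

```python
import math

# Hypothetical belief record: (stated confidence in A, whether A occurred).
predictions = [(0.8, True), (0.6, False), (0.99, True)]

# Total Shannon score in bits: log2(X) per hit, log2(1 - X) per miss.
total = sum(math.log2(x if occurred else 1 - x) for x, occurred in predictions)
print(total)
```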
How do you measure someone whose internal world model is not isomorphic to one formal Bayesian network (for example, someone who is completely certain of something)? Should it be the case that someone whose world model contains fewer possible observations has a major advantage in being closer to the truth?
Note also that a perfect Bayesian will score lower than some gamblers using this scheme. Betting everything on black does better than a fair distribution almost half the time.
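A coin-flip version of the same point (my own illustration, not from the thread): against a fair coin, someone who always states P = 0.999 for heads beats the honest P = 0.5 forecaster on every heads -- about half the time -- while losing badly on each tails:

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Bits scored: log2(p) if the event happened, log2(1 - p) if not."""
    return math.log2(p if outcome else 1 - p)

calibrated = log_score(0.5, True)        # -1 bit, whatever the outcome
reckless_hit = log_score(0.999, True)    # ~ -0.0014 bits: beats -1
reckless_miss = log_score(0.999, False)  # ~ -9.97 bits: the price of a miss
```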
I am not at all certain that humans can actually hold an internal belief model that isn't isomorphic to some Bayesian network. As for anyone who proclaims to be absolutely certain: I suspect they are in fact not.
How do you account for people falling prey to things like the conjunction fallacy?
I don't think people simply miscalculate conjunctions. Everyone will tell you that HFFHF is less probable than H, HF, or even HFF. Errors appear only when the strings get long, the differences get small, and the strings are specially crafted. As for the scenario questions: a more detailed scenario looks more plausibly like the product of deliberate reasoning, and the existence of one detailed scenario is evidence for the existence of other detailed scenarios leading to the same outcome (so the question must make clear that we are asking not about the outcome, but about everything happening precisely as the scenario specifies).
On top of that, the everyday meaning of the word "probable" is somewhat different, so a proper study should ask people to actually make bets. All in all, it's not clear why people make this mistake, but it is clear that it is not some fully general failure to account for conjunctions.
Edit: I just read the Wikipedia article on the conjunction fallacy. When the question was phrased as "how many people out of 100", nobody gave a wrong answer. That immediately implies the problem was the understanding of "probable", or some other cause, but not a general failure to apply conjunctions.
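For reference, the arithmetic behind the HFFHF comparison above: with a fair coin, a specific sequence of n flips has probability 2^-n, so a longer specific string (a conjunction) can never be more probable than any of its prefixes.

```python
# Probability that a fair coin produces exactly this sequence of flips.
def p(seq: str) -> float:
    return 0.5 ** len(seq)

assert p("H") == 0.5
assert p("HFF") == 0.125
assert p("HFFHF") == 0.03125
# The conjunction is strictly less probable than each of its prefixes:
assert p("HFFHF") < p("HFF") < p("HF") < p("H")
```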
There have been studies that asked people to make bets. Here's an example. It makes no difference -- subjects still arrive at fallacious conclusions. That study also goes some way towards answering your concern about ambiguity in the question. The conjunction fallacy is a pretty robust phenomenon.
Poor brain design.
Honestly, I could do way better if you gave me a millennium.
You know, at some point, whoever's still alive when that becomes not-a-joke needs to actually test this.
Because I'm just curious what a human-designed human would look like.
How likely do you believe it is that there exists a human who is absolutely certain of something?
Is this a testable assertion? How do you determine whether someone is, in fact, absolutely certain?
It's not unheard of for people to bet their lives on some belief of theirs.
That doesn't show that they're absolutely certain; it just shows that the expected value of the payoff outweighs the chance of them dying.
The real issue with this claim is that people don't actually model everything using probabilities, nor do they actually use Bayesian belief updating. However, the closest analogue would be people who will not change their beliefs in literally any circumstances, which is clearly false. (Definitely false if you're considering, e.g. surgery or cosmic rays; almost certainly false if you only include hypotheticals like cult leaders disbanding the cult or personally attacking the individual.)
Nope. "I'm certain that X is true now" is different from "I am certain that X is true and will be true forever and ever".
I am absolutely certain today is Friday. Ask me tomorrow whether my belief has changed.
Is someone absolutely certain if they say that they cannot imagine any circumstances under which they might change their beliefs (or, alternatively, can imagine only circumstances which they are absolutely certain will not happen)? That seems like a better definition, as it treats probability (and certainty) as a thing in the mind, rather than outside it.
In this case, I would see no contradiction in declaring someone absolutely certain of their beliefs, though I would say (with non-absolute certainty) that they are incorrect. Someone who believes the Earth is 6000 years old, for example, may not be swayed by any evidence short of the Christian god coming down and telling them otherwise, an event to which they may assign probability 0.0 (because they believe it impossible for their god to contradict himself, or something like that).
Further, I would exclude methods of changing someone's mind that don't use evidence (surgery or cosmic rays). I can't quite put it into words, but since such an intervention isn't evidence and instead changes probabilities directly, it seems that it doesn't so much affect beliefs as replace them.
Tangent: Does that work?