
Thank you. English isn't my first language, so feedback means a lot to me. Especially positive feedback :)

My point was that the representativeness heuristic makes two errors: firstly, it violates the "ratio rule" (i.e., it equates P(S|c) and P(c|S)), and secondly, it sometimes replaces P(c|S) with something else entirely. That means the popular idea "well, just treat it as P(c|S) instead of P(S|c); if you add P(c|~S) and P(S), then everything will be OK" doesn't always work.

The main point of our disagreement seems to be this:

(1) The degree to which c is representative of S is indicated by the conditional probability P(c|S), that is, the probability that members of S have characteristic c.

1) Think about stereotypes. They "represent" their classes well, yet it's extremely unlikely to actually meet the Platonic Ideal of a Jew.

(also, sometimes members of an ethnic group have an incentive to hide their lineage; if so, then P(stereotypical characteristics|member of group) is extremely low, yet the degree of resemblance is very high)

(this somewhat reminds me of the section about the Planning Fallacy in my earlier post).

2) I think it can be argued that the degree of resemblance should involve P(c|~S) in some way. If it's very low, then c is very representative of S, even if P(c|S) isn't high. A quick sketch of this point follows below.
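
A minimal sketch with invented numbers (mine, not from the original post): even a modest P(c|S) makes c strong evidence for S, as long as P(c|~S) is tiny.

```python
# Invented numbers: c can point strongly at S even when P(c|S) is
# modest, provided P(c|~S) is very low.

def posterior(prior_s, p_c_given_s, p_c_given_not_s):
    """P(S|c) by Bayes' theorem."""
    joint_s = prior_s * p_c_given_s
    joint_not_s = (1 - prior_s) * p_c_given_not_s
    return joint_s / (joint_s + joint_not_s)

# P(c|S) = 0.3 is not high, but P(c|~S) = 0.01 is tiny:
print(posterior(prior_s=0.1, p_c_given_s=0.3, p_c_given_not_s=0.01))
# ~0.77: observing c raises P(S) from 0.1 to about 0.77
```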


Overall, inferential distances got me this time; I'll probably rewrite this post. If you have any ideas about how this text could be improved, I'll be glad to hear them.

Thank you for your feedback.

Yes, I'm aware of likelihood ratios (and they're awesome, especially in log-odds form). An earlier draft of this post ended with "the correct method for answering this query involves imagining the world-where-H-is-true, imagining the world-where-H-is-false, and comparing the frequency of E between them", but I decided against it. And well, if some process involves X and Y, then it is correct (but maybe misleading) to say that it involves just X.

My point was that "what it does resemble?" (process where you go E -> H) was fundamentally different from "how likely is that?" (process where you go H -> E). If you calculate likelihood ratio using the-degree-of-resemblance instead of actual P(E|H) you will get wrong answer.

(Or maybe thinking about likelihood ratios will force you to snap out of the representativeness heuristic, but I'm far from sure about that.)
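
A minimal sketch of that difference, again with invented numbers: the update has to use the actual P(E|H), not a resemblance score.

```python
# Bayesian updating goes H -> E:
# posterior odds = prior odds * P(E|H) / P(E|~H).

def posterior_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    return prior_odds * (p_e_given_h / p_e_given_not_h)

prior = 0.05 / 0.95                        # odds of H before seeing E
right = posterior_odds(prior, 0.2, 0.02)   # uses the actual P(E|H)
wrong = posterior_odds(prior, 0.9, 0.02)   # resemblance score slipped in
print(right, wrong)  # the second overstates the update 4.5-fold
```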

I think I misjudged the level of my audience (this post is an expansion of a /r/HPMOR/ comment) and didn't make my point (that probabilistic thinking is more correct when you go H->E instead of vice versa) visible enough. Also, I was going to blog about likelihood ratios later (in terms of H->E and !H->E), so again, wrong audience.

I now see some ways in which my post is a debacle, and maybe it makes sense to completely rewrite it. So thank you for your feedback again.

It's interesting to note that this is almost exactly how it works in some role-playing games.

Suppose we have Xandra the Rogue, who went into a dungeon, killed a hundred rats, got a level-up, and is now able to bluff better and pick locks faster, despite those things having almost no connection to rat-killing.

My favorite explanation of this phenomenon is that "experience" is really a "self-esteem" stat which can be increased via success of any kind, and as the character becomes more confident in herself, her performance in unrelated areas improves too.
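
If I were to sketch that mechanic in code (a toy model of my own, not any actual game system), it would look something like this:

```python
# Toy model of "experience is really self-esteem": any success raises
# one global stat, which then feeds every unrelated skill check.

class Character:
    def __init__(self, name, self_esteem=0):
        self.name = name
        self.self_esteem = self_esteem  # the hidden "experience" stat

    def succeed(self):
        """Any success at all boosts confidence."""
        self.self_esteem += 1

    def check(self, base_skill):
        """Bluffing, lockpicking: every check gets the same bonus."""
        return base_skill + self.self_esteem

xandra = Character("Xandra the Rogue")
for _ in range(100):   # a hundred dead rats later...
    xandra.succeed()
print(xandra.check(base_skill=5))  # ...she bluffs and lockpicks better
```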

Making sure I understood you: you're saying that people sometimes pick "everything is fine" because:

1) they're confident that if anything goes wrong, they will be able to fix it, so everything will be fine once again;

2) they're so confident in this that they don't make specific plans, believing that they will be able to fix everything on the spur of the moment;

aren't you?

Looks plausible, but something must be wrong there, because the planning fallacy:

a) exists (so people aren't evaluating their abilities well)

b) exists even when people aren't familiar with the situation they are predicting (here, people have no grounds for the "ah, I'm able to fix anything anyway" effect)

c) exists even in people with low confidence (however, maybe the effect is weaker here; it's an interesting theory to test)

I blame overconfidence and similar self-serving biases.

(decided to move everything to Main)

So, I made two posts sharing potentially useful heuristics from Bayesianism. So what?

Should I move one of them to Main? On the one hand, these posts "discuss core Less Wrong topics". On the other, I'm honestly not sure this stuff is awesome enough. But I feel like I should do something so these things aren't lost (I once tried to give a talk on "which useful principles can be reframed in Bayesian terms" at a Moscow meetup, and learned that those things weren't very easy to find using site-wide search).

Maybe we need a wiki page with a list of relevant lessons from probability theory, which can be kept up-to-date?

Good call!

Yes, your theory is more prosaic, yet it never occurred to me. I wonder whether purposefully looking for boring explanations would help with that.

Also, your theory is actually plausible and fits with some of my observations, so I think I should look into it. Thanks!

A conclusion which is true in any model where the axioms are true, which we know because we went through a series of transformations-of-belief, each step being licensed by some rule which guarantees that such steps never generate a false statement from a true statement.

I want to add that this idea justifies material implication ("if 2x2 = 4, then the sky is blue") and other counter-intuitive properties of formal logic, like "you can prove anything if you assume a contradiction/false statement".

The usual way to show the latter goes like this:

1) Assume that "A and not-A" is true.

2) Then "A or «pigs can fly»" is true, since A is true.

3) But we know that not-A is true! Therefore, the only way for "A or «pigs can fly»" to be true is for «pigs can fly» to be true.

4) Therefore, pigs can fly.
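
For what it's worth, this derivation goes through in a proof assistant too; here's a sketch in Lean 4 (my formalization of the steps above, not anyone's canonical proof):

```lean
-- Ex falso: from "A and not-A", derive that pigs can fly.
example (A PigsFly : Prop) (h : A ∧ ¬A) : PigsFly := by
  -- Step 2: from A, infer "A or pigs can fly".
  have h2 : A ∨ PigsFly := Or.inl h.left
  -- Steps 3-4: not-A rules out the left branch, so pigs can fly.
  cases h2 with
  | inl a => exact absurd a h.right
  | inr p => exact p
```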

The steps are clear, but this seems like cheating. Even more, this feels like a strange, alien inference. It's like putting your keys in a pocket, bopping yourself on the head to induce short-term memory loss, and then using your inability to remember the keys' whereabouts to win a political debate. That isn't how humans usually reason about things.

But the thing is, formal logic isn't about reasoning about things. Formal logic is about preserving the truth; and if you assumed "A and not-A", then there is nothing left to preserve.

Here's how Wikipedia puts it:

An argument (consisting of premises and a conclusion) is valid if and only if there is no possible situation in which all the premises are true and the conclusion is false.

Took the survey and reminded my fellow Russians to participate too.
