Comment author: BiasedBayes 15 September 2015 02:03:59PM *  0 points [-]

Thanks for the post. I love it.

My comments:

First, a side note: don't assume that if something is a heuristic, it is automatically a wrong way of thinking. (Sorry if I'm misinterpreting you, because you don't explicitly say this at all :) In some situations, simple heuristics will outperform regression analysis, for example.

But about your main point. If I understood right, this is actually a problem of violating the so-called "ratio rule".

(1) The degree to which c is representative of S is indicated by the conditional probability p(c | S) - that is, the probability that members of S have characteristic c.

(2) The probability that the characteristic c implies membership in S is given by p(S | c). (Like you write.)

(3) p(c | S) / p(S | c) = p(c) / p(S)

This is the Ratio Rule: the ratio of inverse probabilities equals the ratio of simple probabilities. So equating the two probabilities p(c|S) and p(S|c) without ALSO equating the simple probabilities is just wrong, bad thinking.
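The ratio rule is easy to verify numerically. Here is a quick sanity check in Python with a toy joint distribution (the numbers are arbitrary, chosen only for illustration):

```python
# Toy joint distribution over S (class membership) and c (characteristic).
# All numbers here are made up, chosen only to illustrate the identity.
p_S_and_c = 0.10   # P(S and c)
p_S = 0.15         # P(S)
p_c = 0.40         # P(c)

p_c_given_S = p_S_and_c / p_S   # P(c|S) = 2/3
p_S_given_c = p_S_and_c / p_c   # P(S|c) = 1/4

# Ratio rule: P(c|S) / P(S|c) == P(c) / P(S)
lhs = p_c_given_S / p_S_given_c
rhs = p_c / p_S
print(lhs, rhs)  # both equal 8/3, even though P(c|S) and P(S|c) differ a lot
```

The two conditionals differ by exactly the factor p(c)/p(S), so treating them as equal is only harmless when the base rates happen to match.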

Representative thinking does not reflect these differences between p(c|S) and p(S|c); it introduces a symmetry in the map (the thought) that does not exist in the world.

For example: "Home is the most dangerous place in the world, because most accidents happen at home. So stay away from home!!!" --> This confuses the probability of an accident given being at home with the probability of being at home given an accident.

Comment author: BT_Uytya 16 September 2015 03:41:48PM *  0 points [-]

Thank you. English isn't my first language, so feedback means a lot to me. Especially positive feedback :)

My point was that the representativeness heuristic makes two errors: first, it violates the "ratio rule" (= equates P(S|c) and P(c|S)), and second, it sometimes replaces P(c|S) with something else. That means the popular idea "well, just treat it as P(c|S) instead of P(S|c); if you add P(c|~S) and P(S), then everything will be OK" doesn't always work.

The main point of our disagreement seems to be this:

(1) The degree to which c is representative of S is indicated by the conditional probability p(c | S) - that is, the probability that members of S have characteristic c.

1) Think about stereotypes. They "represent" their classes well, yet it's extremely unlikely to actually meet the Platonic Ideal of a Jew.

(Also, sometimes there is an incentive for members of an ethnic group to hide their lineage; if so, then P(stereotypical characteristics | member of group) is extremely low, yet the degree of resemblance is very high.)

(This somewhat reminds me of the section about the Planning Fallacy in my earlier post.)

2) I think it can be argued that the degree of resemblance should involve P(c|~S) in some way. If it's very low, then c is very representative of S, even if P(c|S) isn't high.


Overall, inferential distances got me this time; I'm probably going to rewrite this post. If you have any ideas about how this text could be improved, I'll be glad to hear them.

Comment author: redding 14 September 2015 12:49:20AM 0 points [-]

Just to clarify: I feel that what you're basically saying is that often what is called the base-rate fallacy is actually the result of P(E|!H) being too high.

I believe this is why Bayesians usually talk not in terms of P(H|E) but instead use Bayes factors.

Basically, to determine how strongly ufo-sightings imply ufos, don't look at P(ufos | ufo-sightings) alone. Instead, look at how much likelier the sightings are in a world with ufos than in a world without them: P(ufo-sightings | ufos) / P(ufo-sightings | no-ufos).

This likelihood ratio is the Bayes factor.
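A minimal sketch of that calculation, with made-up numbers (every probability below is purely hypothetical):

```python
# Hypothetical numbers for the ufo example.
p_H = 0.001            # prior P(ufos exist)
p_E_given_H = 0.9      # P(ufo-sightings | ufos)
p_E_given_notH = 0.3   # P(ufo-sightings | no ufos): hoaxes, misidentification

bayes_factor = p_E_given_H / p_E_given_notH   # 3.0: fairly weak evidence
prior_odds = p_H / (1 - p_H)
posterior_odds = prior_odds * bayes_factor

# Convert the posterior odds back to a probability:
p_H_given_E = posterior_odds / (1 + posterior_odds)
print(bayes_factor, p_H_given_E)
```

Even though sightings are three times likelier under H, the posterior stays tiny because the prior odds are tiny: the Bayes factor and the base rate each do part of the work.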

Comment author: BT_Uytya 15 September 2015 09:08:37PM 0 points [-]

Thank you for your feedback.

Yes, I'm aware of likelihood ratios (and they're awesome, especially as log-odds). An earlier draft of this post ended with "the correct method for answering this query involves imagining the world-where-H-is-true, imagining the world-where-H-is-false, and comparing the frequency of E between them", but I decided against it. And well, if some process involves X and Y, then it is correct (but maybe misleading) to say that it involves just X.

My point was that "what does it resemble?" (a process where you go E -> H) is fundamentally different from "how likely is that?" (a process where you go H -> E). If you calculate a likelihood ratio using the degree-of-resemblance instead of the actual P(E|H), you will get a wrong answer.

(Or maybe thinking about likelihood ratios will force you to snap out of the representativeness heuristic, but I'm far from sure about that.)

I think I misjudged the level of my audience (this post is an expansion of a /r/HPMOR/ comment) and didn't make my point (that probabilistic thinking is more correct when you go H -> E instead of vice versa) visible enough. Also, I was going to blog about likelihood ratios later (in terms of H -> E and !H -> E), so again: wrong audience.

I now see some ways in which my post is a debacle, and maybe it makes sense to completely rewrite it. So thank you for your feedback again.

Comment author: Yvain 14 October 2010 07:03:21PM *  79 points [-]

On any task more complicated than sheer physical strength, there is no such thing as inborn talent or practice effects. Any non-retarded human could easily do as well as the top performers in every field, from golf to violin to theoretical physics. All supposed "talent differential" is unconscious social signaling of one's proper social status, linked to self-esteem.

A young child sees how much respect a great violinist gets, knows she's not entitled to as much respect as that violinist, and so does badly at violin to signal cooperation with the social structure. After practicing for many years, she thinks she's signaled enough dedication to earn some more respect, and so plays the violin better.

"Child prodigies" are autistic types who don't understand the unspoken rules of society and so naively use their full powers right away. They end up as social outcasts not by coincidence but as unconscious social punishment for this defection.

Comment author: BT_Uytya 10 April 2015 10:04:54PM *  2 points [-]

It's interesting to note that this is almost exactly how it works in some role-playing games.

Suppose we have Xandra the Rogue, who went into a dungeon, killed a hundred rats, got a level-up, and is now able to bluff better and pick locks faster, despite those things having almost no connection to rat-killing.

My favorite explanation of this phenomenon was that "experience" is really a "self-esteem" stat which can be increased via success of any kind; as the character becomes more confident in herself, her performance in unrelated areas improves too.

Comment author: cameroncowan 03 September 2014 09:32:58PM 1 point [-]

I like your example, but there is additional evidence that could be gathered to refine your premise. You can check the traffic situation along your route and make estimates about travel time. So, given additional tools, there is a chance to raise the odds of "everything is fine" being the more likely scenario. I think this is especially true for those of us who drive cars.

If you and I decide to go to the Denver Art Museum, and you are coming from a hotel in downtown Denver while I'm driving from my house out of town, whether I'm going to be on time depends on all the factors you mentioned. However, I can mitigate some of those factors by adding data. I can do the same for you by handing you a map, or by pointing you towards a tool like Google Maps to get you from your hotel to the museum more efficiently.

I think when you live someplace for a while and make a trip regularly, you get used to certain ideas about your journey, which is why "everything is fine" is usually what people pick. Trying to compensate for every eventuality is mind-numbing. Still, making proper use of tools to make things as efficient as possible is also a good idea.

However, I am very much in favor of this line of thinking.

Comment author: BT_Uytya 04 September 2014 09:00:39PM *  1 point [-]

Making sure I understood you: you are saying that people sometimes pick "everything is fine" because:

1) they are confident that if anything goes wrong, they will be able to fix it, so everything will be fine once again;

2) they are so confident in this that they don't make specific plans, believing they will be able to fix everything on the spur of the moment.

Is that right?

Looks plausible, but something must be wrong there, because the planning fallacy:

a) exists (so people aren't evaluating their abilities well);

b) exists even when people aren't familiar with the situation they are predicting (so here, people have no grounds for the "ah, I'm able to fix anything anyway" effect);

c) exists even in people with low confidence (though maybe the effect is weaker there; it's an interesting theory to test).

I blame overconfidence and similar self-serving biases.

Comment author: BT_Uytya 26 August 2014 05:57:42PM 6 points [-]

So, I made two posts sharing potentially useful heuristics from Bayesianism. So what?

Should I move one of them to Main? On the one hand, these posts "discuss core Less Wrong topics". On the other, I'm honestly not sure this stuff is awesome enough. But I feel like I should do something so these things aren't lost (I once tried to give a talk about "which useful principles can be reframed in Bayesian terms" at a Moscow meetup, and learned that those things weren't very easy to find using the site-wide search).

Maybe we need a wiki page with a list of relevant lessons from probability theory, which can be kept up-to-date?

Comment author: BT_Uytya 02 September 2014 09:47:44PM 1 point [-]

(decided to move everything to Main)

Comment author: Bobertron 25 August 2014 11:18:33AM 4 points [-]

I know it's just an example, but concerning

I find it hard to do something I consider worthwhile while on a spring break

maybe you have learned to be lazy on spring break? I mean, the theory that it's a habit seems more prosaic to me than being tired, or something about "activation energy".

Comment author: BT_Uytya 25 August 2014 12:18:46PM 2 points [-]

Good call!

Yes, your theory is more prosaic, yet it never occurred to me. I wonder whether purposefully looking for boring explanations would help with that.

Also, your theory is actually plausible and fits with some of my observations, so I think I should look into it. Thanks!

Comment author: BT_Uytya 08 February 2014 09:05:34PM 1 point [-]

A conclusion which is true in any model where the axioms are true, which we know because we went through a series of transformations-of-belief, each step being licensed by some rule which guarantees that such steps never generate a false statement from a true statement.

I want to add that this idea justifies material implication ("if 2x2 = 4, then the sky is blue") and other counterintuitive properties of formal logic, like "you can prove anything if you assume a contradiction / a false statement".

The usual way to show the latter goes like this:

1) Assume that "A and not-A" are true

2) Then "A or «pigs can fly»" is true, since A is true.

3) But we know that not-A is true! Therefore, the only way for "A or «pigs can fly»" to be true is for «pigs can fly» to be true.

4) Therefore, pigs can fly.

The steps are clear, but this seems like cheating. Even more, this feels like a strange, alien inference. It's like putting your keys in a pocket, bopping yourself on the head to induce short-term memory loss, and then using your inability to remember the keys' whereabouts to win a political debate. That isn't how humans usually reason about things.

But the thing is, formal logic isn't about reasoning about things. Formal logic is about preserving truth; and if you assumed "A and not-A", then there is nothing left to preserve.

Here's how Wikipedia puts it:

An argument (consisting of premises and a conclusion) is valid if and only if there is no possible situation in which all the premises are true and the conclusion is false.
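That definition of validity can be checked mechanically: enumerate every truth assignment and look for a row where all premises hold but the conclusion fails. A small Python sketch (names like pigs_fly are mine, purely illustrative):

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """Valid iff no assignment makes every premise true and the conclusion false."""
    return all(
        conclusion(*row)
        for row in product([True, False], repeat=n_vars)
        if all(p(*row) for p in premises)
    )

# Explosion: from "A and not-A", anything follows, e.g. "pigs can fly" (B).
contradiction = lambda a, b: a and not a   # true in no row of the table
pigs_fly = lambda a, b: b

print(is_valid([contradiction], pigs_fly, 2))  # True: vacuously valid
print(is_valid([], pigs_fly, 2))               # False: B alone isn't a tautology
```

The premise is true in zero rows, so there is no row left in which truth could fail to be preserved; that is exactly the "nothing left to preserve" point above.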

Comment author: topynate 27 December 2013 12:50:18AM 0 points [-]

I can't find it by search, but haven't you stated that you've written hundreds of KLOC?

Comment author: BT_Uytya 27 December 2013 08:17:18AM 1 point [-]
Comment author: BT_Uytya 15 December 2013 02:24:31PM 6 points [-]

Took the survey and reminded my fellow Russians to participate too.
