Comment author: John_Maxwell_IV 20 March 2014 03:20:44AM 1 point [-]

My impression was that Kaj's essay was not original to him but rather inspired by the paper he linked to at the bottom.

Comment author: velisar 20 March 2014 01:50:38PM 0 points [-]

I edited for clarity, thanks.

Comment author: RichardKennaway 19 March 2014 05:01:56PM *  1 point [-]

But in our case, we need God to say that the nerves in the skin are thermometers, the eyes, height measuring tools and so on.

Historically, it has been the other way round. We can recognise, without the hypothesis of God, that legs are good for walking, eyes for seeing, and so on, and these observable facts were taken as proof of the existence of a Designer.

Having dispensed with the Designer, we are left with the problem of explaining why living organisms appear to be made of distinct parts serving clear functions, and how we are able to say that these functions are sometimes performed well and sometimes badly, how we can describe some processes as pathological and some as healthy.

ETA: The answer isn't "evolution!", because when we use evolutionary techniques to solve computational problems, the result is typically something that works but we can't see how. When we look at living organisms we see things that work that we largely can see how. (The brain is a notable exception. Also, protein folding. But a heart is clearly a pump.)

Comment author: velisar 19 March 2014 07:54:45PM 0 points [-]

True.

But is jealousy pathological? Or anger? Or fear?

I was arguing that the nerves in the skin are only an approximation of thermometers, and likewise that the eyes are only a poor measuring tool. By the way, there are 'evolutionary' biases: we perceive a ravine as deeper when we look down into it than it seems tall when we look up from the bottom (see also auditory looming). Their function is quite transparent once you think about organisms rather than measuring tools.

Comment author: RichardKennaway 19 March 2014 03:58:38PM 0 points [-]

Can you comment on how the concept of "ecological rationality" relates to this imaginary conversation?

Comment author: velisar 19 March 2014 04:28:05PM 0 points [-]

It seems to me that this is the discussion about optimizing versus satisficing.

If Intel builds a computer to do division but finds a way to approximate the results, because that way the CPU can simulate, I don't know, a nuclear explosion, it should say so. But in our case, we would need God to say that the nerves in the skin are thermometers, the eyes height-measuring tools, and so on. The only utility function of organisms that we know for sure is that the code that builds them has to make it into the next generation; we can argue about different strategies, but they depend on, sometimes, too many other things.

Comment author: Gunnar_Zarncke 19 March 2014 12:46:14PM 1 point [-]

Agreed. But it kind of means that some evolution of fallacies trending toward more complex argumentation patterns is taking place. Or? I'm not versed in the classics but I take it that they didn't have this large an (anti-)tool-set.

Comment author: velisar 19 March 2014 03:09:06PM 0 points [-]

I think any preoccupation, if it exists long enough, results in great refinements. There are people good at rare African languages, mineral water, all sorts of (noble!) sports, torture - why shouldn't people get better at something as common as argumentation?

But we're advocating looking the other way around, at the more basic processes; they may say something about how humans work. And indeed, it would be easier with less sophisticated arguers.

Comment author: Viliam_Bur 19 March 2014 01:34:14PM *  5 points [-]

It is true that there are reasons for our biases; human behavior was shaped by evolution and optimized for the natural environment. Many of the mistakes we make result from behavior that contributes to survival in nature.

But I think that "contributes to survival" does not always lead to "solid inference rules". For example, imagine that a majority of the tribe is wrong about some factual question. (A question where being right or wrong is not immediately relevant to survival.) It contributes to survival if an individual joins this majority, because it gets them allies. -- This could be excused by saying that in an ancient tribe without much specialization, the majority is more likely to be correct than an individual, therefore "follow the majority opinion" actually is a good truth-finding heuristic. But that ignores the fact that people sometimes lie for a purpose, e.g. to calumniate their opponents or to fabricate religious experience. So there is more to joining the majority than merely a decent truth-finding heuristic.

(EDIT: It's not as if humans in the past lived in harmony with nature using their heuristics, and only today we have exploitable biases. People had exploitable biases even in the ancient environment -- their heuristics were correct often, but not always -- and people exploited each other's biases even in the ancient environment. Not only did we have adaptations for making mostly correct decisions, but also adaptations for exploiting other people's flaws in the former adaptations.)

Also, no species is perfectly tuned to its environment. Some useful mutations simply haven't happened yet. And there are various trade-offs, so even if a species as a whole is optimized for a given environment, some of its individual features may be suboptimal, as a price for improving other, conflicting features. Therefore, assuming that every human bias is the result of perfectly adapted behavior in the natural environment would be assuming too much.

But otherwise, I like this.

Comment author: velisar 19 March 2014 02:55:19PM 0 points [-]

I have to admit that the text is a bit long! We did, in fact, say all of what you are saying, which means that the way I summarized the text here was a bit misleading.

There must be conditions under which a heuristic like "follow the majority opinion" is triggered in our heads: something is recognized, maybe. There is selection pressure to detect social exchange violations, but also to be ingenious in persuasion. Some of this already has experimental support. Anyway, we think that what we today call fallacies are not accidents - like the blind spot. They are good inference rules for a relatively stable environment, but they cannot predict far into the future and cannot judge new complex problems. That may be why we don't spot the fallacies of small talk, of experts in domains with real expertise, or in domains for which we already have intuitions.

That would imply that a bad decision today is not necessarily the product of a cognitive illusion, but rather that we have built a bad interface for the actual human mind in the modern world (a car will be lighter and faster if it doesn't have to accommodate humans). Reference class forecasting or presenting probabilities as frequencies are just technologies, interfaces. The science is about the function, and the fallacies are interesting precisely because, presumably, they are repetitive behavior. They may help in our effort to reverse-engineer ourselves.
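The "frequencies as an interface" point can be made concrete with a small sketch: the same Bayesian update computed once in probability format and once restated as natural frequencies (counts out of 1000 people), in the style Gigerenzer advocates. The numbers below (1% base rate, 80% hit rate, 10% false-alarm rate) are invented for illustration, not taken from the comment.

```python
# The same Bayesian update, expressed two ways.
base_rate = 0.01      # P(condition)
hit_rate = 0.80       # P(positive test | condition)
false_alarm = 0.10    # P(positive test | no condition)

# Probability format: Bayes' rule applied directly.
posterior = (hit_rate * base_rate) / (
    hit_rate * base_rate + false_alarm * (1 - base_rate)
)

# Frequency format ("interface"): restate everything for 1000 people.
people = 1000
with_condition = base_rate * people                        # 10 people
true_positives = hit_rate * with_condition                 # 8 people
false_positives = false_alarm * (people - with_condition)  # 99 people
posterior_freq = true_positives / (true_positives + false_positives)

print(round(posterior, 4), round(posterior_freq, 4))  # both ≈ 0.0748
```

The two computations are mathematically identical; only the presentation differs, which is exactly why the frequency version counts as a technology rather than new science.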

Comment author: Gunnar_Zarncke 19 March 2014 12:03:51PM *  1 point [-]

A quote from that paper:

If a style of argumentation has survived critics for millennia, we can ask several questions: Could it be that there are evolutionary programs running in our heads that systematically push us to do the same things? Are those based on inferences that correlate with good fitness? Where does epistemic value differ from ecologic utility? Do the fallacists have some observation bias; do we suffer from the Focusing illusion (Schkade & Kahneman, 1998) when observing a bad argument?

I have heard this called the fallacy fallacy (though RationalWiki sees that differently).

Comment author: velisar 19 March 2014 12:14:02PM 0 points [-]

You are correct; but the Argument from fallacy is still pretty uninformative.

Comment author: fortyeridania 01 February 2014 07:13:43AM 0 points [-]

The one from Carnap ("Anything you can do, I can do meta") might not really be from Carnap. Can anyone find a source besides this one, which only gets it back to 1991?

Comment author: velisar 08 February 2014 09:03:59PM 0 points [-]

I think it's Daniel Dennett (said to Hofstadter).

Comment author: mwengler 23 September 2011 01:59:25PM 3 points [-]

I grew up in Long Island 20 miles from JFK airport. We could see the Concorde once in a while at JFK airport and if we were very lucky we would see it landing or taking off. The amount of mindspace in the world occupied by that beautiful plane was gigantic compared to that occupied by most other planes. Whether the Concorde was still a net deficit to the UK and France would require, I think, a calculation similar to figuring the deficit or surplus to the U.S. of putting people on the moon.

Comment author: velisar 28 January 2013 09:51:20AM *  0 points [-]

You might be right - I never saw one - but the project didn't start with a plan to build a spectacular flying sculpture. So they fell first to the planning fallacy (which may be not so much a psychological cognitive bias as the very structure of the possible outcomes of anything - the peak of the frequency distribution lies to the right of the predicted "arrival" time), and then to sunk costs, which were later half acknowledged, making them highly suspect of trying to resolve a cognitive dissonance (rationalization).

One has to take into account the original prediction to make a probabilistic interpretation...

Comment author: Virge2 13 May 2008 02:14:32PM 4 points [-]

Eliezer, I guess the answer you want is that "science" as we know it has at least one bias: a bias to cling to pragmatic pre-existing explanations, even when they embody confused thinking and unnecessary complications. This bias appears to produce major inefficiencies in the process.

Viewing science as a search algorithm, it follows multiple alternate paths but it only prunes branches when the sheer bulk of experimental evidence clearly favours another branch, not when an alternate path provides a lower cost explanation for the same evidence. For efficiency, science should instead prune (or at least allocate resources) based on a fair comparison of current competing explanations.

Science has a nostalgic bias.

Comment author: velisar 22 May 2012 08:27:00PM 1 point [-]

The science world, as much as the rest of the "worlds" comprised of people who share something that everybody cherishes, has to have the status quo bias. (The enigmatic add-on: one cannot escape the feeling that there is such a thing as time.)

Comment author: velisar 24 March 2012 11:31:18AM 6 points [-]

Kahneman suggests such an exercise for groups, after pointing out that organizations generally act more rationally than individuals: the devil's advocate role, and thinking about the worst possible outcome. We don't always have the luxury of having others near us to check our thoughts. But we often have imaginary conversations with friends or parents, so it shouldn't be very difficult to assign the devil's advocate position to an imaginary voice. That should put in perspective the way we feel about the subject. It is a basic means of delaying the strong coherence of the first good narrative.

Maybe it would be great to have an imaginary Bayesian friend...
