Introduction
When people on LW want to explain a bias, they often turn to evolutionary psychology. For example, Lukeprog writes:
Human reasoning is subject to a long list of biases. Why did we evolve such faulty thinking processes? Aren't false beliefs bad for survival and reproduction?
I think that 'evolved faulty thinking processes' is the wrong way to look at it, and I will argue that some biases are a consequence of structural properties of the brain, which cannot be affected by evolution.
Brain structure and the halo effect
I want to introduce a simple model, which relates the halo effect to a structural property of the brain. My hope is that this approach will be useful for understanding the halo effect more systematically, and that it shows that thinking in evolutionary terms is not always the best way to think about certain biases.
One crucial property of the brain is that it has to map an (essentially infinite) high-dimensional reality onto a finite, low-dimensional internal representation. (If you know some linear algebra, you can think of this as a projection from a high-dimensional space onto a low-dimensional space.) This happens more or less automatically, through the limitations of our senses and the brain's structure as a neural network.
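For readers who want to see this concretely, here is a minimal sketch in Python/NumPy. The specific dimensions (1000 and 20) and the random projection matrix are arbitrary illustrative choices, not a model of actual neural coding; the point is only that any map into fewer dimensions is necessarily blind to many differences in its input.

```python
import numpy as np

rng = np.random.default_rng(0)

HIGH_DIM = 1000   # stand-in for the (essentially infinite) dimensionality of reality
LOW_DIM = 20      # stand-in for the much smaller internal representation

# A fixed linear projection: every world state gets squeezed down to LOW_DIM numbers.
W = rng.normal(size=(LOW_DIM, HIGH_DIM))

def represent(world_state):
    """Map a high-dimensional world state to its low-dimensional inner representation."""
    return W @ world_state

# Because LOW_DIM < HIGH_DIM, the projection has a large null space: directions in
# reality that the representation is simply blind to. Take one such direction from
# the SVD of W (its trailing right-singular vectors span the null space).
_, _, Vt = np.linalg.svd(W)
blind_direction = Vt[-1]                # W @ blind_direction is numerically zero

x = rng.normal(size=HIGH_DIM)           # one state of the world
x_changed = x + 5.0 * blind_direction   # a genuinely different state of the world

# Both states collapse onto the same inner representation.
print(np.allclose(represent(x), represent(x_changed)))   # True
```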
An immediate consequence of this observation is that many different states of the world will be mapped to an almost identical inner representation. In terms of computational efficiency it also makes sense to use overlapping sets of neurons with similar activation levels to represent similar concepts. (This is also a consequence of how the brain actually builds representations from sense inputs.)
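Continuing the sketch, here is a toy illustration of the overlap point. Again everything is assumed purely for illustration: a 'concept' is just an input vector, and a 'neuron' counts as active when its projected activation is positive.

```python
import numpy as np

rng = np.random.default_rng(1)
HIGH_DIM, LOW_DIM = 1000, 200
W = rng.normal(size=(LOW_DIM, HIGH_DIM))   # the same kind of projection as above

def active_neurons(concept_vector):
    """The set of 'neurons' whose activation is positive for this input."""
    activation = W @ concept_vector
    return set(np.flatnonzero(activation > 0))

concept = rng.normal(size=HIGH_DIM)
similar_concept = concept + 0.1 * rng.normal(size=HIGH_DIM)   # a nearby concept
unrelated_concept = rng.normal(size=HIGH_DIM)                 # an unrelated one

a, b, c = map(active_neurons, (concept, similar_concept, unrelated_concept))

def overlap(s, t):
    """Jaccard overlap between two sets of active neurons."""
    return len(s & t) / len(s | t)

print(f"overlap with similar concept:   {overlap(a, b):.2f}")   # close to 1
print(f"overlap with unrelated concept: {overlap(a, c):.2f}")   # around 1/3
```

Similar concepts end up sharing most of their active neurons, while unrelated concepts share only the overlap you would expect by chance.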
Now compare this to the following passage from here:
The halo effect is that perceptions of all positive traits are correlated. Profiles rated higher on scales of attractiveness are also rated higher on scales of talent, kindness, honesty, and intelligence.
This shouldn't be a surprise, since 'positive' ('feels good') seems to be one of the evolutionarily hard-wired concepts. Other concepts that we acquire during our lives and associate with positive emotions, like kindness and honesty, are mapped to 'nearby' neural structures. When one of those mental structures is activated, the 'close' ones will be activated to a certain degree as well.
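Here is a small toy model of that spreading activation. The shared 'positive' direction, the particular concept vectors, and the dot-product notion of 'spread' are all assumptions made for illustration, not claims about real neural geometry.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM = 50

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

# The hard-wired 'positive / feels good' direction in representation space.
positive = unit(rng.normal(size=DIM))
# Some other, unrelated direction.
neutral = unit(rng.normal(size=DIM))

def concept_near(direction, spread=0.4):
    """A concept vector: mostly the given direction, plus some individual detail."""
    return unit(direction + spread * unit(rng.normal(size=DIM)))

# Positive traits acquired during life end up near the 'positive' direction,
# while an unrelated concept sits somewhere else entirely.
concepts = {
    "attractive":     concept_near(positive),
    "kind":           concept_near(positive),
    "honest":         concept_near(positive),
    "intelligent":    concept_near(positive),
    "owns a bicycle": concept_near(neutral),
}

def spread_activation(activated_vector, all_concepts):
    """Activation leaks to other concepts in proportion to representational similarity."""
    return {name: float(np.dot(activated_vector, vec)) for name, vec in all_concepts.items()}

# Activating 'attractive' partially activates the other positive traits as well,
# but leaves the unrelated concept close to zero.
for name, activation in spread_activation(concepts["attractive"], concepts).items():
    print(f"{name:15s} {activation:+.2f}")
```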
Since we differentiate concepts more finely as we learn about a subject, the above reasoning implies that children, and people with less education in a certain area, should be more influenced by this (generalized) halo effect in that area.
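This prediction can at least be checked qualitatively in a toy simulation. Everything in it is an assumption made for illustration: the 'true' traits are generated independently, 'learning more about an area' is modelled simply as having a higher-dimensional representation, and ratings are linear read-outs from that representation.

```python
import numpy as np

rng = np.random.default_rng(3)
N_PEOPLE, N_TRAITS = 5000, 4       # the true traits are generated independently on purpose

def mean_rating_correlation(rep_dim):
    """Encode independent traits into a rep_dim-dimensional representation, decode trait
    ratings back out, and return the mean |correlation| between ratings of different
    traits (0 would mean no halo-style entanglement at all)."""
    true_traits = rng.normal(size=(N_PEOPLE, N_TRAITS))
    encoder = rng.normal(size=(N_TRAITS, rep_dim))    # traits -> internal representation
    representation = true_traits @ encoder
    decoder = np.linalg.pinv(encoder)                 # best linear read-out of the traits
    ratings = representation @ decoder                # the ratings the low-dim rep supports
    corr = np.corrcoef(ratings, rowvar=False)
    off_diagonal = corr[~np.eye(N_TRAITS, dtype=bool)]
    return np.mean(np.abs(off_diagonal))

for d in (1, 2, 3, 4):
    print(f"representation dimension {d}: mean |correlation| = {mean_rating_correlation(d):.2f}")
```

With a one-dimensional representation every rating is forced onto the same axis, so ratings of genuinely independent traits come out perfectly correlated; as the representation gains dimensions, the spurious correlation shrinks towards zero.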
Conclusion
Since evolution can only modify the existing brain structure but cannot get away from the neural network 'design', the halo effect is a necessary by-product of human thinking. But the degree of 'throwing things in one pot' will depend on how much we learn about those things and thereby increase the dimensionality of our representations.
My hope is that we can relieve evolution of the burden of having to explain so many things, and focus more on structural explanations, which provide a working model for possible applications and a better understanding.
PS: I am always grateful for feedback!
This is perhaps an example of when understanding a formal cause, in this case logical truths about certain machine learning architectures, is more enlightening than understanding an efficient cause, in this case contingent facts about evolutionary dynamics. It is generally the case that formal-causal explanations are more enlightening than efficient-causal explanations, but efficient-causal explanations are generally easier to discover, which is why the sciences are so specialized for understanding efficient causes. There are sometimes trends towards a more form-oriented approach, e.g. cybernetics, complexity sciences, aspects of evo-devo, and so on, but they're always on the edge of what is possible with traditional scientific methods and thus their particular findings are unfortunately often afflicted with an aura of unrigor.
Of note is that the only difference between "causal" decision theory and "timeless" decision theory is that the latter's description emphasizes the taking-into-account of formal causes, which is only implicit in any technically-well-founded causal decision theory and is for some unfathomable reason completely ignored by academic decision theorists. (If you get down to the level of an actually formalized decision theory then you're working with Markovian causality, where as far as I can discern CDT and TDT are no different.)
I am still reading through the older posts on LW and haven't seen CDT or TDT yet (or haven't recognized it), but when I do, I will reread your comment and will hopefully understand how the second part of the comment is connected to the first...