So what would be your other examples of the halo effect?
I didn't say that I have other examples of the halo effect, but rather examples of other biases that can also be explained by properties of how the brain processes sensory inputs.
This is the flip side of the arguments I think you're alluding to, that the faulty thinking was actually beneficial in the EEA.
Yes. Some people I know observe fact X about human behaviour and then conclude that it must have been beneficial for survival, for otherwise evolution would have eradicated X.
I do think we're getting sidetracked by your halo effect example, though -- it might be useful to give three or four examples to avoid this (although if each one has a different explanation, that might substantially increase the effort of presenting your idea).
My original plan was to give several examples of biases with different explanations, but since this is my first attempt to do something productive on LW, I decided to write a short article and get some feedback first. So, thanks for your suggestions!
I think you are right that there don't have to be collisions (in practice) if the representation space is big enough and of sufficiently high dimension. On the other hand, there is a metric aspect to the way the brain maps its data which is not present in a hash code (as far as I know). This reduces the effective dimension of the brain's representations dramatically, and I would guess that it is nowhere near 128 (as in your hash example) for properties like 'good looking', 'honest', etc. It would be an interesting research project to find out.
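To make the dimension point concrete, here is a minimal sketch of the collision argument (the item names, item count, and bit widths are arbitrary choices of mine, not anything from the discussion): distinct inputs are mapped to the first `bits` bits of a hash, and we count how often two inputs end up with the same code. At 16 bits, collisions are essentially guaranteed for 10,000 items; at 128 bits, they are astronomically unlikely.

```python
import hashlib

def count_collisions(items, bits):
    """Map each item to a `bits`-bit code and count items whose code was already taken."""
    seen = set()
    collisions = 0
    for item in items:
        digest = hashlib.sha256(item.encode()).digest()
        # keep only the first `bits` bits of the hash as the "representation"
        code = int.from_bytes(digest, "big") >> (256 - bits)
        if code in seen:
            collisions += 1
        seen.add(code)
    return collisions

items = [f"concept-{i}" for i in range(10_000)]
low = count_collisions(items, 16)    # only 2**16 = 65,536 possible codes
high = count_collisions(items, 128)  # 2**128 possible codes
print(low, high)                     # many collisions at 16 bits, none at 128
```

The same birthday-paradox logic suggests that if the brain's effective dimension for these properties is small, distinct states being pushed onto similar representations is the expected outcome, not an anomaly.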
I think that the cultural aspect you mention might play a significant role. As I wrote in another comment, my goal here was not to give a full explanation of the halo effect... But I don't think that your 'beautiful women are stupid' example undermines the general idea: for those people, 'beauty' doesn't seem to be a 'positive' concept, so we wouldn't expect it to correlate with intelligence. But I am not defending the halo effect anyway; I chose it as an example to highlight the main idea, and I might as well have chosen another bias.
I am still reading through the older posts on LW and haven't come across CDT or TDT yet (or haven't recognized them), but when I do, I will reread your comment and will hopefully understand how the second part of it is connected to the first...
Thanks for the feedback!
The reference to linear algebra is only meant to show that there have to be states which are mapped to similar representations, even if we don't know a priori which ones will be correlated.
But if we now look closer at the structure of the brain as a neural network and the learning mechanisms involved, then I think that we could expect positive concepts to be correlated by cross activation, as you explained.
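The linear-algebra point can be sketched in a few lines (the dimensions and the random encoding matrix below are illustrative assumptions of mine): any linear map from a higher-dimensional state space to a lower-dimensional representation space has a nontrivial null space, so some clearly distinct states must receive the same representation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 4                      # 10-dim "world states", 4-dim representation
W = rng.standard_normal((m, n))   # an arbitrary linear encoding map

# Since m < n, W has a nontrivial null space; any vector in it is
# invisible to the representation. The last rows of Vt span that space.
_, _, Vt = np.linalg.svd(W)
v = Vt[-1]

x = rng.standard_normal(n)        # some state
y = x + 5.0 * v                   # a clearly different state

# Both states get (numerically) the same representation.
print(np.allclose(W @ x, W @ y))  # → True
```

This only establishes that conflation must happen somewhere; which states get conflated, and whether that lines up with 'positive' concepts, depends on the learning mechanism, as the cross-activation explanation suggests.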
The point of the article is not to come up with a perfect explanation for how the halo effect is actually caused, but to show that there doesn't have to be an evolutionary reason for it to evolve, besides the 'obvious' one that pwno mentions in his comment.
And letters are nothing more than ink. How can consciousness arise from mere neurons? The same way that the meaning of a text can arise from mere letters.
I am not sure if this is a good analogy. The meaning of text is usually not hidden somewhere in the letters. Most of it is in the brain of the writer/reader. (But I agree that some meaning can be read out from a text without much previous knowledge.)
While I agree with your comment, I have an observation to make. While driving a car, I found it quite useful to consider the car as an extended part of my body. The same is true for spoons, knives and forks while eating.
Hi,
I had been thinking quite a lot on my own about topics like
and after discovering LW some days ago, I have tried to compare my "results" to the posts here. It was interesting to see that many of the ideas I had were also "discovered" by other people, but I was also a little proud that I had gotten this far on my own. This is probably the right place for me to start reading :-).
I am an atheist, of course, but cannot claim many other standard labels as mine. Probably "a human being with a desire to understand as much of the universe as possible" is a good approximation. I like learning and teaching, which is why I am interested in artificial intelligence. I am surrounded by people with strange beliefs, which is why I am interested in methods for teaching someone to question his/her beliefs. And while doing so, I might discover one or another wrong assumption in my own thinking.
I hope to have a good time here, and perhaps I can contribute something in the future...
I am not sure whether we actually have a disagreement here. Spaced repetition is a special facet of the idea I outline in this article, and I am currently experimenting with it precisely in order to test my "theory" above.