Comment author: iDante 28 June 2013 03:30:20PM 1 point [-]

No, it doesn't. While the current structure of mathematics curricula might not be ideal, the solution won't be found by the means outlined in this post.

It is clear that spaced repetition makes learning material much easier. Start there.

Comment author: saph 28 June 2013 05:23:12PM 0 points [-]

I am not sure whether we actually have a disagreement here. Spaced repetition is one special facet of the idea that I outline in this article, and I am currently experimenting with it precisely in order to test my "theory" above.

Comment author: Dmytry 21 February 2012 05:28:07PM *  0 points [-]

Well, is beauty a positive quality for men who believe prettier women are stupider? One needs to be careful not to start redefining "positive qualities" as those that correlate positively with each other.

So what would be your other examples of the halo effect? The USA tends to elect taller people as presidents, yet I don't think many people have trouble with the concept that extreme tallness correlates negatively with health. I can't really think of many halo effects, apart from related effects: e.g. if you pick someone based on one quality, you rationalize their other qualities as good; or, when portraying other people, you portray those you dislike as all-around negative and those you like as all-around positive (which will bias anyone relying on such portrayals to infer correlations).

I think the bigger issue arises when we prepare problems for effective reasoning. Every number should really be a statistical distribution over its possible values, but that is very unwieldy to compute, so we assign a definite number or a normal distribution. That is usually harmless but can result in gross error. Similarly, there is a whole spectrum of colours, but nearby colours get confused, and we impose an artificial gradation of colours into bins. That kind of thing.

Comment author: saph 21 February 2012 07:16:18PM 0 points [-]

> So what would be your other example of halo effect?

I didn't say that I have other examples of the halo effect, but rather examples of other biases which can also be explained by properties of how the brain processes sense input.

Comment author: Kenoubi 20 February 2012 04:30:22PM 2 points [-]

> I think that ''evolved faulty thinking processes'' is the wrong way to look at it and I will argue that some biases are the consequence of structural properties of the brain, which 'cannot' be affected by evolution.

The structure can be affected by evolution, it's just too hard (takes too many coordinated mutations) to get to a structure that actually works better. I think you recognize this by your use of scare quotes, but you would be better off stating it explicitly. This is the flip side of the arguments I think you're alluding to, that the faulty thinking was actually beneficial in the EEA.

There must be an evolutionary explanation for the properties of the brain, but that doesn't mean we need to actually figure out that evolutionary explanation to understand the current behavior. Just like there must be an explanation in terms of physics, but trying to analyze every particle will clearly get us nowhere.

In fact, if you can find an explanation of a phenomenon in terms of current brain structure, I think that screens off evolutionary explanations as mere history (as long as you've really verified that the structure exists and explains the phenomenon).

I do think we're getting sidetracked by your halo effect example, though -- it might be useful to give three or four examples to avoid this (although if each one has a different explanation, that might substantially increase the effort of presenting your idea).

Comment author: saph 20 February 2012 06:46:55PM *  0 points [-]

> This is the flip side of the arguments I think you're alluding to, that the faulty thinking was actually beneficial in the EEA.

Yes. Some people I know observe fact X about human behaviour and then conclude that it must have been beneficial for survival, for otherwise evolution would have eradicated X.

> I do think we're getting sidetracked by your halo effect example, though -- it might be useful to give three or four examples to avoid this (although if each one has a different explanation, that might substantially increase the effort of presenting your idea).

My original plan was to give several examples of biases with different explanations, but since this is my first attempt to do something productive on LW, I decided to write a short article and get some feedback first. So, thanks for your suggestions!

Comment author: Dmytry 19 February 2012 10:18:09PM *  1 point [-]

The point is not that it necessarily happens; the point is that mapping a larger space to a smaller one does not, by itself, mean there will be [unwanted] collisions. The very same software could do lower-case string matching, which 'confuses' lower and upper case, using the hashes.
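A minimal sketch of what I mean (purely illustrative; the function name and choice of hash are made up for this example): the mapping from all casings of a string down to a single digest is a deliberate, useful collision, while collisions between unrelated strings remain astronomically unlikely in a 256-bit space.

```python
import hashlib

def case_insensitive_key(s: str) -> str:
    # Deliberately map the larger space (all casings of a string)
    # onto a smaller space: every casing gets the same digest.
    return hashlib.sha256(s.lower().encode("utf-8")).hexdigest()

# "Hello" and "HELLO" collide on purpose...
assert case_insensitive_key("Hello") == case_insensitive_key("HELLO")
# ...while unrelated strings (almost certainly) do not.
assert case_insensitive_key("pretty") != case_insensitive_key("smart")
```

So a many-to-one mapping can confuse exactly the inputs it was designed to confuse, and essentially nothing else.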

As for the collisions between multiple good qualities: that does not even happen for every person on earth in the way outlined in the article. There are definitely people who think that e.g. pretty people must be stupid, which is, by the way, more wrong than thinking pretty people are smarter (due to health's effect on both intelligence and looks). It could well be that people are falling for some sort of just-world fallacy in one way or another, rather than literally mixing up good looks and intelligence.

edit: and note all the cultural priming. The heroes are smart, nice, handsome, brave, et cetera. The villains are bad on all counts. We are constantly watching biased data, and perhaps we infer some correlation from it. I think, though, that there are people who believe good looks correlate with stupidity; that is the default assumption about women.

Comment author: saph 20 February 2012 10:42:02AM *  1 point [-]

I think you are right that there don't have to be collisions (in practice) if the representation space is big enough and of sufficiently high dimension. On the other hand, there is a metric aspect to the way the brain maps its data which is not present in hash codes (as far as I know). This reduces the effective dimension of the brain's representations dramatically, and I would guess that it is nowhere near 128 (as in your hash example) for properties like 'good looking', 'honest', etc. It would be an interesting research project to find out.
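A quick sketch of why the effective dimension matters (illustrative only; the sample counts are arbitrary): random directions in a low-dimensional metric space overlap substantially on average, while in a high-dimensional space they are nearly orthogonal, so a low effective dimension forces unrelated concepts to look "correlated".

```python
import math
import random

def mean_abs_cosine(dim: int, pairs: int = 2000, seed: int = 0) -> float:
    # Average |cosine similarity| between random direction pairs in R^dim.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(pairs):
        u = [rng.gauss(0, 1) for _ in range(dim)]
        v = [rng.gauss(0, 1) for _ in range(dim)]
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        total += abs(dot / norm)
    return total / pairs

# Random concepts overlap far more in a low-dimensional space
# (roughly 1/sqrt(dim) on average) than in a high-dimensional one.
assert mean_abs_cosine(3) > mean_abs_cosine(128)
```

If the brain's effective dimension for such properties is small, some spurious overlap between "positive" concepts is unavoidable.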

I think the cultural aspect you mention might play a significant role. As I wrote in another comment, my goal here was not to give a full explanation of the halo effect. But I don't think that your 'beautiful women are stupid' example undermines the general idea, since for those people 'beauty' doesn't seem to be a 'positive' concept, and we therefore wouldn't expect it to correlate with intelligence. But I am not defending the halo effect anyway; I chose it as an example to highlight the main idea, and I could just as well have chosen another bias.

Comment author: Will_Newsome 19 February 2012 01:19:08AM 1 point [-]

This is perhaps an example of when understanding a formal cause, in this case logical truths about certain machine learning architectures, is more enlightening than understanding an efficient cause, in this case contingent facts about evolutionary dynamics. It is generally the case that formal-causal explanations are more enlightening than efficient-causal explanations, but efficient-causal explanations are generally easier to discover, which is why the sciences are so specialized for understanding efficient causes. There are sometimes trends towards a more form-oriented approach, e.g. cybernetics, complexity sciences, aspects of evo-devo, and so on, but they're always on the edge of what is possible with traditional scientific methods and thus their particular findings are unfortunately often afflicted with an aura of unrigor.

Of note is that the only difference between "causal" decision theory and "timeless" decision theory is that the latter's description emphasizes the taking-into-account of formal causes, which is only implicit in any technically-well-founded causal decision theory and is for some unfathomable reason completely ignored by academic decision theorists. (If you get down to the level of an actually formalized decision theory then you're working with Markovian causality, where as far as I can discern CDT and TDT are no different.)

Comment author: saph 19 February 2012 08:38:08PM *  0 points [-]

I am still reading through the older posts on LW and haven't come across CDT or TDT yet (or haven't recognized them), but when I do, I will reread your comment and hopefully understand how its second part is connected to the first...

Comment author: asr 19 February 2012 06:27:02PM *  1 point [-]

I think that formulating this in terms of linear algebra is not always as illuminating as explaining it in terms of structure.

The way neural nets work, related concepts get wired together, and therefore cross-activate each other. To re-use your example, because we often activate various positive things alongside the more general notion of positiveness, you'd expect some coupling even between unrelated positive concepts.
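A toy Hebbian sketch of this cross-activation (all names and numbers here are made up for illustration): two concepts that each co-activate with a shared "positive" node, but never with each other, still end up cross-activating through that shared node.

```python
# Toy Hebbian wiring: concepts co-activated with a shared "positive"
# node become indirectly coupled, even if never activated together.
concepts = ["positive", "pretty", "honest", "tall_building"]
idx = {c: i for i, c in enumerate(concepts)}
n = len(concepts)
w = [[0.0] * n for _ in range(n)]  # connection weights

def hebbian_step(active, rate=0.1):
    # Strengthen connections between all concurrently active concepts.
    for a in active:
        for b in active:
            if a != b:
                w[idx[a]][idx[b]] += rate

# "pretty" and "honest" each co-occur with "positive", never with each other.
for _ in range(10):
    hebbian_step(["positive", "pretty"])
    hebbian_step(["positive", "honest"])

def activation(source, target):
    # Direct link plus one-step spread through intermediate nodes.
    direct = w[idx[source]][idx[target]]
    indirect = sum(w[idx[source]][k] * w[k][idx[target]] for k in range(n))
    return direct + indirect

# Activating "pretty" now weakly activates "honest" via "positive",
# but not the unrelated "tall_building".
assert activation("pretty", "honest") > activation("pretty", "tall_building")
```

The coupling here comes purely from the learning rule and the shared node, with no evolutionary story needed.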

Comment author: saph 19 February 2012 08:34:08PM 0 points [-]

Thanks for the feedback!

The reference to linear algebra is only meant to show that there have to be states which are mapped to similar representations, even if we don't know a priori which ones will be correlated.

But if we now look more closely at the structure of the brain as a neural network and the learning mechanisms involved, then I think we could expect positive concepts to become correlated by cross-activation, as you explained.

The point of the article is not to come up with a perfect explanation for how the halo effect is actually caused, but to show that there doesn't have to be an evolutionary reason for it to evolve, besides the 'obvious' one that pwno mentions in his comment.

Comment author: saph 12 October 2011 11:43:21AM 1 point [-]

> And letters are nothing more than ink. How can consciousness arise from mere neurons? The same way that the meaning of a text can arise from mere letters.

I am not sure this is a good analogy. The meaning of a text is usually not hidden somewhere in the letters; most of it is in the brain of the writer/reader. (But I agree that some meaning can be read out of a text without much previous knowledge.)

Comment author: Armok_GoB 16 July 2011 12:24:20PM 9 points [-]

If you're driving a large vehicle it is much heavier than your brain, does this mean identifying with the tiny lump of flesh rather than the tons and tons of steel is a terrible mistake?

Comment author: saph 19 July 2011 08:44:36AM 4 points [-]

While I agree with your comment, I have an observation to add: while driving a car, I have found it quite useful to consider the car an extended part of my body. The same is true for spoons, knives and forks while eating.

Comment author: saph 09 July 2011 02:45:38PM *  3 points [-]

Hi,

  • Handle: saph
  • Location: Germany (hope my English is not too bad for LW...)
  • Birth: 1983
  • Occupation: mathematician

I have been thinking quite a lot on my own about topics like

  • understanding and mind models
  • quantitative arguments
  • scientific method and experiments
  • etc...

and after discovering LW a few days ago I have tried to compare my "results" to the posts here. It was interesting to see that many of the ideas I have had were also "discovered" by other people, but I was also a little proud of having got so far on my own. This is probably the right place for me to start reading :-).

I am an atheist, of course, but cannot claim many other standard labels as mine. Probably "a human being with a desire to understand as much of the universe as possible" is a good approximation. I like learning and teaching, which is why I am interested in artificial intelligence. I am surrounded by people with strange beliefs, which is why I am interested in methods for teaching someone to question his or her beliefs. And while doing so, I might discover one or another wrong assumption in my own thinking.

I hope to spend some nice time here, and perhaps I can contribute something in the future...