pjeby comments on The Meditation on Curiosity - Less Wrong

36 points | Post author: Eliezer_Yudkowsky | 06 October 2007 12:26AM


Comment author: pjeby 03 February 2010 05:54:47AM * 2 points

I was quoting Steven Pinker, but my copy is an audiobook, so I can't give you the specific references to the study he mentions. A simple Google search brings up plenty of references. (Google gives popularised summaries; follow the links provided therein to find the actual research.)

Perhaps I'm missing something, but I don't see where it says we're all automatically afraid of snakes. I have seen research showing that monkeys have an inbuilt ability to learn to fear snakes, but the mechanism has to be switched on via learning, and my understanding is that humans are the same way... unless you are arguing that individual variation in fear of snakes is purely determined by genetics.

[Edit to add: one of the first papers you linked to includes this quote: "For studies of captive primates, King did not find consistent evidence of snake fear." And the second page goes on to describe the very "they have to learn to fear snakes" research that I previously spoke of.]

Given that contradictory reports are so freely available, and given your confidence in the model you are asserting, I would have expected you to have somewhat broader exposure to the relevant science.

I think perhaps we are miscommunicating: I do not deny that primate brains contain snake detectors. I do deny that said detectors are unaffected by learning: humans and monkeys can and do learn which snakes to fear, or not fear.

Skinner had a similar 'simple' theory. But he was wrong. Not wrong because the mechanisms he described weren't important parts of human psychology but wrong because he asserted them to the exclusion of all else.

We seem to be miscommunicating again. What mechanism is it that you think I am asserting "to the exclusion of all else"? The model I personally use contains several mechanisms, and the moral injunctions aspect I spoke of here is only one such mechanism. It is certainly not the only relevant mechanism in human behavior, even in the relatively narrow field of applicability where I use it.

People can be afraid of heights even if they didn't make a habit of falling off cliffs in their childhood.

I don't do classical phobia work, actually, so I wouldn't have a valid opinion on that one, one way or the other. ;-)

Nevertheless, I would not necessarily believe your report on how these anxieties came into being.

It's certainly true that, to reach scientific standards, I would need to find a way to double-blindly substitute a placebo version of childhood memories for the real thing, in order to prove that it's the modification of the memory that makes it work. (I have occasionally tested single-blind placebo substitutions on other things, but not this, as I have no idea what I could substitute.)

Mainly, what I do to test alternative hypotheses regarding a change technique is to see which parts of it I can remove without affecting the results. Whatever's left, I assume has some meaning. (Side note: most published descriptions of actually-working self-help techniques contain superfluous steps that, when removed, tend to make each technique sound like a mere minor variation on one of a handful of major themes... which I expect to correspond to mechanisms in the brain.)

In the instant discussion of moral injunctions, examining the memory of the learning or imprint experience appears to be indispensable, and therefore I conclude (hypothesize, if you prefer) that these memories are an integral part of the process of formation of moral injunction-regulated behavior.

I have a strong bias for you, PJ, in all but your tendency to be quite rigidly minded when it comes to forcing reality into your simple models.

FWIW, I do not claim universal applicability of my models outside their target domain. However, within that target domain, most discussions here tend to have only vaporous speculation weighing against many, many tests and observations. When someone proposes a speculative and more complex model than one I am already using, I want to see what their model can predict that mine cannot, or vice versa.

If you have a more parsimonious model for "belief in belief" than simple moral injunctions regarding spoken behavior, I'd love to see it. But since "belief in belief" cleanly falls out as a side effect of my model, I don't see a reason to go looking for a more complicated, special-purpose belief module, just because there could be one. Should I encounter a client who needs a belief-in-belief fixed, and find that my existing model can't fix it, then I will have reason to go looking for an updated model.

Now, when I do see a more parsimonious model here than one I'm already using, I adopt it wholeheartedly. For all that people seem to frame me as having brought PCT to Lesswrong.com, the reverse is actually true:

Less Wrong is where I heard about PCT in the first place!

And I adopted it because it fit very neatly into my existing model... it was as though my model was a graph with lots of edges, but no nodes, and PCT gave me a paradigm for what I should expect "nodes" to look like. (And incorporating it into my model also subsequently allowed me to discover a new kind of "edge" that I hadn't spotted previously.)

So actually, I don't consider PCT to be a comprehensive model in itself either, because it lacks the "edges" that my own model contains!

Which makes it a bit frustrating any time anyone acts as though I 1) brought PCT to LW, and 2) think it's a cure-all or even a remotely complete model of human behavior... it's just better than its competitors, such as the Skinnerian model you mentioned.

I allow myself to vocally reject the parts of your comments that I disagree with because that way I will not be dismissed as a fan boy when I speak in your defense.

Great. I would appreciate it, though, if you would not use boo lights like "mommy issues" and "PCT" (which sadly, seems to have become one around these parts), especially when the first is a denigratory caricature and the second not even relevant. (Moral injunctions are an "edge" in my own model, not a "node" from PCT.)

Comment deleted 03 February 2010 07:21:21AM
Comment author: pjeby 03 February 2010 04:43:30PM * 1 point

I do not agree that Occam suggests that fear of snakes, spiders and heights is the sole result of learned associations. I also do not agree that aversion to fundamental belief switching is purely the result of learning from trauma.

Of course not. I never claimed they were. I only make the claim that learning is an essential component of the moral injunction mechanism. You have to learn which beliefs not to switch, at the very least!

I've also described a variety of apparently built-in behaviors triggered by the mechanism: proselytizing, gossip, denouncing others, punishing non-punishers, feelings of guilt, etc. These are just as much built-in mechanisms as "snake detectors"... and monkeys appear to have some of them.

What I say is that, just like the snake detectors, these mechanisms require some sort of learning in order to be activated... and that evolutionarily, applying these mechanisms to behavior would be of primary importance; applying them to beliefs would have to come later, after language.

And at that point, it's far more parsimonious to assume evolution would reuse the same basic behavior-control mechanism, rather than implementing a new one specifically for "beliefs"... especially since, to the naive mind, "beliefs" are transparent. There's simply "how things are".

To an unsophisticated mind, someone who thinks things are different than "how things are" is obviously either crazy, or a member of an enemy tribe.

Not an "apostate".

Most of the behavior mechanisms involved are there for the establishment and maintenance of tribe behavioral norms, and were later memetically co-opted by religion. I quite doubt that religion, or anything we'd consider a "belief system" (i.e., a set of non-reality-linked beliefs used for signalling), was what the mechanism was meant for.

IOW, ISTM the support systems for reality-linked belief systems had to have evolved first.

This is not a claim of exclusivity of mechanism, so I don't really know where you're getting that from. I'm only saying that I don't see the necessity for an independent belief-in-belief system to evolve, when the conditions that make use of it would not have arrived until well after a "group identity behavioral norms control enforcement" system was already in place, and the parsimonious assumption is that non-reality-linked beliefs would be at most a minor modification to the existing system.

Comment author: wedrifid 03 February 2010 05:37:31PM 0 points

To an unsophisticated mind, someone who thinks things are different than "how things are" is obviously either crazy, or a member of an enemy tribe.

Not an "apostate".

No. I'm talking about apostasy. I'm not talking about someone who is crazy. I am not talking about a member of an enemy tribe. I am talking about someone from within the tribe who is, or is considering, changing their identifying beliefs to something that no longer matches the in-group belief system. This change in beliefs may be to facilitate joining a different tribe. It may be a risky play at power within the tribe. It may be to splinter off a new tribe from the current one.

Since we are talking in the context of religious beliefs the word apostate fits perfectly.

Comment author: pjeby 03 February 2010 06:02:36PM 0 points

I am talking about someone from within the tribe who is, or is considering, changing their identifying beliefs to something that no longer matches the in-group belief system. This change in beliefs may be to facilitate joining a different tribe. It may be a risky play at power within the tribe. It may be to splinter off a new tribe from the current one.

In order for any of those things to be advantageous (and thus need countermeasures), you first have to have tribes... which means you already need behavior-based signaling, not just non-reality-linked "belief" signaling.

So I still don't see why postulating an entirely new, separate mechanism is more parsimonious than assuming (at most) a mild adaptation of the old, existing mechanisms... especially since the output behaviors don't seem different in any important way.

Can you explain why you think a moral injunction of "Don't say or even think bad things about the Great Spirit" is fundamentally any different from "Don't say 'no', that's rude. Say 'jalaan' instead," or "Don't eat with your left hand, that's dirty?"

In particular, I'd like to know why you think these injunctions would need different mechanisms to carry out such behaviors as disgust at violators, talking up the injunction as an ideal to conceal one's desire for non-compliance, etc.

Comment author: wedrifid 03 February 2010 07:02:50PM * 0 points

If I were God I would totally refactor the code for humans and make it more DRY.

Comment author: pjeby 03 February 2010 07:29:50PM 0 points

If I were God I would totally refactor the code for humans and make it more DRY.

You seem to be confusing "simplicity of design" with "simplicity of implementation". Evolution finds solutions that are easily reached incrementally -- those which provide an advantage immediately, rather than requiring many interconnecting pieces to work. This makes reuse of existing machinery extremely common in evolution.

It is also improbable that any selection pressure for non-reality-based belief-system enforcement would exist, until some other sort of reality-based behavioral norms system existed first, within which pure belief signaling would then offer a further advantage.

Ergo, the path of least resistance for incremental implementation simplicity supports the direction I have proposed: first behavioral enforcement, then belief enforcement using the same machinery -- assuming there's actually any difference between the two.

I could be wrong, but it's improbable, unless you or someone else has some new information to add, or some new doubt to shed upon one of the steps in this reasoning.

Comment author: wedrifid 03 February 2010 07:49:56PM * 0 points

You seem to be confusing "simplicity of design" with "simplicity of implementation". Evolution finds solutions that are easily reached incrementally -- those which provide an advantage immediately, rather than requiring many interconnecting pieces to work. This makes reuse of existing machinery extremely common in evolution.

I'm not and I know.

I could be wrong, but it's improbable, unless you or someone else has some new information to add, or some new doubt to shed upon one of the steps in this reasoning.

Earlier in this conversation you made the claim:

Er, research please. Everything I've seen shows that even monkeys have to learn to fear snakes and spiders - it has to be triggered by observing other monkeys being afraid of them first.

This suggested that if "everything you have seen" didn't include the many contrary findings, then either you hadn't seen much or what you had seen was biased.

I really do not think new information will help us. Mostly because approximately 0 information is being successfully exchanged in this conversation.

Comment author: pjeby 03 February 2010 08:00:55PM 1 point

This suggested that if "everything you have seen" didn't include the many contrary findings then either you hadn't seen much or what you had seen was biased. I really do not think new information will help us.

I still don't see what "contrary" findings you're talking about, because the first paper you linked to explicitly references the part where monkeys that grow up in cages don't learn to fear snakes. Ergo, fear of snakes must be activated by learning, even though there appears to be machinery that biases learning in favor of associating aversion with snakes.

This supports the direction of my argument, because it shows how evolution doesn't create a whole new "aversive response to snakes" mechanism, when it can simply add a bias to the existing machinery for learning aversive stimuli.

In the same way, I do not object to the idea that we have machinery to bias learning in favor of mouthing the same beliefs as everyone else. I simply say it's not parsimonious to presume it's an entirely independent mechanism.

At this point, it seems to me that perhaps this discussion has consisted entirely of "violent agreement", i.e. both of us failing to notice that we are not actually disagreeing with each other in any significant way. I think you have overestimated what I'm claiming: that childhood learning is an essential piece of moral and other signaling behavior, not the entirety of it... and I, in turn, may have misunderstood you to be arguing that an independent inbuilt mechanism is the entirety of it.

When in fact, we are both saying that both learning and inbuilt mechanisms are involved.

So, perhaps we should just agree to agree, and move on? ;-)

Comment author: wedrifid 04 February 2010 02:49:34AM * 0 points

We differ in our beliefs about what evidence is available. I assert that it varies from 'a bias to learn to fear snakes' to 'snake-naive monkeys will even scream with terror and mob a hose if you throw it in with them'. This depends somewhat on which primates are the subject of the study.

It does seem, however, that our core positions are approximately compatible, which leaves us with a surprisingly pleasant conclusion.

Comment author: Cyan 03 February 2010 07:24:23PM * 4 points

In fairness, the "left hand" thing has to do with toilet hygiene pre-toilet-paper, so at one time it had actual health implications.

Comment author: pjeby 03 February 2010 07:33:55PM * 3 points

In fairness, the "left hand" thing has to do with toilet hygiene pre-toilet paper, so at one time it had actual health implications.

That's why I brought it up - it's in the category of "reality-based behavior norms enforcement", which has much greater initial selection pressure (or support) than non-reality-based behavior norms enforcement.

Animals without language are capable of behavioral norms enforcement, even learned norms enforcement. The parsimonious presumption is that religion-like beliefs evolved as a subset of speech-behavior norms enforcement, which is in turn a subset of general behavior norms enforcement.

[Edit: removed "enfrorcement" typo]

Comment author: Cyan 03 February 2010 07:47:20PM 0 points

I guess I was just pointing out that it seemed to be in a different category ("reality-based behavior norms enforcement" is as good a name as any) than the other examples.