majus · 30

The quote on conflict reminds me of Jaak Panksepp's "Affective Neuroscience: The Foundations of Human and Animal Emotions", or of the refracted view of it presented in John Gottman's book, "The Relationship Cure". Panksepp identifies seven mammalian emotional command systems he names FEAR, SEEKING, RAGE, LUST, CARE, PANIC/GRIEF, and PLAY; Gottman characterizes these systems as competing cognitive modules: the Commander-in-Chief, the Explorer, the Sentry, the Energy Czar, the Sensualist, the Jester, and the Nest-Builder. It is tempting now to think of them as very high-level controllers in the hierarchy.

majus · 70

Why is "be specific" a hard skill to teach?

I think it is because being specific is not really the problem, and by labeling it as such we force ourselves into a dead end that does not contain a solution to the real problem. The real problem is achieving communication. By 'achieving communication', I mean that concepts in one mind are reproduced with good fidelity in another. By good fidelity, I mean that 90% (an arbitrary threshold) of assertions based on my model will be confirmed as true by yours: if you confirm 18 of my 20 assertions, we are right at the threshold.

There are many different ways that the fidelity can be low between my model and yours:

  • specific vs abstract

  • mismatched entity-relationship semantic models

  • ambiguous words

  • vague concepts

Surely there are many more.

Examples of what I mean by these few:

  • specific vs abstract: dog vs toy chihuahua puppy

  • model mismatch: A contract lawyer, a reservoir modeler, and a mud-logger are trying to share the concept "well". Their models of what a "well" is have some attributes with similar names, but different meanings and uses, like "name" or "location". To the mud-logger, a well is a stack of physical measurements of the drilling mud sampled at different drilling depths. To the lawyer, a well is a feature of land-use contracts, service contracts, etc.

Another kind of model mismatch: I think of two entities as having a "has-a" relationship: a house "has" 0 or 1 (detached) garages. But you think of the same two entities using a mixin pattern: a house can have or not have garage attributes (a built-in garage). "I put my car in the house" makes no sense to me, because a car goes in a garage but not in a house, but it might make sense to you for a house with a built-in garage (see the sketch after this list). We may go a long time before figuring out that my "house" isn't precisely the same as yours.

  • ambiguity: I'm a cowboy, you're an artist. We are trying to share the concept "draw". We can't, because the same word names two different concepts.

  • vagueness: I say my decision theory "one-boxes". You have no idea what that means, but you create a place-holder for it in your model. So on some level you feel like you understand, but if you drill down, you can get to a point where something important is not defined well enough to use.
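
To make the house/garage mismatch concrete, here is a minimal Python sketch; all class and field names are hypothetical, invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

# My model: "has-a" composition -- the garage is a separate, detached entity.
@dataclass
class Garage:
    capacity: int

@dataclass
class HouseHasA:
    garage: Optional[Garage] = None  # a house has 0 or 1 garages

# Your model: mixin-style -- garage attributes live on the house itself.
@dataclass
class HouseMixin:
    has_garage: bool = False
    garage_capacity: int = 0  # only meaningful when has_garage is True

# "I put my car in the house" is incoherent under HouseHasA (cars go in the
# separate Garage object, which may not exist), but sensible under HouseMixin
# (the house itself carries the garage attributes).
```

Neither schema is wrong in itself; the fidelity failure is that each of us silently assumes the other is using ours.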

It is difficult to know when something that is transparent to you is being misrepresented in my head based on how you explain it to me. "I know you think you understand what you thought I said, but I'm not sure you're aware that what I said was not what I meant."

I suggest an exercise/game to train someone to detect and avoid these pitfalls: combine malicious misunderstanding (you tell me to stick the pencil in the sharpener and I insert the eraser end) and fidelity checking.

  1. You make an assertion about your model.

  2. I generate a challenge that is in logical agreement with your assertions, but which I expect will fail to match your actual model. If I succeed, I get a point.

Repeat until I am unable to create a successful challenge.

The longer it takes you to create an airtight set of assertions, the more points I get.

Then we switch roles.

So I am looking for all the ways your model might be ill-defined, and all the ways your description might be ambiguous or overly abstract. You are trying to seal all of those gaps as parsimoniously as possible.

I've left the hardest part for last: the players need to be supplied with a metaphoric tinkertoy set of model parts. The parts need to support all of the kinds of fidelity failure we can think of. And the set should be extensible, for when we think of more.
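
To make the scoring mechanics concrete, here is a toy Python sketch of one round; the callables add_assertion, make_challenge, and matches_model are hypothetical stand-ins for the two players and the asserter's private model, not part of the exercise as described:

```python
def play_round(add_assertion, make_challenge, matches_model, max_turns=20):
    """One round: the asserter publishes a model, the challenger probes it."""
    assertions = []            # the asserter's public description so far
    challenger_points = 0
    for _ in range(max_turns):
        assertions.append(add_assertion(assertions))  # step 1: assert
        challenge = make_challenge(assertions)        # step 2: a challenge
        if challenge is None:                         # consistent with all
            break                                     # assertions, or concede
        if not matches_model(challenge):              # consistent yet wrong:
            challenger_points += 1                    # a gap was found
    # The asserter's goal is to drive points to zero in as few turns as
    # possible; every turn the description stays leaky feeds the challenger.
    return challenger_points, len(assertions)
```

The point of mechanizing it is that the scoring pressure forces the asserter to confront exactly the failure modes listed above.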

majus · 50

In his book "How the Mind Works", Pinker asks the same question. His observation (as I recall) was that much of our apparently abstract reasoning is accomplished by mapping abstractions like math onto evolved subsystems that served different survival purposes in our ancestors: pattern recognition, 3D spatial visualization, etc. He suggests that some problems seem intractable because they don't map cleanly onto any of those subsystems.

majus · 100

I like the pithy description of halo bias. I don't like or agree with Mencken's non-nuanced view of idealists. It's sarcastically funny, like "a liberal is one who believes you can pick up a dog turd by the clean end", but being funny doesn't make it more true.

majus · 20

I'm actively interested in optimizing my health, and I take a number of supplements to that end. The survey would seem most interesting if its goal were to find out how to optimize your health via supplements. As it turns out, none of the ones I take qualify as "minerals". If it turns out in fact that taking Vitamin XYZ is the single best thing you can do to tweak your diet, then this survey's conclusions, whatever they turn out to be (e.g. that calcium is better than selenium), will be misleading. Maybe that's the next survey.

FYI, I'm taking: vitamin C, green tea extract, acetyl-L-carnitine, vitamin D3, fish oil, ubiquinol, and alpha-lipoic acid. I've stopped taking vitamin E and aspirin.

majus · 20

The discussions about signalling reminded me of something in "A Guide to the Good Life" (a book about Stoicism by William Irvine). I remembered a philosopher who wore shabby clothes, but when I went looking for the quote, what I found was: "Cato consciously did things to trigger the disdain of other people simply so he could practice ignoring their disdain." In Stoicism, the utility to be maximized is a personal serenity that flows from your secure knowledge that you are spending your life pursuing something genuinely valuable.

majus · 30

I am trying to be more empathetic with someone, and am having trouble understanding her behavior. She practices what I'll call the "stubborn fundamental attribution error": anyone who does not in fact behave as expected (as she imagines she would behave in their place) is harshly judged (neurotic, stupid, lazy, etc.). Any attempt to help her put herself in another's shoes is implacably resisted. Any explanation that might dispel the harsh judgement is dismissed as a "justification". One related example is what I'll call "metaphor blindness": a metaphor that I expect would clarify the issue, the starkest example of which is a reductio ad absurdum, is rejected out of hand as being "not the same" or "not relevant". In abstract terms, my toolkit for achieving consensus or exploring issues rationally has been rendered useless.

Two questions: does my concept of "metaphor blindness" seem reasonable? And...how can I be more empathetic in this case? I'm being judgemental of her, by my own admission. What am I not seeing?

majus · 40

reminds me of:

"I know that you believe you understand what you think I said, but I'm not sure you realize that what you heard is not what I meant." --Robert McCloskey

majus · 10

This seems to be an argument about definitions. To me, Friedman's "average out" means that only action in a consistent direction produces a measurable change, e.g. significant numbers of individuals all investing in gold. So, given some agents acting in random directions mixed with other agents acting in the same (rational) direction, you can safely ignore the random ones. (He argued.) I don't think he meant to imply that in the aggregate people are rational. But even in the simplified problem-space in which it appears to make sense, Friedman's basic conclusion, that markets are rational (or "efficient"), has been largely abandoned since the mid-1980s. Reality is more complex.
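
One way to make the "ignore the random ones" step concrete (a toy simulation reflecting my reading, not anything from Friedman): the net effect of N independent random traders scales like √N, while a bloc acting in one direction scales like N, so even a small bloc dominates the aggregate.

```python
import random

random.seed(0)
N = 10_000
# N traders each buying (+1) or selling (-1) at random: net ~ +/- sqrt(N) ~ 100
random_flow = sum(random.choice((-1, 1)) for _ in range(N))
# a mere 5% of traders all buying: net = 0.05 * N = 500
directional_flow = 0.05 * N
print(random_flow, directional_flow)
```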

majus · 280

I have two major comments. First, I took the Scientology Communications class 35 years ago in Boston, and it was basically the same as what has just been described. That's impressive, in a creepy kind of way.

Second, my strongest take-away from the class I took was in response to something NOT mentioned above, so this aspect may have changed. We were given a small book, something like "The History of Scientology". (This is not the huge "Dianetics" book.) We were told to read it on our own, until we understood it, and would move on to the later activities in the class only after attesting that we had done so. The book was loaded with very vague terms, imprecise at best, contrary to familiar usage at worst, but we were not allowed to discuss their meaning with anyone else, or ask instructors for insight. We had to construct a self-consistent interpretation in isolation, and comparing our own with anyone else's was effectively forbidden in perpetuity. So each student auto-brainwashed. I was impressed by the power of this technique.
