Meetup : West LA: Availability Cascades and Risk Regulation

1 abramdemski 13 June 2015 08:09PM

Discussion article for the meetup : West LA: Availability Cascades and Risk Regulation

WHEN: 17 June 2015 07:00:00PM (-0700)

WHERE: 11066 Santa Monica Blvd, LA, CA

How to Find Us: Go into this Del Taco. We will be in the back room if possible.

Parking is free in the lot out front or on the street nearby.

Discussion: The availability heuristic allows us to estimate probabilities based on how available things are to our memory. This is not a terrible heuristic, but it is not perfect, either. Intuitive estimates of the frequencies of different crimes tend to correspond better to how often those crimes are reported in the news than to how often they actually occur. One particularly troubling aspect of this is the availability cascade, a chain reaction which elevates a group's probability estimate like a rising tide: each mention of a subject makes other people more likely to be concerned and mention it to others; each news report makes the subject more popular and makes other news sources more likely to mention it as well. This has the potential to alter public policy when policy-makers apply the availability heuristic themselves or respond to the probabilities as estimated by the larger group.

Recommended Reading:

No prior exposure to Less Wrong is required; this will be generally accessible.


Epistemic Trust: Clarification

18 abramdemski 13 June 2015 07:29PM

Cross-posted to my blog.


A while ago, I wrote about epistemic trust. The thrust of my argument was that rational argument is often more a function of the group dynamic than of how rational the individuals in the group are. I assigned meanings to several terms in order to explain this:

Intellectual honesty: being up-front not just about what you believe, but also why you believe it, what your motivations are in saying it, and the degree to which you have evidence for it.

Intellectual-Honesty Culture: The norm of intellectual honesty. Calling out mistakes and immediately admitting them; feeling comfortable with giving and receiving criticism.

Face Culture: Norms associated with lack of intellectual honesty. In particular, a need to save face when one's statements turn out to be incorrect or irrelevant; the need to make everyone feel included by praising contributions and excusing mistakes.

Intellectual trust: the expectation that others in the discussion have common intellectual goals; that criticism is an attempt to help, rather than an attack. The kind of trust required to take other people's comments at face value rather than being overly concerned with ulterior motives, especially ideological motives. I hypothesized that this is caused largely by ideological common ground, and that this is the main way of achieving intellectual-honesty culture.

There are several subtleties which I did not emphasize last time.

  • Sometimes it's necessary to play at face culture. The skills which go along with face-culture are important. It is generally a good idea to try to make everyone feel included and to praise contributions even if they turn out to be incorrect. It's important to make sure that you do not offend people with criticism. Many people feel that they are under attack when engaged in critical discussion. Wanting to work against this is not an excuse for ignoring it.
  • Face culture is not the error. Being unable to play the right culture at the right time is the error. In my personal experience, I've seen that some people are unable to give up face-culture habits in more academic settings where intellectual honesty is the norm. This causes great strife and heated arguments! There is no gain in playing for face when you're in the midst of an honesty culture, unless you can do it very well and subtly. You gain a lot more face by admitting your mistakes! On the other hand, there's no honor in playing for honesty when face-culture is dominant. This also tends to cause more trouble than it's worth.
  • It's a cultural thing, but it's not just a cultural thing. Some people have personalities much better suited to one culture or the other, while other people are able to switch freely between them. I expect that groups can switch further toward intellectual honesty as a result of establishing intellectual trust, but that is not the only factor. Try to estimate the preferences of the individuals you're dealing with (while keeping in mind that people may surprise you later on).

Simultaneous Overconfidence and Underconfidence

20 abramdemski 03 June 2015 09:04PM

Follow-up to this and this on my personal blog. Prep for this meetup. Cross-posted on my blog.

Eliezer talked about cognitive bias, statistical bias, and inductive bias in a series of posts, only the first of which made it directly into the LessWrong sequences as currently organized (unless I've missed them!). Inductive bias helps us leap to the right conclusion from the evidence, if it captures good prior assumptions. Statistical bias can be good or bad, depending in part on the bias-variance trade-off. Cognitive bias refers only to obstacles which prevent us from thinking well.

Unfortunately, as we shall see, psychologists can be quite inconsistent about how cognitive bias is defined. This created a paradox in the history of cognitive-bias research. One well-researched and highly experimentally validated effect was conservatism, the tendency to give estimates that are too middling, or probabilities too near 50%. This relates especially to the integration of information: when given evidence relating to a situation, people tend not to take it fully into account, as if they were stuck with their prior. Another highly validated effect was overconfidence, relating especially to calibration: when people give high subjective probabilities like 99%, they are wrong far more often than 1% of the time.

In real-life situations, these two findings conflict: there is no clean distinction between information-integration tasks and calibration tasks. A person's subjective probability is always, in some sense, the integration of the information they've been exposed to. In practice, then, when should we expect other people to be under- or over-confident?

Simultaneous Overconfidence and Underconfidence

The conflict was resolved in an excellent paper by Ido Erev et al., which showed that it's the result of how psychologists did their statistics. Essentially, one group of psychologists defined bias one way, and the other defined it another way. The results are not really contradictory; they are measuring different things. In fact, you can find underconfidence or overconfidence in the same data set by applying the different statistical techniques; it has little or nothing to do with the differences between information-integration tasks and probability-calibration tasks. Here's my rough drawing of the phenomenon (apologies for my hand-drawn illustrations): 


Overconfidence here refers to probabilities which are more extreme than they should be, illustrated as being further from 50%. (This baseline makes sense when choosing between two options, but won't always be the right baseline to think about.) Underconfident subjective probabilities are associated with more extreme objective probabilities, which is why the underconfidence line tilts up in the figure. The overconfidence line similarly tilts down, indicating that the subjective probabilities are associated with less extreme objective probabilities. Unfortunately, if you don't know how the lines are computed, this means less than you might think. Erev et al. show that these two regression lines can be derived from just one data set. I found the paper easy and fun to read, but I'll explain the phenomenon in a different way here, by relating it to the concept of statistical bias and the tails coming apart.

The Tails Come Apart

Everyone who has read Why the Tails Come Apart will likely recognize this image:

 

The idea is that even if X and Y are highly correlated, the most extreme X values and the most extreme Y values will differ. I've labelled the difference the "curse", after the optimizer's curse: if you optimize a criterion X which is merely correlated with the thing Y you actually want, you can expect to be disappointed.
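To make this concrete, here is a small simulation sketch (my own illustration, not part of the original post; the 0.8 correlation and the sample sizes are arbitrary choices). It draws correlated X and Y, then checks how often the point with the highest X is also the point with the highest Y, and how much Y is given up by selecting on X:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8               # correlation between X (what we optimize) and Y (what we want)
n, trials = 1000, 2000  # points per draw, number of repeated draws

same_argmax = 0
shortfall = []
for _ in range(trials):
    x = rng.standard_normal(n)
    # y shares variance with x, plus independent noise, so corr(x, y) = rho
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    best_x = np.argmax(x)                  # the point that looks best on X
    same_argmax += best_x == np.argmax(y)  # is it also the best on Y?
    shortfall.append(y.max() - y[best_x])  # the "curse": Y lost by selecting on X

print("P(top X is also top Y):", same_argmax / trials)
print("average Y shortfall when selecting on X:", np.mean(shortfall))
```

Even with a fairly strong correlation, the best-looking X is rarely the true best Y, and selecting on X reliably gives up some Y.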

 

Applying the idea to calibration, we can say that the most extreme subjective beliefs are almost certainly not the most extreme on the objective scale. That is: a person's most confident beliefs are almost certainly overconfident. A belief is not likely to have worked its way up to the highest peak of confidence by merit alone. It's far more likely that some merit, but also some error in reasoning, combined to yield high confidence. This sounds like the calibration literature, which found that people are generally overconfident. What about underconfidence? By a symmetric argument, the points with the most extreme objective probabilities are not likely to be the same as those with the highest subjective belief; errors in our thinking are much more likely to make us underconfident than overconfident in those cases.


This argument tells us about extreme points, but not about the overall distribution. So, how does this explain simultaneous overconfidence and underconfidence? To understand that, we need to understand the statistics which psychologists used. We'll use averages rather than maximums, leading to a "soft version" which shows the tails coming apart gradually, rather than only at extreme ends.

Statistical Bias

Statistical bias is defined through the notion of an estimator. We have some quantity we want to know, X, and we use an estimator to guess what it might be. The estimator will be some calculation which gives us our estimate, which I will write as X^. An estimator is derived from noisy information, such as a sample drawn at random from a larger population. The difference between the estimator and the true value, X^-X, would ideally be zero; however, this is unrealistic. We expect estimators to have error, but systematic error is referred to as bias.

Given a particular value of X, the bias is defined as the expected value of X^-X for that fixed X, written E_X(X^-X). An unbiased estimator is an estimator such that E_X(X^-X) = 0 for any value of X we choose.

Due to the bias-variance trade-off, unbiased estimators are not the best way to minimize error in general. However, statisticians still love unbiased estimators. It's a nice property to have, and in situations where it works, it has a more objective feel than estimators which use bias to further reduce error.

Notice, the definition of bias is taking fixed X; that is, it's fixing the quantity which we don't know. Given a fixed X, the unbiased estimator's average value will equal X. This is a picture of bias which can only be evaluated "from the outside"; that is, from a perspective in which we can fix the unknown X.
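As a concrete sketch of this outside-view picture (my own example, not from the post; the shrinkage factor 0.7 and the other numbers are arbitrary), we can fix the unknown X, repeatedly draw noisy samples, and check that the sample mean's average error is near zero, while a deliberately biased estimator that pulls toward zero can nonetheless achieve lower mean squared error, as the bias-variance trade-off suggests:

```python
import numpy as np

rng = np.random.default_rng(1)
X = 0.3                 # the unknown quantity, held fixed "from the outside"
n, trials = 5, 200_000  # small noisy samples, many repetitions

samples = X + rng.standard_normal((trials, n))  # observations = X plus unit-variance noise
unbiased = samples.mean(axis=1)                 # sample mean: an unbiased estimator of X
shrunk = 0.7 * unbiased                         # deliberately pulled toward 0: a biased estimator

print("E[X^ - X], sample mean:", (unbiased - X).mean())   # roughly 0: unbiased
print("E[X^ - X], shrunk:     ", (shrunk - X).mean())     # roughly -0.09: biased
print("MSE, sample mean:", ((unbiased - X) ** 2).mean())  # roughly 0.20
print("MSE, shrunk:     ", ((shrunk - X) ** 2).mean())    # roughly 0.11: less error despite bias
```

The shrunk estimator only wins here because the true X happens to sit near the shrink target; for an X far from zero it would lose, which is part of why unbiasedness feels like the more objective property.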

A more inside-view approach to statistical estimation is to consider a fixed body of evidence, and make the estimator equal the average value of the unknown given that evidence. This is exactly the inverse of unbiased estimation:

 

In the image, we want to estimate unknown Y from observed X. The two variables are correlated, just like in the earlier "tails come apart" scenario. The average-Y estimator tilts down because good estimates tend to be conservative: because I only have partial information about Y, I want to take into account what I see from X but also pull toward the average value of Y to be safe. On the other hand, unbiased estimators tend to be overconfident: the effect of X is exaggerated. For a fixed Y, the average Y^ is supposed to equal Y. However, for fixed Y, the X we will get will lean toward the mean X (just as for a fixed X, we observed that the average Y leans toward the mean Y). Therefore, in order for Y^ to be high enough, it needs to pull up sharply: middling values of X need to give more extreme Y^ estimates.
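Here is a rough sketch of the two estimators side by side (again my own illustration; it assumes X and Y are standardized, jointly Gaussian, and correlated at 0.6). The conservative line is the ordinary regression of Y on X; the "unbiased" line is obtained by inverting the regression of X on Y, which makes its average prediction equal Y for each fixed Y:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, n = 0.6, 200_000

# Standardized, correlated X (observed) and Y (unknown)
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

cov_xy = np.cov(x, y)[0, 1]
slope_conservative = cov_xy / np.var(x)  # regression of Y on X: about rho
slope_unbiased = np.var(y) / cov_xy      # inverse of the X-on-Y regression: about 1/rho
print("conservative slope:", slope_conservative, " unbiased slope:", slope_unbiased)

# Check the defining property on a slice where Y is (approximately) fixed and high:
band = (y > 1.4) & (y < 1.6)
print("mean Y in the band:           ", y[band].mean())
print("average conservative estimate:", (slope_conservative * x[band]).mean())  # pulled toward 0
print("average unbiased estimate:    ", (slope_unbiased * x[band]).mean())      # close to 1.5
```

On the slice where Y is fixed near 1.5, the conservative estimator averages well below 1.5 (it pulls toward the mean), while the unbiased estimator averages roughly 1.5, achieved by exaggerating the effect of X.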

If we superimpose this on top of the tails-come-apart image, we see that this is something like a generalization:

 

Wrapping It All Up

The punchline is that these two different regression lines are exactly what yield simultaneous underconfidence and overconfidence. The studies on conservatism were taking the objective probability as the independent variable and graphing people's subjective probabilities as a function of it. The natural next step is to take the average subjective probability for each fixed objective probability. This will tend to show underconfidence due to the statistics of the situation.

The studies on calibration, on the other hand, took the subjective probabilities as the independent variable, graphing the average frequency of being correct as a function of that. This will tend to show overconfidence, even on the same data that shows underconfidence under the other analysis.
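A small simulation shows both effects in one dataset (this is my own Python sketch, separate from the Julia simulation linked below; the generative model, in which objective and subjective probabilities are readings of the same underlying evidence with and without noise, is an assumption chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

evidence = 1.5 * rng.standard_normal(n)                  # underlying log-odds of each claim
objective = sigmoid(evidence)                            # objective probability of the claim
subjective = sigmoid(evidence + rng.standard_normal(n))  # same evidence, read with noise
true_claim = rng.random(n) < objective                   # whether the claim turns out true

bins = np.linspace(0, 1, 11)

print("conservatism-style analysis: fix objective probability, average the subjective one")
for lo, hi in zip(bins[:-1], bins[1:]):
    m = (objective >= lo) & (objective < hi)
    print(f"  objective ~{objective[m].mean():.2f} -> mean subjective {subjective[m].mean():.2f}")

print("calibration-style analysis: fix subjective probability, measure frequency correct")
for lo, hi in zip(bins[:-1], bins[1:]):
    m = (subjective >= lo) & (subjective < hi)
    print(f"  subjective ~{subjective[m].mean():.2f} -> frequency true {true_claim[m].mean():.2f}")
```

Binned by objective probability, the average subjective probability is less extreme (apparent underconfidence); binned by subjective probability, the frequency with which claims turn out true is less extreme than the stated probability (apparent overconfidence), from exactly the same simulated judgments.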

 

From an individual's standpoint, the overconfidence is the real phenomenon. Errors in judgement tend to make us overconfident rather than underconfident: errors make the tails come apart, so that if we select our most confident beliefs, it's a good bet that they have only mediocre support from the evidence, even if, generally speaking, our level of belief is highly correlated with how well-supported a claim is. Because the tails come apart gradually, we can expect that the higher our confidence, the larger the gap between that confidence and the level of factual support for that belief.

This is not a fixed fact of human cognition pre-ordained by statistics, however. It's merely what happens due to random error. Not all studies show systematic overconfidence, and in a given study, not all subjects will display overconfidence. Random errors in judgement will tend to create overconfidence as a result of the statistical phenomena described above, but systematic correction is still an option.

 


I've also written a simple simulation of this. Julia code is here. If you don't have Julia installed or don't want to install it, you can run the code online at JuliaBox.

Meetup : West LA: Wait a minute... just what is bias?

1 abramdemski 29 May 2015 10:18PM

Discussion article for the meetup : West LA: Wait a minute... just what is bias?

WHEN: 03 June 2015 07:00:00PM (-0700)

WHERE: 11066 Santa Monica Blvd, LA, CA

How to Find Us: Go into this Del Taco. We will be in the back room if possible.

Parking is free in the lot out front or on the street nearby.

Discussion: The definition of bias in statistics is significantly different from the definition used on LessWrong. Why is this? What are the differences? It turns out that the scientific literature on biases uses multiple notions of bias. This muddies the research. Applying different statistical tools, psychologists can use the same dataset to "prove" that people are both overconfident and underconfident. This statistical phenomenon is closely related to why the tails come apart.

Recommended Reading:

No prior exposure to Less Wrong is required; this will be generally accessible.


Meetup : West LA: Lightning Talks

1 abramdemski 23 May 2015 11:36PM

Discussion article for the meetup : West LA: Lightning Talks

WHEN: 27 May 2015 07:00:00PM (-0700)

WHERE: 11066 Santa Monica Blvd, LA, CA

How to Find Us: Go into this Del Taco. We will be in the back room if possible.

Parking is free in the lot out front or on the street nearby.

Discussion: Lightning talks. Everyone is encouraged to give a 5-10 minute talk!

No prior exposure to Less Wrong is required; this will be generally accessible.


Meetup : Los Angeles: Growth Mindset

1 abramdemski 14 May 2015 09:40PM

Discussion article for the meetup : Los Angeles: Growth Mindset

WHEN: 20 May 2015 07:00:00PM (-0700)

WHERE: 11066 Santa Monica blvd, LA, CA

How to Find Us: Go into this Del Taco. We will be in the back room if possible.

Parking is free in the lot out front or on the street nearby.

Discussion: Growth mindset. What is it? Is it good? Bad? Ugly? Will we all become growth-mindset zombies, constantly excusing our shortcomings with "yet... growth mindset"? Or shall we transcend the pedestrian constraints of life-so-far? Yvain's recent growth-mindset commentary will set the tone. We officially begin at 7pm, but people are often quite early.

Recommended Reading:

The first article in Yvain's four-part sequence will be the official reading; the others are listed here for convenience:

No prior exposure to Less Wrong is required; this will be generally accessible.


Meetup : West LA: Improv & Rationality

1 abramdemski 16 April 2015 08:47AM

Discussion article for the meetup : West LA: Improv & Rationality

WHEN: 22 April 2015 07:00:00PM (-0700)

WHERE: 11066 Santa Monica Blvd, Los Angeles, CA

How to Find Us: Go into this Del Taco. We will be in the back room if possible.

Parking is free in the lot out front or on the street nearby.

Discussion: Do improvisational acting and rationality go together? I think this link should be explored. Improvisation involves a special kind of relationship with System 1, which most people need to train in order to pull off well. As such, learning improv skills may improve fast reactions, particularly in social settings. Improv games are also good group bonding activities. We will play some improv games geared toward rationality skills, and discuss possible relationships between improv and rationality.

Recommended Reading:

No prior exposure to Less Wrong is required; this will be generally accessible.


Meetup : West LA: The Substitution Principle

1 abramdemski 27 February 2015 08:43AM

Discussion article for the meetup : West LA: The Substitution Principle

WHEN: 04 March 2015 07:00:00PM (-0800)

WHERE: 11066 Santa Monica Blvd, Los Angeles, CA 90025

How to Find Us: Go into this Del Taco. We will be in the back room if possible.

Parking is free in the lot out front or on the street nearby.

Discussion: When the brain is faced with a hard question, it automatically substitutes an easier one and uses the answer to that as its estimate. This is good so far as it goes, but the brain can do this without being aware of the substitution, causing overconfidence or other problems. This can even combine with consistency effects to alter your opinions long-term. Noticing this can help us revise quickly-formed impressions. But wait -- is that really true? Did I only believe it just now because I substituted some easier question, like whether it sounded cool or whether the person talking about it is high-status? Probably. We'd better try to sort it out more carefully at the meetup.

Recommended Reading:

No prior exposure to Less Wrong is required; this will be generally accessible.


Meetup : West LA: Linguistic Relativity

1 abramdemski 14 November 2014 09:46AM

Discussion article for the meetup : West LA: Linguistic Relativity

WHEN: 19 November 2014 07:00:00PM (-0800)

WHERE: 11066 Santa Monica blvd, LA, CA, 90025

How to Find Us: Go into this Del Taco. We will be in the back room if possible.

Parking is free in the lot out front or on the street nearby.

Discussion: It seems generally useful to continue to grow the rationalist lexicon, as long as we keep in mind subtle distinctions. But is the Less Wrong style of crystallizing concepts with catchy names always even a good idea? To what extent does our use of these terms influence our thinking in ways that may be counter-productive to our goals as aspiring rationalists?

Recommended Reading:

No prior exposure to Less Wrong is required; this will be generally accessible.


A List of Nuances

31 abramdemski 10 November 2014 05:02AM

Abram Demski and George Koleszarik


Much of rationality is pattern-matching. An article on LessWrong might point out a thing to look for. Noticing this thing changes your reasoning in some way. This essay is a list of things to look for. These things are all associated, but the reader should take care not to lump them together. Each dichotomy is distinct, and although the brain will tend to abstract them into some sort of yin/yang correlated mush, in reality they have a more complicated structure; some things may be similar, but if possible, try to focus on the complex interrelationships.

 

  1. Map vs. Territory

    1. Eliezer’s sequences use this as a jumping-off point for discussion of rationality.

    2. Many thinking mistakes are map vs. territory confusions.

      1. A map and territory mistake is a mix-up of seeming vs being.

      2. Humans need frequent reminders that we are not omniscient.

  2. Cached Thoughts vs. Thinking

    1. This document is a list of cached thoughts.

  3. Clusters vs. Properties

    1. These words could be used in different ways, but the distinction I want to point at is that of labels we put on things vs actual differences in things.

    2. The mind projection fallacy is the fallacy of thinking a mental category (a “cluster”) is an actual property things have.

      1. If we see something as good for one reason, we are likely to attribute other good properties to it, as if it had inherent goodness. This is called the halo effect. (If we see something as bad and infer other bad properties as a result, it is referred to as the reverse-halo effect.)

    3. Categories are inference applicability heuristics; ruling X an instance of Y without expecting novel inferences is cargo cult classification.

  4. Syntax vs. Semantics

    1. The syntax is the physical instantiation of the map. The semantics is the way we are meant to read the map; that is, the intended relationship to the territory.

  5. Semantics vs. Pragmatics

    1. The semantics is the literal contents of a message, whereas the pragmatics is the intended result of conveying the message.

      1. An example of a message with no semantics and only pragmatics is a command, such as “Stop!”.

      2. Almost no messages lack pragmatics, and for good reason. However, if you seek truth in a discussion, it is important to foster a willingness to say things with less pragmatic baggage.

      3. Usually when we say things, we do so with some “point” which is beyond the semantics of our statement. The point is usually to build up or knock down some larger item of discussion. This is not inherently a bad thing, but it has a failure mode where arguments are battles and statements are weapons, and the cleverer arguer wins.

    2. The meaning of a thing is the way you should be influenced by it.

  6. Object-level vs. Meta-level

    1. The difference between making a map and writing a book about map-making.

    2. A good meta-level theory helps get things right at the object level, but it is usually impossible to get things right at the meta level before you’ve made significant progress at the object level.

  7. Seeming vs. Being

    1. We can only deal with how things seem, not how they are. Yet, we must strive to deal with things as they are, not as they seem.

      1. This is yet another reminder that we are not omniscient.

    2. If we optimize too hard for things which seem good rather than things which are good, we will get things which seem very good but which may only be somewhat good, or even bad.

    3. The dangerous cases are the cases where you do not notice there is a distinction.

      1. This is why humans need constant reminders that we are not omniscient.

    4. We must take care to notice the difference between how things seem to seem, and how they actually seem.

  8. Signal vs. Noise

    1. Not all information is equal. It is often the case that we desire certain sorts of information and desire to ignore other sorts.

    2. In a technical setting, this has to do with the error rate present in a communication channel; imperfections in the channel will corrupt some bits, making a need for redundancy in the message being sent.

    3. In a social setting, this is often used to refer to the amount of good information vs irrelevant information in a discussion. For example, letting a mediocre writer add material to a group blog might increase the absolute amount of good information, yet worsen the signal-to-noise ratio.

    4. Attention is a scarce resource; yes, everyone has something to teach you, but some people are much more efficient sources of wisdom than others.

  9. Selection Effects

    1. Filtered evidence.

      1. In many situations, if we can present evidence to a Bayesian agent without the agent knowing that we are being selective, we can convince the agent of anything we like. For example, if I want to convince you that smoking causes obesity, I could find many people who became obese after they started smoking.

      2. The solution to this is for the Bayesian agent to model where the information is coming from. If you know I am selecting people based on this criterion, then you will not take it as evidence of anything, because the evidence has been cherry-picked.

      3. Most of the information you receive is intensely filtered. Nothing comes to your attention with a good conscience.

    2. The silent evidence problem.

      1. Selection bias need not be the result of purposeful interference as in cherry-picking. Often, an unrelated process may hide some of the evidence needed. For example, we hear far more about successful people than unsuccessful ones. It is tempting to look at successful people and attempt to draw conclusions about what it takes to be successful. This approach suffers from the silent evidence problem: we also need to look at the unsuccessful people and examine what is different about the two groups.

    3. Observer selection effects.

  10. What You Mean vs. What You Think You Mean

    1. Very often, people will say something and then that thing will be refuted. The common response to this is to claim you meant something slightly different, which is more easily defended.

      1. We often do this without noticing, making it dangerous for thinking. It is an automatic response generated by our brains, not a conscious decision to defend ourselves from being discredited. You do this far more often than you notice. The brain fills in a false memory of what you meant without asking for permission.

  11. What You Mean vs. What the Others Think You Mean

    1. The illusion of transparency.

    2. The double illusion of transparency.

    3. Wiio’s Laws

  12. What You Optimize vs. What You Think You Optimize

    1. Evolution optimizes for reproduction but in doing so creates animals with a variety of goals which are correlated with reproduction.

    2. Extrinsic motivation is weaker than intrinsic motivation.

    3. The people who value practice for its own sake do better than the people who only value being good at what they’re practicing.

    4. “Consequentialism is true, but virtue ethics is what works.”

  13. Stated Preferences vs. Revealed Preferences

    1. Revealed preferences are the preferences we can infer from your actions. These are usually different from your stated preferences.

      1. X is not about Y:

        1. Food isn’t about nutrition.

        2. Clothes aren’t about comfort.

        3. Bedrooms aren’t about sleep.

        4. Marriage isn’t about love.

        5. Talk isn’t about information.

        6. Laughter isn’t about humour.

        7. Charity isn’t about helping.

        8. Church isn’t about God.

        9. Art isn’t about insight.

        10. Medicine isn’t about health.

        11. Consulting isn’t about advice.

        12. School isn’t about learning.

        13. Research isn’t about progress.

        14. Politics isn’t about policy.

        15. Going meta isn’t about the object level.

        16. Language isn’t about communication.

        17. The rationality movement isn’t about epistemology.

      2. Everything is actually about signalling.

    2. Humans Are Not Automatically Strategic

      1. Never attribute to malice that which can be adequately explained by stupidity. The difference between stated preferences and revealed preferences does not indicate dishonest intent. We should expect the two to differ in the absence of a mechanism to align them.

      2. Hidden Motives vs. Innocent Failure

    3. People, ideas, and organizations respond to incentives.

      1. Evolution selects humans who have reproductively selfish behavioral tendencies, but prosocial and idealistic stated preferences.

        1. Near vs. Far

      2. Social forces select ideas for virality and comprehensibility as opposed to truth or even usefulness.

        1. Motte-and-bailey fallacy

      3. Organizations are by default bad at being strategic about their own survival, but the ones that survive are the ones you see.

  14. What You Achieve vs. What You Think You Achieve

    1. Most of the consequences of our actions are totally unknown to us.

    2. It is impossible to optimize without proper feedback.

  15. What You Optimize vs. What You Actually Achieve

    1. Consequentialism is more about expected consequences than actual consequences.

  16. What You Seem Like vs. What You Are

    1. You can try to imagine yourself from the outside, but no one has the full picture.

  17. What Other People Seem Like vs. What They Are

    1. When people assume that they understand others, they are wrong.

  18. What People Look Like vs. What They Think They Look Like

    1. People underestimate the gap between stated preferences and revealed preferences.

  19. What Your Brain Does vs. What You Think It Does

    1. You are running on corrupted hardware.

      1. The brain’s machinations are fundamentally social; it automatically does things like signal, save face, etc., which distort the truth.

    2. The reverse of stupidity is not intelligence.

      1. Knowing that you are running on corrupted hardware should cause skepticism about the outputs of your thought-processes. Yet, too much skepticism will cause you to stumble, particularly when fast thinking is needed.

        1. Producing a correct result plus justification is harder than producing only the correct result.

        2. Justifications are important, but the correct result is more important.

        3. Much of our apparent self-reflection is confabulation, generating plausible explanations after the brain spits out an answer.

        4. Example: doing quick mental math. If you are good at this, attempting to explicitly justify every step as you go would likely slow you down.

        5. Example: impressions formed over a long period of time. Wrong or right, it is unlikely that you can explicitly give all your reasons for the impression. Requiring your own beliefs to be justifiable would preempt impressions that require lots of experience and/or many non-obvious chains of subconscious inference.

        6. Impressions are not beliefs and they are always useful data.

  20. Clever Argument vs. Truth-seeking; The Bottom Line

    1. People believe what they want to believe.

      1. Believing X for some reason unrelated to X being true is referred to as motivated cognition.

      2. Giving a smart person more information and more methods of argument may actually make their beliefs less accurate, because you are giving them more tools to construct clever arguments for what they want to believe.

    2. Your actual reason for believing X determines how well your belief correlates with the truth.

      1. If you believe X because you want to, any arguments you make for X, no matter how strong they sound, are devoid of informational content about X and should properly be ignored by a truth-seeker.

    3. If you believe true things when doing so improves your life, that is no credit to you at all. Everyone does that.

  21. Lumpers vs. Splitters

    1. A lumper is a thinker who attempts to fit things into overarching patterns. A splitter is a thinker who makes as many distinctions as possible, recognizing the importance of being specific and getting the details right.

    2. Specifically, some people want big Wikipedia and TVTropes articles that discuss many things, and others want smaller articles that discuss fewer things.

    3. This list of nuances is a lumper attempting to think more like a splitter.

  22. Fox vs. Hedgehog

    1. “A fox knows many things, but a hedgehog knows One Big Thing.” Closely related to a splitter, a fox is a thinker whose strength is in a broad array of knowledge. A hedgehog is a thinker who, in contrast, has one big idea and applies it everywhere.

    2. The fox mindset is better for making accurate judgements, according to Tetlock.

  23. Traps vs. Gardens

    1. Well-kept gardens die by pacifism.

      1. Conversations tend to slide toward contentious and useless topics.

      2. Societies tend to decay.

      3. Systems in general work poorly or not at all.

      4. Thermodynamic equilibrium is entropic.

      5. Without proper institutions being already in place, it takes large amounts of constant effort and vigilance to stay out of traps.

    2. From the outside of a broken Molochian system, it is easy to see how to fix it. But it cannot be fixed from the inside.

Cross-posted to In Search Of Logic
