
Memetic Tribalism

43 [deleted] 14 February 2013 03:03AM

Related: Politics is the Mind-Killer, Other-Optimizing

When someone says something stupid, I get an urge to correct them. Based on the stories I hear from others, I'm not the only one.

For example, some of my friends are into this rationality thing, and they've learned about all these biases and correct ways to get things done. Naturally, they get irritated with people who haven't learned this stuff. They complain about how their family members or coworkers aren't rational, and they ask what is the best way to correct them.

I could get into the details of the optimal set of arguments to turn someone into a rationalist, or I could go a bit meta and ask: "Why would you want to do that?"

Why should you spend your time correcting someone else's reasoning?

One reason that comes up is that it would be valuable, for whatever reason, to change their reasoning. OK, so when is that actually possible?

  1. You actually know better than them.

  2. You know how to patch their reasoning.

  3. They will be receptive to said patching.

  4. They will actually change their behavior if they accept the patch.

It seems like it should be rather rare for those conditions to all be true, or even to be likely enough for the expected gain to be worth the cost, and yet I feel the urge quite often. And I'm not thinking it through and deciding, I'm just feeling an urge; humans are adaptation executors, and this one seems like an adaptation. For some reason "correcting" people's reasoning was important enough in the ancestral environment to be special-cased in motivation hardware.

I could try to spin an ev-psych just-so story about tribal status, intellectual dominance hierarchies, ingroup-outgroup signaling, and whatnot, but I'm not an evolutionary psychologist, so I wouldn't actually know what I was doing, and the details don't matter anyway. What matters is that this urge seems to be hardware, and it probably has nothing to do with actual truth or your strategic concerns.

It seems to happen to everyone who has ideas. Social justice types get frustrated with people who seem unable to acknowledge their own privilege. The epistemological flamewar between atheists and theists rages continually across the internet. Tech-savvy folk get frustrated with others' total inability to explore and use Google. Some aspiring rationalists get annoyed with people who refuse to decompartmentalize or claim that something is in a separate magisterium.

Some of those border on being just classic blue vs green thinking, but from the outside, the rationality example isn't all that different. They all seem to be motivated mostly by "This person fails to display the complex habits of thought that I think are fashionable; I should {make fun | correct them | call them out}."

I'm now quite skeptical that my urge to correct reflects an actual opportunity to win by improving someone's thinking, given that I'd feel it whether or not I could actually help, and that it seems to be caused by something else.

The value of attempting a rationality-intervention has gone back down towards baseline, but it's not obvious that the baseline value of rationality interventions is all that low. Maybe it's a good idea, even if there is a possible bias supporting it. We can't win just by reversing our biases; reversed stupidity is not intelligence.

The best reason I can think of to correct flawed thinking is if your ability to accomplish your goals directly depends on their rationality. Maybe they are your business partner, or your spouse. Someone specific and close who you can cooperate with a lot. If this is the case, it's near the same level of urgency as correcting your own.

Another good reason (to discuss the subject at least) is that discussing your ideas with smart people is a good way to make your ideas better. I often get my dad to poke holes in my current craziness, because he is smarter and wiser than me. If this is your angle, keep in mind that if you expect someone else to correct you, it's probably not best to go in making bold claims and implicitly claiming intellectual dominance.

An OK reason is that creating more rationalists is valuable in general. This one is less good than it first appears. Do you really think your comparative advantage right now is in converting this person to your way of thinking? Is that really worth the risk of social friction and expenditure of time and mental energy? Is this the best method you can think of for creating more rationalists?

I think it is valuable to raise the sanity waterline when you can, but using methods of mass instruction like writing blog posts, administering a meetup, or launching a whole rationality movement is a lot more effective than arguing with your mom. Those options aren't for everybody of course, but if you're into waterline-manipulation, you should at least be considering strategies like them. At least consider picking a better time.

Another reason that gets brought up is that turning people around you into rationalists is instrumental in a selfish way, because it makes life easier for you. This one is suspect to me, even without the incentive to rationalize. Did you also seriously consider sabotaging people's rationality to take advantage of them? Surely that's nearly as plausible a priori. For what specific reason did your search process rank cooperation over predation?

I'm sure there are plenty of good reasons to prefer cooperation, but of course no search process was ever run. All of these reasons that come to mind when I think of why I might want to fix someone's reasoning are just post-hoc rationalizations of an automatic behavior. The true chain of cause-and-effect is observe->feel->act; no planning or thinking involved, except where it is necessary for the act. And that feeling isn't specific to rationality, it affects all mental habits, even stupid ones.

Rationality isn't just a new memetic orthodoxy for the cool kids, it's about actually winning. Every improvement requires a change. Rationalizing strategic reasons for instinctual behavior isn't change, it's spending your resources answering questions with zero value of information. Rationality isn't about what other people are doing wrong; it's about what you are doing wrong.

I used to call this practice of modeling other people's thoughts to enforce orthodoxy on them "incorrect use of empathy", but in terms of ev-psych, it may be exactly the correct use of empathy. We can call it Memetic Tribalism instead.

(I've ignored the other reason to correct people's reasoning, which is that it's fun and status-increasing. When I reflect on my reasons for writing posts like this, it turns out I do it largely for the fun and internet status points, but I try to at least be aware of that.)

A brief history of ethically concerned scientists

68 Kaj_Sotala 09 February 2013 05:50AM

For the first time in history, it has become possible for a limited group of a few thousand people to threaten the absolute destruction of millions.

-- Norbert Wiener (1956), Moral Reflections of a Mathematician.


Today, the general attitude towards scientific discovery is that scientists are not themselves responsible for how their work is used. For someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.

But this is not necessarily the attitude that we should encourage. As technology becomes more powerful, it also becomes more dangerous. Throughout history, many scientists and inventors have recognized this, and taken different kinds of action to help ensure that their work will have beneficial consequences. Here are some of them.

This post is not arguing that any specific approach for taking responsibility for one's actions is the correct one. Some researchers hid their work, others refocused on other fields, still others began active campaigns to change the way their work was being used. It is up to the reader to decide which of these approaches were successful and worth emulating, and which ones were not.

Pre-industrial inventors

… I do not publish nor divulge [methods of building submarines] by reason of the evil nature of men who would use them as means of destruction at the bottom of the sea, by sending ships to the bottom, and sinking them together with the men in them.

-- Leonardo da Vinci


People did not always think that the benefits of freely disseminating knowledge outweighed the harms. O.T. Benfey, writing in a 1956 issue of the Bulletin of the Atomic Scientists, cites F.S. Taylor’s book on early alchemists:

Alchemy was certainly intended to be useful .... But [the alchemist] never proposes the public use of such things, the disclosing of his knowledge for the benefit of man. …. Any disclosure of the alchemical secret was felt to be profoundly wrong, and likely to bring immediate punishment from on high. The reason generally given for such secrecy was the probable abuse by wicked men of the power that the alchemical secret would give …. The alchemists, indeed, felt a strong moral responsibility that is not always acknowledged by the scientists of today.


With the Renaissance, science began to be viewed as public property, but many scientists remained cautious about the way in which their work might be used. Although he held the office of military engineer, Leonardo da Vinci (1452-1519) drew a distinction between offensive and defensive warfare, and emphasized the role of good defenses in protecting people’s liberty from tyrants. He described war as ‘bestialissima pazzia’ (most bestial madness), and wrote that ‘it is an infinitely atrocious thing to take away the life of a man’. One of the clearest examples of his reluctance to unleash dangerous inventions was his refusal to publish the details of his plans for submarines.

Later Renaissance thinkers continued to be concerned with the potential uses of their discoveries. John Napier (1550-1617), the inventor of logarithms, also experimented with a new form of artillery. Upon seeing its destructive power, he decided to keep its details a secret, and even spoke from his deathbed against the creation of new kinds of weapons.

But concealing a single discovery pales in comparison to the example of Robert Boyle (1627-1691). A pioneer of physics and chemistry, perhaps most famous for describing and publishing Boyle's law, he sought to make humanity better off, taking an interest in things such as improved agricultural methods as well as better medicine. In his studies, he also discovered knowledge and made inventions related to a variety of potentially harmful subjects, including poisons, invisible ink, counterfeit money, explosives, and kinetic weaponry. These 'my love of Mankind has oblig'd me to conceal, even from my nearest Friends'.

continue reading »

Playing the student: attitudes to learning as social roles

9 Swimmer963 23 November 2012 02:56AM

This is a post about something I noticed myself doing this year, although I expect I’ve been doing it all along. It’s unlikely to be something that everyone does, so don’t be surprised if you don’t find this applies to you. It's also an exercise in introspection, i.e. likely to be inaccurate. 

Intro

If I add up all the years that I’ve been in school, it amounts to about 75% of my life so far–and at any one time, school has probably been the single activity that I spend the most hours on. I would still guess that 50% or less of my general academic knowledge was actually acquired in a school setting, but school has tests, and grades at the end of the year, and so has provided most of the positive/negative reinforcement related to learning. The ‘attitudes to learning’ that I’m talking about apply in a school setting, not when I’m learning stuff for fun.


Role #1: Overachiever

Up until seventh grade, I didn’t really socialize at school–but once I started talking to people, it felt like I needed a persona, so that I could just act ‘in character’ instead of having to think of things to say from scratch. Being a stereotypical overachiever provided me with easy material for small talk–I could talk about schoolwork to other people who were also overachievers.

Years later, after acquiring actual social skills in the less stereotyped environments of part-time work and university, I play the overachiever more as a way of reducing my anxiety in class. (School was easy for me up until my second year of nursing school, when we started having to do scary things like clinical placements and practical exams, instead of nice safe things like written exams.) If I can talk myself into always being curious and finding everything 'exciting and interesting and cool, I want to do that!!!', I can't find everything scary–or, at the very least, to other people it looks like I'm not scared.

 

Role #2: Too Cool for School

This isn’t one I’ve played too much, aside from my tendency to put studying for exams as maybe my fourth priority–after work, exercise, and sleep–and still having an A average. (I will still skip class to work a shift at the ER any day, but that doesn’t count–working there is almost more educational than class, in my mind.) As one of my LW Ottawa friends pointed out, there’s a sort of counter-signalling involved in being a ‘lazy’ student–if you can still pull off good grades without doing any work, you must be smart, so people notice this and respect it.

My brother is the prime example of this. He spent grades 9 through 11 alternately sleeping and playing on his iPhone in class, and maintained an average well over 80%. In grade 12 he started paying attention in class and occasionally doing homework, and graduated with, I believe, an average over 95%. He had a reputation throughout the whole school–as someone who was very smart, but also cool.

 
Role #3: Just Don’t Fail Me!

Weirdly enough, it wasn’t at school that I originally learned this role. As a teenager, I did competitive swimming. The combination of not having outstanding talent for athletics, plus the anxiety that came from my own performance depending on how fast the other swimmers were, made this about 100 times more terrifying than school. At some point I developed a weird sort of underconfidence, the opposite of using ‘Overachiever’ to deal with anxiety. My mind has now created, and made automatic, the following subroutine: “when an adult takes you aside to talk to you about anything related to ‘living up to your potential’, start crying.” I’m not sure what the original logic behind this was: get the adult to stop and pay attention to me? Get them to take me more seriously? Get them to take me less seriously? Or just the fact that I couldn’t stomach the fact of being ordinarily below average at something–I had to be in some way differently below average. Who knows if there was much logic behind it at all?  

Having this learned role comes back to bite me now, sometimes–the subroutine gets triggered in any situation that feels too much like my swim coach’s one-on-one pre-competition pep talks. Taekwondo triggers it once in a while. Weirdly enough, being evaluated in clinicals triggers it too–this didn’t originally make much sense, since it’s not competitive in the sense of ‘she wins, I lose.’ I think the associative chain there is through lifeguarding courses–the hands-on evaluation aspect used to be fairly terrifying for my younger self, and my monkey brain puts clinicals and lab evaluations into that category, as opposed to the nice safe category of written exams, where I can safely be Too Cool for School and still get good grades.  

The inconvenience of thinking about school this way really jumped out at me this fall. I started my semester of clinicals with a prof who a) was spectacularly non-intimidating compared to some others I've had, and b) liked me from the very start, basically because I raised my hand a lot and answered questions intelligently during our more classroom-y initial orientation. I was all set up for a semester of playing 'Overachiever', until, quite near the beginning of the semester, I was suddenly expected to do something that I found scary, and I was tired and scared of looking confident but being wrong, and I fell back on 'Just Don't Fail Me!' My prof was, understandably, shocked and confused as to why I was suddenly reacting to her as 'the scary adult who has the power to pass or fail me and will definitely fail me unless I'm absolutely perfect, so I had better grovel.' I think she actually felt guilty about whatever she had done to intimidate me–which was nothing.

Since then I’ve been doing fine, progressing at the same rate as all the other students (maybe it says something about me that this isn’t very satisfying, and even kind of feels like failure in itself...I would like to be progressing faster). That is, until I’m alone with my prof and she tries to give me a pep talk about how I’m obviously very smart and doing fine, so I just need to improve my confidence. Then I start crying. At this point, I’m pretty sure she thinks I should be on anti-depressants–which is problematic in itself, but could be more problematic if she was the kind of prof who might fail me in my clinical for a lack of confidence. There’s no objective reason why I can’t hop back into Overachiever mode, since I managed both my clinicals last spring entirely in that mode. But part of my brain protests: ‘she’s seen you being insecure! She wouldn’t believe you as an overachiever, it would be too out of character!’ It starts to make sense once I stop seeing this behaviour as 'my learning style' and recognize it as a social role that I, at some point, probably subconsciously, decided I ought to play.

 

Conclusion

The main problem seems to be that my original mental models for social interaction–with adults, mostly–are overly simplistic and don't cut reality at the joints. That's not a huge problem in itself–I have better models now and most people I meet now say I have good communication skills, although I sometimes still come across as 'odd'. The problem is that every once in a while, a situation happens, pattern recognition jumps into play, and whoa, I'm playing 'Just Don't Fail Me'. (It's happened with the other two roles too, but they're less problematic.) Then I can't get out of that role easily, because my social monkey brain is telling me it would be out of character and the other person would think it was weird. This is despite the fact that I no longer consciously care if I come across as weird, as long as people think I'm competent and trustworthy and nice, etc.

Just noticing this has helped a little–I catch my monkey brain and remind it ‘hey, this situation looks similar to Situation X that you created a stereotyped response for, but it’s not Situation X, so how about we just behave like a human being as usual’. Reminding myself that the world doesn’t break down into ‘adults’ and ‘children’–or, if it did once, I’m now on the other side of the divide–also helps. Failing that, I can consciously try to make sure I get into the 'right’ role–Overachiever or Too Cool For School, depending on the situation–and make that my default. 

Has anyone else noticed themselves doing something similar? I’m wondering if there are other roles that I play, maybe more subtly, at work or with friends. 

 

Your existence is informative

2 KatjaGrace 30 June 2012 02:46PM

Cross Posted from Overcoming Bias

Suppose you know that there are a certain number of planets, N. You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on any given arbitrary planet. If Q is false, you put a low probability on this. You have a prior probability for Q. So far you have not taken into account your observation that the planet you are on has life. How do you update on this evidence, to get a posterior probability for Q? Since your model just has a number of planets in it, with none labeled as 'this planet', you can't update directly on 'there is life on this planet', by excluding worlds where 'this planet' doesn't have life. And you can't necessarily treat 'this' as an arbitrary planet, since you wouldn't have seen it if it didn't have life.

I have an ongoing disagreement with an associate who suggests that you should take 'this planet has life' into account by conditioning on 'there exists a planet with life'. That is,

P(Q|there is life on this planet) = P(Q|there exists a planet with life).

Here I shall explain my disagreement.

Nick Bostrom argues persuasively that much science would be impossible if we treated 'I observe X' as 'someone observes X'. This is basically because in a big world of scientists making measurements, at some point somebody will make almost every possible mistaken measurement. So if all you know when you measure the temperature of a solution to be 15 degrees is that you are not in a world where nobody ever measures its temperature to be 15 degrees, this doesn't tell you much about the temperature.
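To see the force of this, here is a toy calculation (my own illustrative numbers, not Bostrom's): suppose the true temperature is either 15 or 25 degrees with even prior odds, and each of n scientists measures correctly 99% of the time, otherwise reporting a uniformly random value out of 100 possibilities. Then 'I measured 15' is near-conclusive evidence that the temperature is 15, while 'someone among 100,000 scientists measured 15' is almost no evidence at all:

```python
# Toy version of the Bostrom-style argument above (numbers are made up).
prior_15 = 0.5          # prior probability that the true temperature is 15
p_correct = 0.99        # probability a scientist measures correctly
n_values = 100          # number of possible reported values

# Probability that a single scientist reports "15 degrees", by true temperature.
p_report_15 = {
    15: p_correct + (1 - p_correct) / n_values,
    25: (1 - p_correct) / n_values,
}

def posterior_15(likelihood_if_15, likelihood_if_25):
    """P(true temperature is 15 | evidence), via Bayes' theorem."""
    num = prior_15 * likelihood_if_15
    return num / (num + (1 - prior_15) * likelihood_if_25)

# Evidence A: *I* measured 15 degrees.
i_measured = posterior_15(p_report_15[15], p_report_15[25])

# Evidence B: *someone* among n scientists measured 15 degrees.
n = 100_000
someone_measured = posterior_15(
    1 - (1 - p_report_15[15]) ** n,
    1 - (1 - p_report_15[25]) ** n,
)

print(f"P(temp is 15 | I measured 15)       = {i_measured:.4f}")        # ~0.9999
print(f"P(temp is 15 | someone measured 15) = {someone_measured:.4f}")  # ~0.5000
```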

You can add other apparently irrelevant observations you make at the same time - e.g. that the table is blue chipboard - in order to make your total observations less likely to arise even once in a given world (at its limit, this is the suggestion of FNC, full non-indexical conditioning). However, it seems implausible that taking a measurement while also seeing a detailed but irrelevant picture should license different inferences than taking the same measurement with limited sensory input. Also, the same problem re-emerges if the universe is supposed to be large enough. Given that the universe is thought to be very, very large, this is a problem. Not to mention, it seems implausible that the size of the universe should greatly affect probabilistic judgements made about entities which are close to independent of most of the universe.

So I think Bostrom's case is good. However I'm not completely comfortable arguing from the acceptability of something that we do (science) back to the truth of the principles that justify it. So I'd like to make another case against taking 'this planet has life' as equivalent evidence to 'there exists a planet with life'.

Evidence is what excludes possibilities. Seeing the sun shining is evidence against rain, because it excludes the possible worlds where the sky is grey, which include most of those where it is raining. Seeing a picture of the sun shining is not much evidence against rain, because it excludes worlds where you don't see such a picture, which are about as likely to be rainy or sunny as those that remain are.

Receiving the evidence 'there exists a planet with life' means excluding all worlds where all planets are lifeless, and not excluding any other worlds. At first glance, this must be different from 'this planet has life'. Take any possible world where some other planet has life, and this planet has no life. 'There exists a planet with life' doesn't exclude that world, while 'this planet has life' does. Therefore they are different evidence.
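To make the difference concrete, here is a small numerical sketch (my own toy example, with made-up numbers) using two planets, a 50% prior on Q, and per-planet life probabilities of 0.9 if Q is true and 0.1 if Q is false. It deliberately treats 'this planet' as a fixed, labelled planet, setting aside the observer-selection worry raised at the start, and simply checks that the two pieces of evidence yield different posteriors:

```python
# Exhaustive Bayesian update over a two-planet toy model (illustrative numbers).
from itertools import product

N = 2                                # number of planets
prior_Q = 0.5                        # prior probability that Q is true
p_life = {True: 0.9, False: 0.1}     # P(life on any given planet | Q)

def joint(q, life_pattern):
    """P(Q = q and exactly this pattern of life across the N planets)."""
    p = prior_Q if q else 1 - prior_Q
    for has_life in life_pattern:
        p *= p_life[q] if has_life else 1 - p_life[q]
    return p

def posterior_Q(event):
    """P(Q | event), where event is a predicate on life patterns."""
    patterns = list(product([True, False], repeat=N))
    num = sum(joint(True, pat) for pat in patterns if event(pat))
    den = sum(joint(q, pat) for q in (True, False) for pat in patterns if event(pat))
    return num / den

this_planet = posterior_Q(lambda pat: pat[0])     # planet 1 (a fixed label) has life
some_planet = posterior_Q(lambda pat: any(pat))   # at least one planet has life

print(f"P(Q | this planet has life)            = {this_planet:.3f}")  # 0.900
print(f"P(Q | there exists a planet with life) = {some_planet:.3f}")  # ~0.839
```

With these numbers the two posteriors come out to 0.9 and roughly 0.84: conditioning on the indexical statement and conditioning on the existential statement really do lead to different conclusions about Q.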

At this point however, note that the planets in the model have no distinguishing characteristics. How do we even decide which planet is 'this planet' in another possible world? There needs to be some kind of mapping between planets in each world, saying which planet in world A corresponds to which planet in world B, etc. As far as I can tell, any mapping will do, as long as a given planet in one possible world maps to at most one planet in another possible world. This mapping is basically a definition choice.

So suppose we use a mapping where in every possible world where at least one planet has life, 'this planet' corresponds to one of the planets that has life. See the below image.

[Image: squares are possible worlds, each with two planets. Pink planets have life, blue do not. Define 'this planet' as the circled one in each case. Learning that there is life on this planet is equal to learning that there is life on some planet.]

Now learning that there exists a planet with life is the same as learning that this planet has life. Both exclude the far right-hand possible world, and none of the other possible worlds. What's more, since we can change the probability distribution we end up with just by redefining which planets are 'the same planet' across worlds, indexical evidence such as 'this planet has life' must be horseshit.

Actually the last paragraph was false. If in every possible world which contains life, you pick one of the planets with life to be 'this planet', you can no longer know whether you are on 'this planet'. From your observations alone, you could be on the other planet, which only has life when both planets do: the one that is not circled in each of the above worlds. Whichever planet you are on, you know that there exists a planet with life. But because there's some probability of you being on the planet which only rarely has life, you have more information than that. Redefining which planet was which didn't change that.

Perhaps a different definition of 'this planet' would get what my associate wants? The problem with the last one was that it no longer necessarily included the planet we are on. So what if we define 'this planet' to be the one you are on, plus a life-containing planet in all of the other possible worlds that contain at least one life-containing planet? A strange, half-indexical definition, but why not? One thing remains to be specified - which is 'this' planet when you don't exist? Let's say it is chosen randomly.

Now is learning that 'this planet' has life any different from learning that some planet has life? Yes. Now again there are cases where some planet has life, but it's not the one you are on. This is because the definition only picks out planets with life across other possible worlds, not this one. In this one, 'this planet' refers to the one you are on. If you don't exist, this planet may not have life. Even if there are other planets that do. So again, 'this planet has life' gives more information than 'there exists a planet with life'.

You either have to accept that someone else might exist when you do not, or you have to define 'yourself' as something that always exists, in which case you no longer know whether you are 'yourself'. Either way, changing definitions doesn't change the evidence. Observing that you are alive tells you more than learning that 'someone is alive'.

Decision Theories: A Semi-Formal Analysis, Part III

23 orthonormal 14 April 2012 07:34PM

Or: Formalizing Timeless Decision Theory

Previously:

0. Decision Theories: A Less Wrong Primer
1. The Problem with Naive Decision Theory
2. Causal Decision Theory and Substitution

WARNING: The main result of this post, as it's written here, is flawed. I at first thought it was a fatal flaw, but later found a fix. I'm going to try and repair this post, either by including the tricky bits, or by handwaving and pointing you to the actual proofs if you're curious. Carry on!

Summary of Post: Have you ever wanted to know how (and whether) Timeless Decision Theory works? Using the framework from the last two posts, this post shows you explicitly how TDT can be implemented in the context of our tournament, what it does, how it strictly beats CDT on fair problems, and a bit about why this is a Big Deal. But you're seriously going to want to read the previous posts in the sequence before this one.

We've reached the frontier of decision theories, and we're ready at last to write algorithms that achieve mutual cooperation in Prisoner's Dilemma (without risk of being defected on, and without giving up the ability to defect against players who always cooperate)! After two substantial preparatory posts, it feels like it's been a long time, hasn't it?

But look at me, here, talking when there's Science to do...

continue reading »

When is further research needed?

0 RichardKennaway 17 June 2011 03:01PM

Here's a simple theorem in utility theory that I haven't seen anywhere. Maybe it's standard knowledge, or maybe not.

TL;DR: More information is never a bad thing (in expectation).

The theorem proved below says that before you make an observation, you cannot expect it to decrease your utility, but you can sometimes expect it to increase your utility.  I'm ignoring the cost of obtaining the additional data, and any losses consequential on the time it takes. These are real considerations in any practical situation, but they are not the subject of this note.
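For readers who want the gist up front, the result is presumably the standard 'value of information is non-negative' argument. Here is a minimal sketch in my own notation (A is the set of available actions, X the observation, U(a) the utility of taking action a); the post's actual proof may be set up differently:

```latex
% Sketch of the standard argument (my reconstruction, not necessarily the post's own proof).
% Deciding after observing X can never do worse in expectation than deciding now:
\[
  \mathbb{E}_X\!\Bigl[\max_{a \in A} \mathbb{E}\bigl[U(a) \mid X\bigr]\Bigr]
  \;\ge\;
  \max_{a \in A} \mathbb{E}_X\!\Bigl[\mathbb{E}\bigl[U(a) \mid X\bigr]\Bigr]
  \;=\;
  \max_{a \in A} \mathbb{E}\bigl[U(a)\bigr].
\]
% The inequality holds because an expectation of a maximum is at least the maximum of
% the expectations (choosing the action separately for each observed x can only help),
% and the equality is the law of total expectation. The gain is strictly positive
% exactly when different observations would change which action is optimal.
```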

continue reading »