
Fact Posts: How and Why

sarahconstantin 02 December 2016 06:55PM

The most useful thinking skill I've taught myself, which I think should be more widely practiced, is writing what I call "fact posts."  I write a bunch of these on my blog. (I write fact posts about pregnancy and childbirth here.)

To write a fact post, you start with an empirical question, or a general topic.  Something like "How common are hate crimes?" or "Are epidurals really dangerous?" or "What causes manufacturing job loss?"  

It's okay if this is a topic you know very little about. This is an exercise in original seeing and showing your reasoning, not finding the official last word on a topic or doing the best analysis in the world.

Then you open up a Google doc and start taking notes.

You look for quantitative data from conventionally reliable sources.  CDC data for the incidence of diseases and other health risks in the US; WHO data for global health issues; Bureau of Labor Statistics data for US employment; and so on. Published scientific journal articles, especially from reputable journals and large randomized studies.

You explicitly do not look for opinion, even expert opinion. You avoid news, and you're wary of think-tank white papers. You're looking for raw information. You are taking a sola scriptura approach, for better and for worse.

And then you start letting the data show you things. 

You see things that are surprising or odd, and you note that. 

You see facts that seem to be inconsistent with each other, and you look into the data sources and methodology until you clear up the mystery.

You orient towards the random, the odd, the things that are totally unfamiliar to your experience. One of the major exports of Germany is valves?  When was the last time I even thought about valves? Why valves, what do you use valves in?  OK, show me a list of all the different kinds of machine parts, by percent of total exports.

And so, you dig in a little bit, to this part of the world that you hadn't looked at before. You cultivate the ability to spin up a lightweight sort of fannish obsessive curiosity when something seems like it might be a big deal.

And you take casual notes and impressions (though you keep track of all the numbers and their sources in your notes).

You do a little bit of arithmetic to compare things to familiar reference points. How does this source of risk compare to the risk of smoking or going horseback riding? How does the effect size of this drug compare to the effect size of psychotherapy?

You don't really want to do statistics. You might take percents, means, standard deviations, maybe a Cohen's d here and there, but nothing fancy.  You're just trying to figure out what's going on.
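
For concreteness, here's the sort of back-of-the-envelope comparison I mean — a minimal sketch in Python, with made-up numbers rather than anything from a real dataset:

```python
# A minimal sketch of "nothing fancy" statistics: comparing a hypothetical
# drug's effect size to a hypothetical therapy's. All numbers are made up.
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two samples (pooled SD)."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

drug = [12, 15, 11, 14, 13, 16]    # hypothetical outcome scores, drug arm
therapy = [10, 12, 9, 13, 11, 10]  # hypothetical outcome scores, therapy arm
print(round(cohens_d(drug, therapy), 2))  # effect size in standard deviations
```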

It's often a good idea to rank things by raw scale. What is responsible for the bulk of deaths, the bulk of money moved, etc? What is big?  Then pay attention more to things, and ask more questions about things, that are big. (Or disproportionately high-impact.)
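
In code, that ranking step is just a sort — here's a minimal sketch with illustrative (not real) counts:

```python
# Rank things by raw scale to see what is big. Counts are illustrative only.
causes_of_death = {"heart disease": 635_000, "cancer": 598_000,
                   "accidents": 161_000, "stroke": 142_000}
for cause, count in sorted(causes_of_death.items(), key=lambda kv: -kv[1]):
    print(f"{cause:15s} {count:>8,}")
```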

You may find that this process gives you contrarian beliefs, but often you won't; you'll just have a strongly fact-based assessment of why you believe the usual thing.

There's a quality of ordinariness about fact-based beliefs. It's not that they're never surprising -- they often are. But if you do fact-checking frequently enough, you begin to have a sense of the world overall that stays in place, even as you discover new facts, instead of swinging wildly around at every new stimulus.  For example, after doing lots and lots of reading of the biomedical literature, I have sort of a "sense of the world" of biomedical science -- what sorts of things I expect to see, and what sorts of things I don't. My "sense of the world" isn't that the world itself is boring -- I actually believe in a world rich in discoveries and low-hanging fruit -- but the sense itself has stabilized, feels like "yeah, that's how things are" rather than "omg what is even going on."

In areas where I'm less familiar, I feel more like "omg what is even going on", which sometimes motivates me to go accumulate facts.

Once you've accumulated a bunch of facts, and they've "spoken to you" with some conclusions or answers to your question, you write them up on a blog, so that other people can check your reasoning.  If your mind gets changed, or you learn more, you write a follow-up post. You should, on any topic where you continue to learn over time, feel embarrassed by the naivety of your early posts.  This is fine. This is how learning works.

The advantage of fact posts is that they give you the ability to form independent opinions based on evidence. It's a sort of practice of the skill of seeing. They likely aren't the optimal way to get the most accurate beliefs -- listening to the best experts would almost certainly be better -- but you, personally, may not know who the best experts are, or may be overwhelmed by the swirl of controversy. Fact posts give you a relatively low-effort way of coming to informed opinions. They make you into the proverbial 'educated layman.'

Being an 'educated layman' makes you much more fertile in generating ideas, for research, business, fiction, or anything else. Having facts floating around in your head means you'll naturally think of problems to solve, questions to ask, opportunities to fix things in the world, applications for your technical skills.

Ideally, a group of people writing fact posts on related topics could learn from each other, and share how they think. I have the strong intuition that this is valuable. It's a bit more active than a "journal club", and quite a bit more casual than "research".  It's just the activity of learning and showing one's work in public.

[Link] Expert Prediction Of Experiments

9 Yvain 29 November 2016 02:47AM

Tsuyoku Naritai! (I Want To Become Stronger)

111 Eliezer_Yudkowsky 27 March 2007 05:49PM

In Orthodox Judaism there is a saying:  "The previous generation is to the next one as angels are to men; the next generation is to the previous one as donkeys are to men."  This follows from the Orthodox Jewish belief that all Judaic law was given to Moses by God at Mount Sinai.  After all, it's not as if you could do an experiment to gain new halachic knowledge; the only way you can know is if someone tells you (who heard it from someone else, who heard it from God).  Since there is no new source of information, it can only be degraded in transmission from generation to generation.

Thus, modern rabbis are not allowed to overrule ancient rabbis.  Crawly things are ordinarily unkosher, but it is permissible to eat a worm found in an apple—the ancient rabbis believed the worm was spontaneously generated inside the apple, and therefore was part of the apple.  A modern rabbi cannot say, "Yeah, well, the ancient rabbis knew diddly-squat about biology.  Overruled!"  A modern rabbi cannot possibly know a halachic principle the ancient rabbis did not, because how could the ancient rabbis have passed down the answer from Mount Sinai to him?  Knowledge derives from authority, and therefore is only ever lost, not gained, as time passes.

When I was first exposed to the angels-and-donkeys proverb in (religious) elementary school, I was not old enough to be a full-blown atheist, but I still thought to myself:  "Torah loses knowledge in every generation.  Science gains knowledge with every generation.  No matter where they started out, sooner or later science must surpass Torah."

continue reading »

On the importance of Less Wrong, or another single conversational locus

73 AnnaSalamon 27 November 2016 05:13PM

Epistemic status: My actual best bet.  But I used to think differently; and I don't know how to fully explicate the updating I did (I'm not sure what fully formed argument I could give my past self, that would cause her to update), so you should probably be somewhat suspicious of this until explicated.  And/or you should help me explicate it.

It seems to me that:
  1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

  2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle.  This sounds like hubris, but it is at this point at least partially a matter of track record.[1]

  3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively.

continue reading »

A Return to Discussion

25 sarahconstantin 27 November 2016 01:59PM

Epistemic Status: Casual

It’s taken me a long time to fully acknowledge this, but people who “come from the internet” are no longer a minority subculture.  Senators tweet and suburban moms post Minion memes. Which means that talking about trends in how people socialize on the internet is not a frivolous subject; it’s relevant to how people interact, period.

There seems to have been an overall drift towards social networks over blogs and forums in general, and in particular things like:

  • the drift of commentary from personal blogs to “media” aggregators like The Atlantic, Vox, and Breitbart
  • the migration of fandom from LiveJournal to Tumblr
  • Facebook and Twitter as the places where links and discussions go

At the moment I’m not empirically tracking any trends like this, and I’m not confident in what exactly the major trends are — maybe in future I’ll start looking into this more seriously. Right now, I have a sense of things from impression and hearsay.

But one thing I have noticed personally is that people have gotten intimidated by more formal and public kinds of online conversation.  I know quite a few people who used to keep a “real blog” and have become afraid to touch it, preferring instead to chat on social media.  It’s a weird kind of perfectionism — nobody ever imagined that blogs were meant to be masterpieces.  But I do see people fleeing towards more ephemeral, more stream-of-consciousness types of communication, or communication that involves no words at all (reblogging, image-sharing, etc.)  There seems to be a fear of becoming too visible as a distinctive writing voice.

For one rather public and hilarious example, witness Scott Alexander’s flight from LessWrong to LiveJournal to a personal blog to Twitter and Tumblr, in hopes that somewhere he can find a place isolated enough that nobody will notice his insight and humor. (It hasn’t been working.)

What might be going on here?

Of course, there are pragmatic concerns about reputation and preserving anonymity. People don’t want their writing to be found by judgmental bosses or family members.  But that’s always been true — and, at any rate, social networking sites are often less anonymous than forums and blogs.

It might be that people have become more afraid of trolls, or that trolling has gotten worse. Fear of being targeted by harassment or threats might make people less open and expressive.  I’ve certainly heard many writers say that they’ve shut down a lot of their internet presence out of exhaustion or literal fear.  And I’ve heard serious enough horror stories that I respect and sympathize with people who are on their guard.

But I don’t think that really explains why one would drift towards more ephemeral media. Why short-form instead of long-form?  Why streaming feeds instead of searchable archives?  Trolls are not known for their patience and rigor.  Single tweets can attract storms of trolls.  So troll-avoidance is not enough of an explanation, I think.

It’s almost as though the issue were accountability.  

A blog is almost a perfect medium for personal accountability. It belongs to you, not your employer, and not the hivemind.  The archives are easily searchable. The posts are permanently viewable. Everything embarrassing you’ve ever written is there.  If there’s a comment section, people are free to come along and poke holes in your posts. This leaves people vulnerable in a certain way. Not just to trolls, but to critics.

You can preempt embarrassment by declaring that you’re doing something shitty anyhow. That puts you in a position of safety. I think that a lot of online mannerisms, like typing in all lowercase without punctuation, or using really self-deprecating language, or deeply nested meta-levels of meme irony, are ways of saying “I’m cool because I’m not putting myself out there where I can be judged.  Only pompous idiots are so naive as to think their opinions are actually valuable.”

Here’s another angle on the same issue.  If you earnestly, explicitly say what you think, in essay form, and if your writing attracts attention at all, you’ll attract swarms of earnest, bright-but-not-brilliant, mostly young white male commenters who want to share their opinions, because (perhaps naively) they think their contributions will be welcomed. It’s basically just “oh, are we playing a game? I wanna play too!”  If you don’t want to play with them — maybe because you’re talking about a personal or highly technical topic and don’t value their input, maybe because your intention was just to talk to your friends and not the general public, whatever — you’ll find this style of interaction aversive.  You’ll read it as sealioning. Or mansplaining.  Or “well, actually”-ing.

I think what’s going on with these kinds of terms is something like:

Author: “Hi! I just said a thing!”

Commenter: “Ooh cool, we’re playing the Discussion game! Can I join?  Here’s my comment!”  (Or, sometimes, “Ooh cool, we’re playing the Verbal Battle game!  I wanna play! Here’s my retort!”)

Author: “Ew, no, I don’t want to play with you.”

There’s a bit of a race/gender/age/educational slant to the people playing the “commenter” role, probably because our society rewards some people more than others for playing the discussion game.  Privileged people are more likely to assume that they’re automatically welcome wherever they show up, which is why others tend to get annoyed at them.

Privileged people, in other words, are more likely to think they live in a high-trust society, where they can show up to strangers and be greeted as a potential new friend, where open discussion is an important priority, where they can trust and be trusted, since everybody is playing the “let’s discuss interesting things!” game.

The unfortunate reality is that most of the world doesn’t look like that high-trust society.

On the other hand, I think the ideal of open discussion, and to some extent the past reality of internet discussion, is a lot more like a high-trust society where everyone is playing the “discuss interesting things” game, than it is like the present reality of social media.

A lot of the value generated on the 90’s and early 2000’s internet was built on people who were interested in things, sharing information about those things with like-minded individuals.  Think of the websites that were just catalogues of information about someone’s obsessions. (I remember my family happily gathering round the PC when I was a kid, to listen to all the national anthems of the world, which some helpful net denizen had collated all in one place.)  There is an enormous shared commons that is produced when people are playing the “share info about interesting stuff” game.  Wikipedia. StackExchange. It couldn’t have been motivated by pure public-spiritedness — otherwise people wouldn’t have produced so much free work.  There are lower motivations: the desire to show off how clever you are, the desire to be a know-it-all, the desire to correct other people.  And there are higher motivations — obsession, fascination, the delight of infodumping. This isn’t some higher plane of civic virtue; it’s just ordinary nerd behavior.

But in ordinary nerd behavior, there are some unusual strengths.  Nerds are playing the “let’s have discussions!” game, which means that they’re unembarrassed about sharing their take on things, and unembarrassed about holding other people accountable for mistakes, and unembarrassed about being held accountable for mistakes.  It’s a sort of happy place between perfectionism and laxity.  Nobody is supposed to get everything right on the first try; but you’re supposed to respond intelligently to criticism. Things will get poked at, inevitably.  Poking is friendly behavior. (Which doesn’t mean it’s not also aggressive behavior.  Play and aggression are always intermixed.  But it doesn’t have to be understood as scary, hostile, enemy.)

Nerd-format discussions are definitely not costless. You get discussions of advanced/technical topics being mobbed by clueless opinionated newbies, or discussions of deeply personal issues being hassled by clueless opinionated randos.  You get endless debate over irrelevant minutiae. There are reasons why so many people flee this kind of environment.

But I would say that these disadvantages are necessary evils that, while they might be possible to mitigate somewhat, go along with having a genuinely public discourse and public accountability.

We talk a lot about social media killing privacy, but there’s also a way in which it kills publicness, by allowing people to curate their spaces by personal friend groups, and retreat from open discussions.   In a public square, any rando can ask an aristocrat to explain himself.  If people hide from public squares, they can’t be exposed to Socrates’ questions.

I suspect that, especially for people who are even minor VIPs (my level of online fame, while modest, is enough to create some of this effect), it’s tempting to become less available to the “public”, less willing to engage with strangers, even those who seem friendly and interesting.  I think it’s worth fighting this temptation.  You don’t get the gains of open discussion if you close yourself off.  You may not capture all the gains yourself, but that’s how the tragedy of the commons works; a bunch of people have to cooperate and trust if they’re going to build good stuff together.  And what that means, concretely, on the margin, is taking more time to explain yourself and engage intellectually with people who, from your perspective, look dumb, clueless, crankish, or uncool.

Some of the people I admire most, including theoretical computer scientist Scott Aaronson, are notable for taking the time to carefully debunk crackpots (and offer them the benefit of the doubt in case they are in fact correct.)  Is this activity a great ROI for a brilliant scientist, from a narrowly selfish perspective?  No. But it’s praiseworthy, because it contributes to a truly open discussion. If scientists take the time to investigate weird claims from randos, they’re doing the work of proving that science is a universal and systematic way of thinking, not just an elite club of insiders.  In the long run, it’s very important that somebody be doing that groundwork.

Talking about interesting things, with friendly strangers, in a spirit of welcoming open discussion and accountability rather than fleeing from it, seems really underappreciated today, and I think it’s time to make an explicit push towards building places online that have that quality.

In that spirit, I’d like to recommend LessWrong to my readers. For those not familiar with it, it’s a discussion forum devoted to things like cognitive science, AI, and related topics, and, back in its heyday a few years ago, it was suffused with the nerdy-discussion-nature. It had all the enthusiasm of late-night dorm-room philosophy discussions — except that some of the people you’d be having the discussions with were among the most creative people of our generation.  These days, posting and commenting are a lot sparser, and the energy is gone, but I and some other old-timers are trying to rekindle it. I’m crossposting all my blog posts there from now on, and I encourage everyone to check out and join the discussions there.

(Cross-posted from my blog, https://srconstantin.wordpress.com/)

Industry Matters 2: Partial Retraction

6 sarahconstantin 23 November 2016 05:08PM

Epistemic status: still tentative

Some useful comments on the last post on manufacturing have convinced me of some weaknesses in my argument.

First of all, I think I was wrong that most manufacturing job loss is due to trade. There are several economic analyses, using different methods, that come to the conclusion that a minority of manufacturing jobs are lost to trade, with most of the remainder lost to labor productivity increases.

Second of all, I want to refine my argument about productivity.

Labor productivity and multifactor productivity in manufacturing, as well as output, have grown steadily throughout the 20th century, and continue to grow today. The claim “we are making more things than ever before in America” is true.

It’s also true that manufacturing employment has dropped slowly through the 70’s and 80’s until today.  This is plausibly due to improvements in labor productivity.

However, the striking, very rapid decline of manufacturing employment post-2000, in which half of all manufacturing jobs were lost in fifteen years, looks like a different phenomenon. And it does correspond temporally to a drop in output and productivity growth.  It also corresponds temporally to the establishment of normal trade relations with China, and there is more detailed evidence that there’s a causal link between job loss and competition with China.

My current belief is that the long-term secular decline in manufacturing employment is probably just due to the standard phenomenon where better efficiency leads to employing fewer workers in a field, the same reason that there are fewer farmers than there used to be.

However, something weird seems to have happened in 2000, something that hurt productivity.  It might be trade.  It might be some kind of “stickiness” effect where external shocks are hard to recover from, because there’s a lot of interdependence in industry, and if you lose one firm you might lose the whole ecosystem.  It might be some completely different thing. But I believe that there is a post-2000 phenomenon which is not adequately explained by just “higher productivity causes job loss.”

Most manufacturing job loss is due to productivity; only a minority is due to trade

David Autor’s economic analysis concluded that trade with China contributed 16% of the US manufacturing employment decline between 1990 and 2000, 26% of the decline between 2000 and 2007, and 21% over the full period.  He came to this conclusion by looking at particular manufacturing regions in the US, looking at their exposure to Chinese imports in the local industry, and seeing how much employment declined post-2000.  Regions with more import exposure had higher job loss.
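
As a gloss on that method, here's a minimal sketch of a cross-regional regression of job loss on import exposure — synthetic data for illustration only, not Autor's actual specification, controls, or instruments:

```python
# Illustrative sketch: regress regional manufacturing job loss on local
# import exposure. Synthetic data; not Autor's actual model or dataset.
import numpy as np

rng = np.random.default_rng(0)
exposure = rng.uniform(0, 10, size=200)                 # import exposure per worker (hypothetical)
job_loss = 0.5 * exposure + rng.normal(0, 1, size=200)  # synthetic employment decline

X = np.column_stack([np.ones_like(exposure), exposure])  # intercept + exposure
beta, *_ = np.linalg.lstsq(X, job_loss, rcond=None)
print(f"estimated job loss per unit of import exposure: {beta[1]:.2f}")
```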

Researchers at Ball State University also concluded that trade was responsible for a minority of manufacturing job loss during the period 2000-2010: 13.4% due to trade, and 87.8% due to manufacturing productivity.  This was calculated using import numbers and productivity numbers from the U.S. Census and the Bureau of Labor Statistics, under the simple model that the change in employment is a linear combination of the change in domestic consumption, the change in imports, the change in exports, and the change in labor productivity.
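
Roughly, the accounting identity behind such a model looks like this (my reconstruction for illustration; the study's exact specification may differ):

```latex
% My reconstruction, not the study's notation. Domestic production is
% consumption net of imports plus exports; employment is production
% divided by labor productivity.
E = \frac{C - M + X}{P},
\qquad
\Delta E \approx \frac{\partial E}{\partial C}\Delta C
              + \frac{\partial E}{\partial M}\Delta M
              + \frac{\partial E}{\partial X}\Delta X
              + \frac{\partial E}{\partial P}\Delta P
```

Here E is manufacturing employment, C domestic consumption of manufactured goods, M imports, X exports, and P real output per worker; each factor's contribution to the change in E is what gets reported as a share like “13.4% due to trade.”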

Josh Bivens of the Economic Policy Institute, using the same model as the Ball State economists, computes that imports were responsible for 21.15% of job losses between 2000 and 2003, while productivity growth was responsible for 84.32%.

Justin Pierce and Peter Schott of the Federal Reserve Board observe that industries where the 2000 normalization of trade relations with China would have increased imports the most were those that had the most job loss. Comparing job loss in above-median impact-from-China industries vs. below-median impact-from-China industries, the difference in job loss accounts for about 29% of the drop in manufacturing employment from 2000 to 2006.

I wasn’t able to find any economic analyses that argued that trade was responsible for a majority of manufacturing job losses. It seems safe to conclude that most manufacturing job loss is due to productivity gains, not trade.

It’s also worth noting that NAFTA doesn’t seem to have cost manufacturing jobs at all.

Productivity and output are growing, but have slowed since 2000.

Real output in manufacturing is growing, and has been since the 1980’s, but there are some signs of a slowdown.

Researchers at the Economic Policy Institute claim that slowing manufacturing productivity and output growth around 2000 led to the sharp drop in employment.  If real value added in manufacturing had continued growing at the rate it had been in 2000, it would be 1.4x as high today.

Manufacturing output aside from computers and electronic products has been slow-growing since the 90’s.  The average annual output growth rate, 1997-2015, in manufacturing, was 12% in computers, but under 4% in all other manufacturing sectors. (The next best was motor vehicles, at 3% output growth rate.)

US motor vehicle production has been growing far more slowly than global motor vehicle production.

Here are some BLS numbers on output in selected manufacturing industries:

[chart omitted: BLS output indices for selected manufacturing industries]

As an average over the time period, this growth rate represents about 2.5%-3.5% annual growth, which is roughly in line with GDP growth.  So manufacturing output growth averaged since the late 80’s isn’t unusually bad.

Labor productivity has also been rising in various industries:

[chart omitted: BLS labor productivity indices for selected industries]

However, when we look at the first and second derivatives of output and productivity, the picture looks worse.

Multifactor productivity seems to have flattened in the mid-2000’s, and multifactor productivity growth has dropped sharply.

Manufacturing labor productivity growth is positive, but lower than it’s been historically, at about 0.45% in 2014, and a 4-year moving average of 2.1%, compared to 3-4% growth in the 90’s.

Multifactor productivity in durable goods is down in absolute terms since about 2000 and hasn’t fully recovered.

(Multifactor productivity measures the portion of output growth not accounted for by growth in labor and capital inputs — roughly, how much more we get out of the same inputs. If multifactor productivity isn’t growing, then while we may be investing in more capital, it’s not necessarily better capital.)
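
For reference, the standard growth-accounting formalization (a textbook form, not the BLS's exact method) treats multifactor productivity A as the residual after accounting for capital K and labor L:

```latex
% Cobb-Douglas growth accounting (textbook form, not the BLS's exact method):
% A is multifactor productivity, the growth "left over" after inputs.
Y = A \, K^{\alpha} L^{1-\alpha}
\quad\Longrightarrow\quad
\frac{\dot A}{A} = \frac{\dot Y}{Y} - \alpha \frac{\dot K}{K} - (1-\alpha) \frac{\dot L}{L}
```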

Labor productivity growth in electronics is dropping and has just become negative.

Labor productivity growth in the auto industry is staying flat at about 2%.

Manufacturing output growth has dropped very recently, post-recession, to about 0. From the 80’s to the present, it was roughly steady at 1%.  By contrast, global manufacturing growth is much higher: 6.5% in China, 1.9% globally.  And US GDP growth is about 2.5% on average.

In some industries, like auto parts and textiles,  raw output has dropped since 2000. (Although, arguably, these are lower-value industries and the US is moving up the value chain.)

Looking back even farther, there is a slowdown in multifactor productivity growth in manufacturing, beginning in the early 70’s. Multifactor productivity grew by 1.5% annually from 1949-1973, and only by 0.3% in 1973-1983.  Multifactor productivity today isn’t unprecedentedly low, but it’s dropping to the levels of stagnation we saw in the 1970’s.

Basically, recent labor productivity growth is positive but slowing, and in some cases negative; output is growing more slowly than GDP; and multifactor productivity is dropping. This points to there being something to worry about.

What might be going on?

Economist Jared Bernstein argues that automation doesn’t explain the whole story of manufacturing job loss. If you exclude the computer industry, manufacturing output is only about 8% higher than it was in 1997, and lower than it was before the Great Recession.  The growth in manufacturing output has been “anemic.”  He says that factory closures have large spillover effects. Shocks like the rise of China, or a global glut of steel in the 1980’s, lead to US factory closures; and then when demand recovers, the US industries don’t.

This model also fits with the fact that proximity matters a lot.  It’s valuable, for knowledge-transfer reasons, to build factories near suppliers.  So if parts manufacturing moves overseas, the factories that assemble those parts are likely to relocate as well. It’s also valuable, due to shipping costs, to locate manufacturing near expensive-to-ship materials like steel or petroleum, and in places with good transportation infrastructure. So there can be stickiness/spillover effects, where, once global trade makes it cheaper to make parts and raw materials in China, there are incentives pushing higher-value manufacturing to relocate there as well.

It doesn’t seem to be entirely coincidence that the productivity slowdown coincided with the opening of trade with China. The industries where employment dropped most after 2000 were those where the risk of tariffs on Chinese goods dropped the most.

However, this story is still consistent with the true claim that most lost manufacturing jobs are lost to productivity, not trade. Multifactor productivity may be down and output and labor productivity may be slowing, but output is still growing, and that growth is still big enough to drive most job loss.

Crossposted from my blog: https://srconstantin.wordpress.com/2016/11/23/industry-matters-2-partial-retraction/

[Link] Less costly signaling

12 paulfchristiano 22 November 2016 09:11PM

[Link] Costs are not benefits

5 philh 03 November 2016 09:32PM

The True Prisoner's Dilemma

54 Eliezer_Yudkowsky 03 September 2008 09:34PM

It occurred to me one day that the standard visualization of the Prisoner's Dilemma is fake.

The core of the Prisoner's Dilemma is this symmetric payoff matrix:

           1: C      1: D
2: C     (3, 3)    (5, 0)
2: D     (0, 5)    (2, 2)

Player 1, and Player 2, can each choose C or D.  1 and 2's utility for the final outcome is given by the first and second number in the pair.  For reasons that will become apparent, "C" stands for "cooperate" and "D" stands for "defect".

Observe that a player in this game (regarding themselves as the first player) has this preference ordering over outcomes:  (D, C) > (C, C) > (D, D) > (C, D).

D, it would seem, dominates C:  If the other player chooses C, you prefer (D, C) to (C, C); and if the other player chooses D, you prefer (D, D) to (C, D).  So you wisely choose D, and as the payoff table is symmetric, the other player likewise chooses D.
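
A tiny sketch (mine, not from the original post) makes the dominance check mechanical:

```python
# Check dominance in the payoff matrix above.
# payoff[(my_move, their_move)] = (my_utility, their_utility)
payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (2, 2)}

for their_move in ("C", "D"):
    best = max(("C", "D"), key=lambda my: payoff[(my, their_move)][0])
    print(f"Against {their_move}, your best reply is {best}")
# D wins both times: D strictly dominates C -- yet (C, C) beats (D, D) for both.
```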

If only you'd both been less wise!  You both prefer (C, C) to (D, D).  That is, you both prefer mutual cooperation to mutual defection.

The Prisoner's Dilemma is one of the great foundational issues in decision theory, and enormous volumes of material have been written about it.  Which makes it an audacious assertion of mine, that the usual way of visualizing the Prisoner's Dilemma has a severe flaw, at least if you happen to be human.

continue reading »

The Epistemic Prisoner's Dilemma

33 MBlume 18 April 2009 05:36AM

Let us say you are a doctor, and you are dealing with a malaria epidemic in your village. You are faced with two problems. First, you have no access to the drugs needed for treatment. Second, you are one of two doctors in the village, and the two of you cannot agree on the nature of the disease itself. You, having carefully tested many patients, being a highly skilled, well-educated diagnostician, have proven to yourself that the disease in question is malaria. Of this you are >99% certain. Yet your colleague, the blinkered fool, insists that you are dealing with an outbreak of bird flu, and to this he assigns >99% certainty.

Well, it need hardly be said that someone here is failing at rationality. Rational agents do not have common knowledge of disagreements etc. But... what can we say? We're human, and it happens.

So, let's say that one day, Omega — er, Dr. House — calls you both into his office and tells you that he knows, with certainty, which disease is afflicting the villagers. As confident as you both are in your own diagnoses, you are even more confident in House's abilities. House, however, will not tell you his diagnosis until you've played a game with him. He's going to put you in one room and your colleague in another. He's going to offer you a choice between 5,000 units of malaria medication, and 10,000 units of bird-flu medication. At the same time, he's going to offer your colleague a choice between 5,000 units of bird-flu meds, and 10,000 units of malaria meds.
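
To make the stakes concrete, here is a back-of-the-envelope sketch of my own (not from the post): the expected number of useful doses, computed from your 99%-malaria perspective.

```python
# Expected useful doses from YOUR perspective (99% confident it's malaria).
# "Defect" = follow your own diagnosis; "cooperate" = defer to your colleague's.
p_malaria = 0.99

def expected_useful(malaria_doses, birdflu_doses):
    return p_malaria * malaria_doses + (1 - p_malaria) * birdflu_doses

both_defect = expected_useful(5_000, 0) + expected_useful(0, 5_000)       # 4,950 + 50
both_cooperate = expected_useful(0, 10_000) + expected_useful(10_000, 0)  # 100 + 9,900
print(both_defect, both_cooperate)  # 5000.0 vs 10000.0
```

Whichever diagnosis is actually right, mutual deference delivers 10,000 useful doses where mutual self-trust delivers 5,000 — the same structure as the payoff matrix in the previous post.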

continue reading »
