Epistemic Luck

74 Alicorn 08 February 2010 12:02AM

Who we learn from and with can profoundly influence our beliefs. There's no obvious way to compensate.  Is it time to panic?

During one of my epistemology classes, my professor admitted (I can't recall the context) that his opinions on the topic would probably be different had he attended a different graduate school.

What a peculiar thing for an epistemologist to admit!

Of course, on the one hand, he's almost certainly right.  Schools have their cultures, their traditional views, their favorite literature providers, their set of available teachers.  These have a decided enough effect that I've heard "X was a student of Y" used to mean "X holds views basically like Y's".  And everybody knows this.  And people still show a distinct trend of agreeing with their teachers' views, even the most controversial - not an unbroken trend, but still an obvious one.  So it's not at all unlikely that, yes, had the professor gone to a different graduate school, he'd believe something else about his subject, and he's not making a mistake in so acknowledging...

But on the other hand... but... but...

But how can he say that, and still look so unskeptically at the views he picked up this way?  Surely the truth about knowledge and justification isn't correlated with which school you went to - even a little bit!  Surely he knows that!

continue reading »

False Majorities

35 JamesAndrix 03 February 2010 06:43PM

If a majority of experts agree on an issue, a rationalist should be prepared to defer to their judgment. It is reasonable to expect that the experts have superior knowledge and have considered many more arguments than a lay person would be able to. However, if experts are split into camps that reject each other's arguments, then it is rational to take their expert rejections into account. This is the case even among experts that support the same conclusion.

If 2/3 of experts support proposition G (1/3 because of reason A while rejecting B, and 1/3 because of reason B while rejecting A) and the remaining 1/3 reject both A and B, then a majority rejects A and a majority rejects B. G should not be treated as a reasonable majority view.

This should be clear if A is the Koran and B is the Bible.
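The counting argument above can be checked with a short sketch (the camp structure is the hypothetical one from the example, assuming three equal-sized camps):

```python
# Three equal camps of experts, as in the example above.
# Camp 1 supports G because of reason A and rejects B.
# Camp 2 supports G because of reason B and rejects A.
# Camp 3 rejects both A and B (and hence G).
camps = [
    {"supports_G": True,  "accepts_A": True,  "accepts_B": False},
    {"supports_G": True,  "accepts_A": False, "accepts_B": True},
    {"supports_G": False, "accepts_A": False, "accepts_B": False},
]

def fraction(key):
    """Fraction of camps (equal-weighted) holding the given position."""
    return sum(c[key] for c in camps) / len(camps)

print(f"support G: {fraction('supports_G'):.2f}")  # 2/3 support G...
print(f"accept A:  {fraction('accepts_A'):.2f}")   # ...yet only 1/3 accept A
print(f"accept B:  {fraction('accepts_B'):.2f}")   # ...and only 1/3 accept B
```

So a 2/3 majority for the conclusion coexists with 2/3 majorities against each of the reasons offered for it.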

continue reading »

Probability Space & Aumann Agreement

34 Wei_Dai 10 December 2009 09:57PM

The first part of this post describes a way of interpreting the basic mathematics of Bayesianism. Eliezer already presented one such view at http://lesswrong.com/lw/hk/priors_as_mathematical_objects/, but I want to present another one that has been useful to me, and also show how this view is related to the standard formalism of probability theory and Bayesian updating, namely the probability space.

The second part of this post will build upon the first, and try to explain the math behind Aumann's agreement theorem. Hal Finney had suggested this earlier, and I'm taking on the task now because I recently went through the exercise of learning it, and could use a check of my understanding. The last part will give some of my current thoughts on Aumann agreement.

continue reading »

Agree, Retort, or Ignore? A Post From the Future

35 Wei_Dai 24 November 2009 10:29PM

My friend Sasha, the software archaeology major, informed me the other day that there was once a widely used operating system, which, when it encountered an error, would often get stuck in a loop and repeatedly present to its user the options Abort, Retry, and Ignore. I thought this was probably another one of her often incomprehensible jokes, and gave a nervous laugh. After all, what interface designer would present "Ignore" as a possible user response to a potentially catastrophic system error without any further explanation?

Sasha quickly assured me that she wasn't joking. She told me that early 21st century humans were quite different from us. Not only did they routinely create software like that, they could even ignore arguments that contradicted their positions or pointed out flaws in their ideas, and did so publicly without risking any negative social consequences. Discussions even among self-proclaimed truth-seekers would often conclude, not by reaching a rational consensus or an agreement to mutually reassess positions and approaches, or even by a unilateral claim that further debate would be unproductive, but when one party simply failed to respond to the arguments or questions of another without giving any indication of the status of their disagreement.

At this point I was certain that she was just yanking my chain. Why didn't the injured party invoke rationality arbitration and get a judgment on the offender for failing to respond to a disagreement in a timely fashion, I asked? Or publicize the affair and cause the ignorer to become a social outcast? Or, if neither of these mechanisms existed or provided sufficient reparation, challenge the ignorer to a duel to the death? For that matter, how could those humans, only a few generations removed from us, not feel an intense moral revulsion at the very idea of ignoring an argument?

continue reading »

Light Arts

13 Alicorn 06 November 2009 03:54AM

tl;dr: It is worthwhile to convince people that they already, by their own lights, have reasons to believe true things, as this is faster, easier, nicer, and more effective than helping them create from scratch reasons to believe those things.

This is not part of the problem-solving sequence.  I do plan to finish that, but the last post is eluding me.

Related: Whatever it is I was thinking of here (let me know if you can dig up what it was).

Today, while waiting for a bus, I heard the two girls sitting on the bench next to mine talking about organ donation.  One said that she was thinking of ceasing to be an organ donor, because she'd heard that doctors don't try as hard to save donors in hopes of using their organs to save other lives.

My bus was approaching.  I didn't know the girl and could hardly follow up later with an arsenal of ironclad counterarguments.  There was no time, and probably no receptivity, to engage in a lengthy discussion of why this medical behavior wouldn't happen.  No chance to fire up my computer, try to get on the nearest wireless, and pull up empirical stats that say it doesn't happen.

So I chuckled and interjected, at a convenient gap in her ramble, "That's why you carry a blood donor card, too, so they think if you stay alive they'll keep getting blood from you!"

Some far-off potential tragic crisis averted?  Maybe.  She looked thoughtful, nodded, said that she did have a blood donor card, and that my suggestion made sense.  I boarded my bus and it carried me away.  I hope she's never hit by a cement truck.  I hope that if she is hit by a cement truck, a stupid rumor she heard once doesn't turn it into as complete a waste as it would have to be without the wonders of organ transplant.

continue reading »

Intuitive differences: when to agree to disagree

18 Kaj_Sotala 29 September 2009 07:56AM

Two days back, I had a rather frustrating disagreement with a friend. The debate rapidly hit a point where it seemed to be going nowhere, and we spent a while going around in circles before agreeing to change the topic. Yesterday, as I was riding the subway, things clicked. I suddenly realized not only what the disagreement had actually been about, but also what several previous disagreements we'd had were about. In all cases, our opinions and arguments had been grounded in opposite intuitions:

  • Kaj's intuition. In general, we can eventually learn to understand a phenomenon well enough to create a model that is flexible and robust. Coming up with the model is the hard part, but once that is done, adapting the general model to account for specific special cases is a relatively straightforward and basically mechanical process.
  • Friend's intuition. In general, there are some phenomena which are too complex to be accurately modeled. Any model you create for them is brittle and inflexible: adapting the general model to account for specific special cases takes almost as much work as creating the original model in the first place.

You may notice that these intuitions are not mutually exclusive in the strict sense. They could both be right, one of them covering certain classes of things and the other the remaining ones. And neither one is obviously and blatantly false - both have evidence supporting them. So the disagreement is not about which one is right, as such. Rather, it's a question of which one is more right, which is the one with broader applicability.

As soon as I realized this, I also realized two other things. One, whenever we would run into this difference in the future, we'd need to recognize it and stop that line of debate, for it wouldn't be resolved before the root disagreement had been solved. Two, actually resolving that core disagreement would take so much time and energy that it probably wouldn't be worth the effort.

continue reading »

The Finale of the Ultimate Meta Mega Crossover

31 Eliezer_Yudkowsky 20 September 2009 05:21PM

So I'd intended this story as a bit of utterly deranged fun, but it got out of control and ended up as a deep philosophical exploration, and now those of you who care will have to wade through the insanity.  I'm sorry.  I just can't seem to help myself.

I know that writing crossover fanfiction is considered one of the lower levels to which an author can sink.  Alas, I've always been a sucker for audacity, and I am the sort of person who couldn't resist trying to top the entire... but never mind, you can see for yourself.

Click on to read my latest story and first fanfiction, a Vernor Vinge x Greg Egan crackfic.

The Aumann's agreement theorem game (guess 2/3 of the average)

15 [deleted] 09 June 2009 07:29AM

I'd like to play a game with you. Send me, privately, a real number between 0 and 100, inclusive. (No funny business. If you say "my age", I'm going to throw it out.) The winner of this game is the person who, after a week, guesses the number closest to 2/3 of the average guess. I will reveal the average guess, and will confirm the winner's claims to have won, but I will reveal no specific guesses.

Suppose that you're a rational person. You also know that everyone else who plays this game is rational, you know that they know that, and so on. Suppose you conclude that the best guess is some number P. But since P is the rational guess to make, everyone will guess P, and so the best guess to make is actually P*2/3. Consistency then requires P = P*2/3, which solves to P = 0.
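The fixed-point reasoning above can also be run numerically (a toy illustration, assuming everyone best-responds to the previous round's consensus guess, starting from an arbitrary value):

```python
# Best-response iteration for the "guess 2/3 of the average" game.
# If everyone expects the consensus guess to be g, the best reply is (2/3)*g.
# Iterating this reasoning drives the guess toward the fixed point g = 0.
g = 50.0  # arbitrary starting guess
for step in range(50):
    g = (2 / 3) * g

print(g)  # after 50 rounds of reasoning, g is very close to 0
```

The iteration converges geometrically, which is why "infinitely rational" common knowledge pins the answer to exactly zero.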

I propose that this game be used as a sort of test to see how well Aumann's agreement theorem applies to a group of people. The key assumption the theorem makes--which, as taw points out, is often overlooked--is that the group members are all rational and honest and also have common knowledge of this. This same assumption implies that the average guess will be 0. The farther from the truth this assumption is, the farther the average guess is going to be from 0, and the farther Aumann's agreement theorem is from applying to the group.

Update (June 20): The game is finished; sorry for the delay in getting the results. The average guess was about 13.235418197890148 (a number which probably contains as much entropy as its length), meaning that the winning guess is the one closest to 8.823612131926765. This number appears to be significantly below the number typical for groups of ordinary people, but not dramatically so. 63% of guesses were too low, indicating that people were overall slightly optimistic about the outcome (if you interpret lower as better). Anyway, I will notify the winner right away.

Dissenting Views

19 byrnema 26 May 2009 06:55PM

Occasionally, concerns have been expressed from within Less Wrong that the community is too homogeneous. Certainly the observation of homogeneity is true to the extent that the community shares common views that are minority views in the general population.

Maintaining a High Signal to Noise Ratio

The Less Wrong community shares an ideology that it is calling ‘rationality’ (despite some attempts to rename it, this is what it is). A burgeoning ideology needs a lot of faithful support in order to develop true. By this, I mean that the ideology needs a chance to define itself as it would define itself, without a lot of competing influences watering it down, adding impure elements, distorting it. In other words, you want to cultivate a high signal to noise ratio.

For the most part, Less Wrong is remarkably successful at cultivating this high signal to noise ratio. A common ideology attracts people to Less Wrong, and then karma is used to maintain fidelity. It protects Less Wrong from the influence of outsiders who just don't "get it". It is also used to guide and teach people who are reasonably near the ideology but need some training in rationality. Thus, karma is awarded for views that align especially well with the ideology, align reasonably well, or that align with one of the directions that the ideology is reasonably evolving.

continue reading »

How to use "philosophical majoritarianism"

8 jimmy 05 May 2009 06:49AM

The majority of people would hold more accurate beliefs if they simply believed the majority. To state this in a way that doesn't risk information cascades, we're talking about averaging impressions and coming up with the same belief.

To the degree that you come up with different averages of the impressions, you acknowledge that your belief was just your impression of the average, and you average those metaimpressions and get closer to belief convergence. You can repeat this until you get bored, but if you're doing it right, your beliefs should get closer and closer to agreement, and you shouldn't be able to predict who is going to fall on which side.
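The repeated-averaging process described above can be sketched as a toy simulation (the starting impressions and the noise model are made up for illustration; each round, both parties move to their imperfect estimate of the current average):

```python
# Toy sketch of iterated averaging of impressions, as described above.
# Each round, both parties adopt a noisy estimate of the current average;
# the estimation error shrinks with the gap, so the beliefs converge.
import random

random.seed(0)
a, b = 30.0, 70.0  # initial impressions of some quantity

for _ in range(20):
    gap = abs(a - b)
    avg = (a + b) / 2
    # each party estimates the average with error proportional to the gap
    a = avg + random.uniform(-0.1, 0.1) * gap
    b = avg + random.uniform(-0.1, 0.1) * gap

print(abs(a - b))  # the remaining disagreement is negligible
```

Because each round's error is bounded by a fraction of the current gap, the gap shrinks geometrically, matching the claim that neither party should be able to predict who ends up on which side.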

Of course, most of us are atypical cases, and as good rationalists, we need to update on this information. Even if our impressions were (on average) no better than the average, there are certain cases where we know that the majority is wrong. If we're going to selectively apply majoritarianism, we need to figure out the rules for when to apply it, to whom, and how the weighting works.

This much I think has been said again and again. I'm gonna attempt to describe how.

continue reading »
