NancyLebovitz comments on Navigating disagreement: How to keep your eye on the evidence - Less Wrong

37 Post author: AnnaSalamon 24 April 2010 10:47PM


Comment author: AnnaSalamon 24 April 2010 10:50:03PM *  4 points [-]

Re: problem 1: Jelly bean number estimates are just like thermometer readings, except that the reading is in someone’s head, rather than their hand. So the obvious answer is to average everyone’s initial, solitary impressions, absent reason to expect one individual or another is an above-average (or below-average) estimator.
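The value of the plain average can be seen in a small numerical sketch (the guesses and true count below are invented for illustration): independent errors partly cancel, so the group mean lands much closer to the truth than a typical individual does.

```python
# Made-up example: eight solitary jelly-bean guesses around a true count of 500.
TRUE_COUNT = 500
guesses = [480, 530, 455, 610, 390, 520, 470, 575]

group_avg = sum(guesses) / len(guesses)                                   # 503.75
avg_error = abs(group_avg - TRUE_COUNT)                                   # 3.75
typical_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)  # 55.0

# Independent errors partly cancel: the average misses by 3.75,
# while a typical individual misses by 55.
print(avg_error, typical_error)
```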

If your friends use lopsided weighting schemes in their second answers, should you re-update? This depends a lot on your friends.

  • Don't re-update from their answers if you think they don't understand the merits of averaging; you want to weight each person's raw impression evenly, not to overweight it based on how many others were randomly influenced by it (cf. information cascades: http://en.wikipedia.org/wiki/Information_cascade).
  • Do re-update if your friends understand the merits of averaging, such that their apparent over-weighting of a few people's datapoints suggests they know something you don't (e.g., perhaps your friend Julie has won past championships in jelly-bean estimation, and everyone but you knows it).
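The first bullet's cascade worry can be made concrete with a toy simulation (the numbers and the announcement rule are my own invention, not from the comment): if each person announces the mean of their private impression and all prior announcements, then averaging the announcements double-counts the early speakers.

```python
# Toy information cascade: each person announces the mean of their own
# private impression and everything announced before them.
impressions = [400, 600, 500, 520]   # made-up raw guesses

announced = []
for x in impressions:
    announced.append(sum([x] + announced) / (len(announced) + 1))

raw_mean = sum(impressions) / len(impressions)      # 505.0
cascade_mean = sum(announced) / len(announced)      # ~459.6

# Averaging announcements overweights the first speaker's guess of 400,
# dragging the group answer well below the mean of the raw impressions.
print(raw_mean, round(cascade_mean, 1))
```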
Comment author: NancyLebovitz 25 April 2010 12:59:46AM 3 points [-]

Since I know those people, I would weight their answers according to my best estimate of their skill at such tasks, and then average the whole group, including me.
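One way to write this procedure down (the names, estimates, and skill weights here are invented for illustration, not a claim about how NancyLebovitz would set them):

```python
# Hypothetical skill-based weights: higher = more trusted estimator.
estimates = {"me": 510, "Julie": 495, "Sam": 560, "Alex": 430}
weights   = {"me": 1.0, "Julie": 3.0, "Sam": 1.0, "Alex": 0.5}

weighted_avg = (sum(estimates[p] * weights[p] for p in estimates)
                / sum(weights.values()))
print(round(weighted_avg, 1))  # pulled toward Julie's estimate
```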

Comment author: Peter_de_Blanc 27 April 2010 12:11:07AM 2 points [-]

> Since I know those people, I would weight their answers according to my best estimate of their skill at such tasks, and then average the whole group, including me.

Doing this correctly can get pretty complicated. Basically, the more people you have, the less you should weight the low-quality estimates compared to the high-quality estimates.

For example, suppose that "good" thermometers are unbiased and "bad" thermometers are all biased in the same direction, but you don't know which direction.

If you have one thermometer which you know is good, and one which you're 95% sure is good, then you should weight both measurements about the same.

But if you have 10^6 thermometers which you know are good, and 10^6 which you're 95% sure are good, then you should pretty much ignore the possibly-bad ones.
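This asymmetry falls out of inverse-variance weighting of the two group means, under one simplified model (my own sketch, not necessarily Peter_de_Blanc's exact setup): per-reading noise averages away as the group grows, but the suspect group's possible shared bias does not.

```python
# Sketch: inverse-variance weights for the good-group mean vs. the
# suspect-group mean, under assumed noise and bias parameters.
SIGMA2 = 1.0   # per-reading noise variance (assumed)
P_BAD = 0.05   # probability the suspect group shares a bias
B2 = 4.0       # squared bias magnitude if bad (assumed)

def group_weights(n_good, n_suspect):
    var_good = SIGMA2 / n_good                      # shrinks with n
    var_suspect = SIGMA2 / n_suspect + P_BAD * B2   # bias term never shrinks
    w_good = 1.0 / var_good
    w_suspect = 1.0 / var_suspect
    total = w_good + w_suspect
    return w_good / total, w_suspect / total

print(group_weights(1, 1))            # roughly comparable weights
print(group_weights(10**6, 10**6))    # suspect group nearly ignored
```

With one thermometer per group, the two weights are close to 50/50; with a million per group, the suspect group's weight collapses toward zero, matching the comment's intuition.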

Comment author: NancyLebovitz 27 April 2010 01:02:08AM 0 points [-]

Not that it matters tremendously, but I was thinking of the jelly bean problem.

Comment author: Jonathan_Graehl 26 April 2010 11:47:03PM 1 point [-]

What kind of weighted average?

Comment author: NancyLebovitz 26 April 2010 11:59:47PM 1 point [-]

My math isn't good enough to formalize it-- I'd do it by feel.

Comment author: Jonathan_Graehl 28 April 2010 12:05:02AM 1 point [-]

Drat - likewise.