In response to comment by curious on My Way
Comment author: Eliezer_Yudkowsky 17 April 2009 06:56:03PM 11 points [-]

I can't stand watching sports. I don't have a problem with that either.

I think if we lived in a world balanced between genders, where men thought of themselves as men and women thought of themselves as women to around the same degree, then women would have no more difficult a time departing from gender average than men do.

In response to comment by Eliezer_Yudkowsky on My Way
Comment author: jyasskin 29 December 2010 10:42:25PM 13 points [-]

Within-group differences are larger than between-group differences in most of these domains, so I'd rather make it easier for both groups to deviate from their group tendencies than to try to identify more group tendencies that it will be hard to deviate from.

Comment author: jyasskin 29 December 2010 06:15:59PM *  4 points [-]

I don't see what I thought were the obvious answers, so here they are. The foundations are elsewhere on the site, but they seemed missing from this list.

Reputational: Expect Bayesian masters to participate in other scientific fields. People who make more discoveries in other fields get more street cred among rationalists, especially when they can explain how rationalism helped them make the discoveries. Obviously, this is a long-term process that doesn't lend itself to improving the art quickly.

Experimental: This one's a two-step process. First, ask a large collection of university professors to insert one lie into each of their lectures, à la http://www.overcomingbias.com/2008/02/my-favorite-lia.html (mentioned in another comment). Have them note which students catch each lie, but don't count that toward any grade (to prevent gaming). Second, sort students randomly into the experimental rationality classes, and/or have the classes "fill up" (with a lottery for seats) to provide a control. Look for a difference in lie-detection rates between the differently-taught groups.

Experimental #2, much longer term: Track the career outcomes of the students who took each different rationality class. See whether there's a difference in winning between the groups.

Comment author: JGWeissman 01 April 2009 05:21:06AM 3 points [-]

With the Bayes-score always being negative, I don't see what incentive one would have to submit a mistake report. I think it would be better to test for, say, better-than-90% confidence by awarding 1 point for a correct report and deducting 9 points for an incorrect one. This still tests the ability to detect bad arguments. Measuring calibration would have to be a separate test.
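A minimal sketch of the arithmetic behind this proposal (the `expected_score` function and the 10%/90% framing are my illustration, not anything from the thread): with +1 for a correct report and -9 for an incorrect one, submitting has positive expected score exactly when the reporter's probability that the email really contains a planted mistake exceeds 90%.

```python
def expected_score(p_mistake, reward=1, penalty=9):
    """Expected points from submitting a mistake report, given the
    reporter's probability that the email really contains a mistake."""
    return p_mistake * reward - (1 - p_mistake) * penalty

# Break-even sits at exactly 90% confidence:
assert abs(expected_score(0.9)) < 1e-9
# Above 90%, reporting is worth it; below, silence scores better.
assert expected_score(0.95) > 0
assert expected_score(0.5) < 0
```

The penalty-to-reward ratio sets the confidence threshold: r points lost per point gained makes reporting break even at r/(r+1) confidence, so 9:1 gives the 90% test described above.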

Comment author: jyasskin 29 December 2010 05:41:46PM 1 point [-]

Treat not submitting a mistake report as the "I have no idea" claim: that you've assigned a probability of "mistakes/total emails" to this particular email being a mistake.
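A sketch of how that default claim scores, assuming "Bayes-score" means the usual log score (log of the probability assigned to what actually happened); the 10-mistakes-in-100-emails base rate is a made-up number for illustration:

```python
import math

def log_score(p_assigned_to_truth):
    """Bayes/log score: log of the probability assigned to the actual outcome.
    Always negative unless the forecaster assigned probability 1."""
    return math.log(p_assigned_to_truth)

base_rate = 10 / 100  # hypothetical: 10 planted mistakes among 100 emails

# If the email really contains a mistake, silence is scored as the base-rate
# claim, and an explicit confident report beats it:
assert log_score(0.9) > log_score(base_rate)

# If the email is clean, silence implicitly assigns 1 - base_rate to
# "no mistake" and beats a wrong confident report:
assert log_score(1 - base_rate) > log_score(1 - 0.9)
```

Under this scheme nobody is forced into a negative score they could avoid: not reporting is itself a (low-confidence) prediction, so submitting a report only helps when you genuinely believe this email is more likely than the base rate to be a mistake.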