I had an idea: has it been done before, and if not, shouldn't somebody try to do it? I live in Melbourne, where LWers aren't organised, but perhaps the New York branch or some other group could try this? (If it hasn't been done, that is; I haven't seen it mentioned, which is why I'm checking.)

Idea:

-Recruit a non-rationalist scientist (or, better, a group of scientists), either by persuading friends, pooling some money together to pay somebody, or finding a helpful volunteer (or perhaps several).

-Have THEM come up with a series of tests comparing rationalists to a control group.

If 'successful' (in the sense of a significant difference between rationalists and non-rationalists, the result we on LW would presumably predict), it would provide enough evidence to justify a formal test (the informal version would likely have a few weaknesses, such as recruiting friends of rationalists), which could then (again, assuming such a result) persuade scientists to become rationalists (the benefits should be obvious with a bit of thought) and generate publicity. If it's a 'failure' or indecisive, it justifies a serious re-evaluation of site methodology.
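To make the comparison concrete, here is a rough sketch (in Python, with made-up numbers and a hypothetical "score" from whatever battery the scientists design) of the kind of analysis I have in mind:

```python
# A toy sketch, not a real study design: compare a hypothetical "rationality
# battery" score between an LW group and a control group with Welch's t-test.
# All numbers below are invented purely for illustration.
from scipy import stats

lw_scores = [0.71, 0.64, 0.80, 0.58, 0.69, 0.75, 0.62, 0.77]       # hypothetical
control_scores = [0.55, 0.60, 0.52, 0.66, 0.49, 0.58, 0.61, 0.54]  # hypothetical

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(lw_scores, control_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A small p-value would be the 'significant difference' I mean by 'successful'; obviously a real test would need far more care about sampling and pre-registration than this toy comparison suggests.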


Sorry for the dumb question, but: test rationalists on what relative to a control group?

Not at all a dumb question.

I assume the idea is to test "Rationality".

Basically, how good are LW readers at being right about tricky stuff relative to similarly intelligent folk who aren't LW readers?

I would think this would encompass (non-exhaustively) decision-making under risk and uncertainty, inference from incomplete information, making accurate predictions, and being well-calibrated.
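Calibration, at least, is straightforward to score. A toy sketch (hypothetical confidences and outcomes) using the Brier score:

```python
# Toy calibration scoring with the Brier score: the mean squared difference
# between stated confidence and the actual outcome (1 = correct, 0 = incorrect).
# Lower is better; always saying 50% on binary questions scores 0.25.
def brier_score(confidences, outcomes):
    return sum((c - o) ** 2 for c, o in zip(confidences, outcomes)) / len(confidences)

stated = [0.9, 0.6, 0.8, 0.5, 0.99]   # hypothetical stated confidences
correct = [1, 1, 0, 1, 1]             # whether each answer was actually right

print(f"Brier score: {brier_score(stated, correct):.3f}")
```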

Precisely what you would test (which may have been your point) is a very good question and is not at all obvious.

My original idea was to test the extent of cognitive biases (known to be measurable, since scientific testing discovered them in the first place), but that works too; either would serve the idea's purposes.

Please choose more informative titles in future.

Regular reality checks are definitely a good idea.

How rigorous would a test like this need to be in order to yield useful information about whether we should run a more serious followup?

I'd try to answer that, but my knowledge of the scientific method isn't too good.

On a very similar point, though: a test rigorous enough to persuade scientists (rationally or irrationally) to conduct a professional test would be sufficient. Since such a professional test would be the optimal outcome anyway if it could come about some other way, I think that's the pragmatic threshold to aim for.
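I'm no statistician, but as a toy illustration of the numbers involved, here's a rough power calculation (the assumed effect size is made up for illustration, not a prediction):

```python
# Toy power calculation: participants needed per group for a two-sample
# comparison, assuming a medium effect (Cohen's d = 0.5), 5% significance,
# and 80% power. The effect size is an assumption, not a measured value.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_per_group:.0f}")
```

Smaller assumed effects push the required sample size up quickly, which I'd guess is part of why an informal pilot could only justify a formal test rather than replace one.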

Two comments:

  1. The hard part about this seems to be finding a control group. I'm pretty sure that the average LW reader would have done better on any test you can find that's supposed to measure "rationality" even before they'd read any of the site. Where do you get a group of "people who haven't read LW yet, but are the sort of person who might read LW"?

  2. If we did manage to find a control group, what's supposed to be the benefit of asking a non-LWer to decide on the tests? This is supposed to be an experiment to actually find out stuff about the universe: we have just as much interest, if not more, in its results being accurate as the average person does.

Regarding 2, the reason to have a non-LWer is presumably because we are more likely to be biased and thus introduce subtle biases that favor LWians. Don't underestimate the human capacity for self-deception.

You have to compare that to the baseline chance of someone being biased, though. It might be that the bias introduced by wanting LessWrong to show actual gains is smaller than the gap between a LWer and the average person.

You also have to consider that a typical scientist is less biased at work (as shown by the fact that their scientific tests tend to be more accurate than, say, their life choices or political opinions) and is used to rigorous standards in such things.

It may be, but would you trust any such test run by another non-mainstream group, if they used one of their own to adjudicate the result?

Not from the outside, no.

As suggested in the OP, they have to create the tests, not only evaluate their results. Even if average LWers want to find out whether LW memes are actually helpful, they are likely to be biased in choosing the criteria of rationality. For example, a test made by a LWer would more likely include a Newcomb-style question where one-boxing is classified as the rational answer, and since one-boxers are certainly more prevalent among LWers than in nearly any other group, the results would show that LW memes improve rationality. But the OP is not interested in testing whether LW memes improve LW-style extended rationality (it would be quite weird if they didn't), but whether they improve practical, real-life-relevant rationality. We are not impartial judges when it comes to determining the boundary between these two.

Or more generally, you can never be too careful about possible biases. Not seeing a reason for a self-serving bias is pretty weak evidence for its non-existence.

Probably we should have two or three different controls: one group of average humans; one group of scientists, day traders, and entrepreneurs; and one group of nerds on the internet.

where do you get a group of "people who haven't read LW yet, but are the sort of person who might read LW"?

They're called newbies. People who just recently started reading LW. Measure the improvement in rationality for the control group and for the experimental, newbie group.


Actually, the hard part may be finding a scientist willing to risk eir career and past work to admit that ey isn't a rationalist.

Yes, the implicit identification of "LessWrong" and "rationalist" is a local trope only.

Actually, the hard part may be finding a scientist willing to risk eir career and past work to admit that ey isn't a rationalist.

This seems off to me. First of all, LW rationality is a specific brand of rationality, one which focuses on proactively dealing with cognitive biases. Second, the interest that Eliezer and others have in the Singularity and related issues creates a serious status hit with the general population. Third, one doesn't need someone who actively identifies as not a rationalist, just someone with no prior connection to LW.