NancyLebovitz comments on How about testing our ideas? - Less Wrong

31 [deleted] 14 September 2012 10:28AM




Comment author: Vaniver 14 September 2012 02:40:34PM * 23 points

I'm happy to see a push for increased empiricism and scientific effort on LW. But... I wish there were more focus on the word "how," and less focus on the word "we."

Three articles come to mind: To Lead You Must Stand Up, First, Try to Make it to the Mean, and Money: The Unit of Caring. (Only the first part of the second article will be directly relevant, but the latter parts are indirectly relevant.)

That is:

First, there's insufficient focus on what concrete steps you are taking to move the culture in that direction. (Writing blog posts exhorting action does not count for much. Do you think The Neglected Virtue of Scholarship would have shifted community actions as much if lukeprog hadn't followed it up by writing posts with massive reference lists?) The reference to yourmorals.org is fine, but what made that site important was a particular feature, not its goal or its structure. If you've thought of a similar feature that someone (ideally you) could code up, great! I will send as much karma as I can toward the person who makes that happen. But this is even more general than a call for better / easier rationality tests and exercises, and thus even less likely to cause concrete action.

Second, it really does help to be a specialist and know the prior art in a subject. The central lesson of experimental psychology is probably "designing experiments that test what you want them to test is really, really hard." If there's a specialist out there researching this stuff, then I would be happy to take part in any experiments they post on LW, and I suspect that many others here would be as well. If CFAR moves from advocacy and education to research (on cognitive science, not education), I again expect that I'd be willing to participate and so would others.

Just as it makes more sense to push the boundaries of life extension only after making it past the mean of life expectancy, trying to push the boundaries of science when you don't know where those boundaries are is fundamentally mistaken. Knowing what experiments have already been done, and what they actually show, should be a major input into what you test. The Neglected Virtue of Scholarship calls out Eliezer on exactly that: "er, your n=1 theory of procrastination seems to disagree with n>1 research." I remember being fascinated by all the variants of the Wason selection task described in Thinking and Deciding. I had previously been familiar only with the basic one, and the implications of the original together with its variations are far stronger than the implications of the original alone.

(Note that one of the strengths of LW might be that you gather a bunch of neurologically similar people, who can share with each other knowledge and experience not useful to the general population. I have the same experience of procrastination as Eliezer, and learning that someone else out there has that issue is valuable knowledge. Given general human neurodiversity, looking for things that help everyone is probably going to be less useful than narrowing your view.)

Third, why try to train citizen scientists when we could make better use of specialist scientists? Gary Drescher posted here, but hasn't in over a year. What would make LW valuable enough to him for him to post here? XiXiDu managed to attract the attention of some experts in AI. What would make LW valuable enough to them for them to post here?

I agree with training citizen scientists in the sense of training empiricists (who will then naturally apply science to their lives). I think that LW having a culture of supporting science, both with dollars and with volunteer effort, would be better than not. But I don't see you addressing the engineering problems of moving from one culture to the other, instead of just signalling that you would prefer the other culture.

Comment author: [deleted] 14 September 2012 02:55:54PM * 8 points

To Lead You Must Stand Up

A little over a week ago, two other LWers and I started doing research on the possibilities of an online rationality class. The goal of the project is to have an official proposal, as well as a beta version, ready in a few months. Besides hopefully spreading friendly memes and generating publicity, we aim to figure out whether this can be used as a tool to make progress on the difficult problems of teaching and measuring rationality. The best way to figure that out is to try using it that way as we iterate.

I name-dropped the proposal in the OP, but since we started so recently it felt odd to write an article about that first.

Third, why try to train citizen scientists when we could make better use of specialist scientists? Gary Drescher posted here, but hasn't in over a year. What would make LW valuable enough to him for him to post here? XiXiDu managed to attract the attention of some experts in AI. What would make LW valuable enough to them for them to post here?

I kind of meant this under "attracting the right crowd," but I should have made it explicit.

But I don't see you addressing the engineering problems with moving from one culture to the other, instead of just signalling that you would prefer the other culture.

The reason for this is that I'm unsure how to do it, and I didn't want to lock people into my particular plan for changing the culture. I also hoped that asking people to "come up with stuff that needs testing!" would show me whether or not I was wrong about the community's insufficient emphasis on empiricism.