Science has a large-scale academic infrastructure to draw on: people can propose research they want to get done, and those who argue persuasively enough that their research is solid and productive receive money and resources to conduct it.
You could build a system that produces more knowledge than modern science simply by diverting a substantial portion of the national budget to fund it, so that only the people proposing experiments too poorly designed to be useful go unfunded.
Besides which, improved rationality can't simply replace entire bodies of domain-specific knowledge.
There are plenty of ways, though, in which mainstream science is inefficient at producing knowledge: improper use of statistics, publication bias, and biased interpretation of results. There are ways to do better, and most scientists (at least those I've spoken to about it) acknowledge this, but science is very much a social process, one that individual scientists have neither the power nor the social incentives to change.
"There are plenty of ways, though, in which mainstream science is inefficient at producing knowledge: improper use of statistics, publication bias, and biased interpretation of results. There are ways to do better, and most scientists (at least those I've spoken to about it) acknowledge this, but science is very much a social process, one that individual scientists have neither the power nor the social incentives to change."
I am an academic. Can you suggest three concrete ways for me to improve my knowledge production, which will not leave me worse off?
Edit, May 21, 2012: Read this comment by Yvain.
- Peter de Blanc
There's been a lot of talk here lately about how we need better contrarians. I don't agree. I think the Sequences got everything right and I agree with them completely. (This of course makes me a deranged, non-thinking, Eliezer-worshiping fanatic for whom the singularity is a substitute religion. Now that I have admitted this, you don't have to point it out a dozen times in the comments.) Even the controversial things, like:
There are two tiny notes of discord on which I disagree with Eliezer Yudkowsky. One is that I'm not so sure as he is that a rationalist is only made when a person breaks with the world and starts seeing everybody else as crazy, and the other is that I don't share his objection to creating conscious entities in the form of an FAI or within an FAI. I could explain, but no one ever discusses these things, and they don't affect any important conclusions. I also think the sequences are badly organized and you should just read them chronologically instead of trying to lump them into categories and sub-categories, but I digress.
Furthermore, I agree with every essay I've ever read by Yvain, I use "believe whatever gwern believes" as a heuristic for generating true beliefs, and I don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muehlhauser, komponisto, or even Wei Dai; policy debates should not appear one-sided, so it's good that they don't.
I write this because I'm feeling more and more lonely in this regard. If you also stand by the sequences, feel free to say so. If you don't, feel free to say that too, but please don't substantiate it. I don't want this thread to be a low-level rehash of tired debates, though it will surely have some of that in spite of my sincerest wishes.
Holden Karnofsky said:
I can't understand this. How could the sequences not be relevant? Half of them were created when Eliezer was thinking about AI problems.
So I say this, hoping others will as well:
I stand by the sequences.
And with that, I tap out. I have found the answer, so I am leaving the conversation.
Even though I am not important here, I don't want you to interpret my silence from now on as indicating agreement.
After some degree of thought and nearly 200 comment replies on this article, I regret writing it. I was insufficiently careful, didn't think enough about how it might alter the social dynamics here, and didn't spend enough time clarifying, especially regarding the third bullet point. I also dearly hope that I have not entrenched anyone's positions, turning them into allied soldiers to be defended, especially not my own. I'm sorry.