Edit, May 21, 2012: Read this comment by Yvain.
Forming your own opinion is no more necessary than building your own furniture.
There's been a lot of talk here lately about how we need better contrarians. I don't agree. I think the Sequences got everything right and I agree with them completely. (This of course makes me a deranged, non-thinking, Eliezer-worshiping fanatic for whom the singularity is a substitute religion. Now that I have admitted this, you don't have to point it out a dozen times in the comments.) Even the controversial things, like:
- I think the many-worlds interpretation of quantum mechanics is the closest to correct and you're dreaming if you think the true answer will have no splitting (or I simply do not know enough physics to know why Eliezer is wrong, which I think is pretty unlikely but not totally discountable).
- I think cryonics is a swell idea and an obvious thing to sign up for if you value staying alive and have enough money and can tolerate the social costs.
- I think mainstream science is too slow and we mere mortals can do better with Bayes.
- I am a utilitarian consequentialist and think that if you allow someone to die through inaction, you're just as culpable as a murderer.
- I completely accept the conclusion that it is worse to put dust specks in 3^^^3 (an unfathomably large number in Knuth's up-arrow notation) people's eyes than to torture one person for fifty years. I came up with it independently, so maybe it doesn't count; whatever.
- I tentatively accept Eliezer's metaethics, considering how unlikely it is that there will be a better one (maybe morality is in the gluons?).
- "People are crazy, the world is mad," is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature.
- Edit, May 27, 2012: You know what? I forgot one: Gödel, Escher, Bach is the best.
There are two tiny notes of discord on which I disagree with Eliezer Yudkowsky. One is that I'm not so sure as he is that a rationalist is only made when a person breaks with the world and starts seeing everybody else as crazy, and the other is that I don't share his objection to creating conscious entities in the form of an FAI or within an FAI. I could explain, but no one ever discusses these things, and they don't affect any important conclusions. I also think the sequences are badly organized and you should just read them chronologically instead of trying to lump them into categories and sub-categories, but I digress.
Furthermore, I agree with every essay I've ever read by Yvain, I use "believe whatever gwern believes" as a heuristic/algorithm for generating true beliefs, and I don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muehlhauser, komponisto, or even Wei Dai; policy debates should not appear one-sided, so it's good that they don't.
I write this because I'm feeling more and more lonely in this regard. If you also stand by the Sequences, feel free to say so. If you don't, feel free to say that too, but please don't substantiate it. I don't want this thread to be a low-level rehash of tired debates, though it will surely have some of that in spite of my sincerest wishes.
Holden Karnofsky said:
I believe I have read the vast majority of the Sequences, including the AI-foom debate, and that this content - while interesting and enjoyable - does not have much relevance for the arguments I've made.
I can't understand this. How could the sequences not be relevant? Half of them were created when Eliezer was thinking about AI problems.
So I say this, hoping others will as well:
I stand by the sequences.
And with that, I tap out. I have found the answer, so I am leaving the conversation.
Even though I am not important here, I don't want you to interpret my silence from now on as indicating compliance.
After some degree of thought and nearly 200 comment replies on this article, I regret writing it. I was insufficiently careful, didn't think enough about how it might alter the social dynamics here, and didn't spend enough time clarifying, especially regarding the third bullet point. I also dearly hope that I have not entrenched anyone's positions, turning them into allied soldiers to be defended, especially not my own. I'm sorry.
My default answer for anything regarding Hanson is 'signaling'. How to fix science is a good start.
Science isn't just about getting things right or wrong; it's an intricate signaling game. This is why most of what comes out of journals is wrong. Scientists are rewarded for publishing results, right or wrong, so they comb data for correlations that may or may not be relevant. (Statistically speaking, if you comb data 20 different ways at the conventional p < 0.05 threshold, you should expect about one spurious "statistically significant" correlation from sheer random chance alone.) Journals are rewarded for publishing sensational results, not confirmations or even refutations (especially not refutations of things they published in the first place). The reward system is not set up to produce right answers, but answers that are sensational and cannot be easily refuted. Being right does make a result hard to refute, which is why science is useful at all, but it's not the only way to make a result hard to refute.
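To put a number on the "20 different ways" point, here's a quick simulation (a minimal sketch of my own, not from the comment above, assuming 20 independent tests on pure noise at the conventional p < 0.05 threshold):

```python
# A minimal sketch of the multiple-comparisons point: run 20 correlation
# tests on pure noise and check how often at least one comes out
# "significant" at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_runs, n_tests, n_samples = 2_000, 20, 100
runs_with_false_positive = 0

for _ in range(n_runs):
    # 20 pairs of unrelated variables: every true correlation is zero.
    p_values = [
        stats.pearsonr(rng.normal(size=n_samples), rng.normal(size=n_samples))[1]
        for _ in range(n_tests)
    ]
    if min(p_values) < 0.05:
        runs_with_false_positive += 1

print(runs_with_false_positive / n_runs)  # ~0.64, matching 1 - 0.95**20
```

So a determined data-comber expects about one false positive per paper's worth of analyses, and gets at least one roughly 64% of the time.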
An ideal Bayesian unconstrained by signaling could completely outdo our current scientific system (as it could in all spheres of life). Even shifting our current system to be more Bayesian by abandoning the journal system and adopting pre-registration of scientific studies would be a huge upgrade. But science isn't about knowledge; knowledge is just a very useful byproduct we get from it.
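To illustrate what Bayesian discounting of journal output looks like, here's a toy calculation (all three numbers are assumptions I picked for the sketch, not estimates from any study):

```python
# A toy Bayesian reading of a published "significant" result. All numbers
# are made-up assumptions for illustration.
prior_true = 0.10       # assumed fraction of tested hypotheses that are true
p_sig_if_true = 0.80    # assumed statistical power
p_sig_if_false = 0.20   # assumed false-positive rate, inflated by data-combing

# Bayes' rule: P(hypothesis true | significant result)
posterior = (prior_true * p_sig_if_true) / (
    prior_true * p_sig_if_true + (1 - prior_true) * p_sig_if_false
)
print(posterior)  # ~0.31: under these assumptions, most "findings" are false
```

Under these made-up but not implausible assumptions, a reader who updates correctly still assigns less than a one-in-three chance that a published positive result is true.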
But Grognor (not, as this comment read earlier, army1987) said that "we mere mortals can do better with Bayes", not that "an ideal bayesian unconstrained with signaling could completely outdo our current scientific system". Arguing, in response to cousin_it, that scientists are concerned with signalling makes the claim even stronger, and the question more compelling - "why aren't we doing better already?"