Edit, May 21, 2012: Read this comment by Yvain.
Forming your own opinion is no more necessary than building your own furniture.
There's been a lot of talk here lately about how we need better contrarians. I don't agree. I think the Sequences got everything right and I agree with them completely. (This of course makes me a deranged, non-thinking, Eliezer-worshiping fanatic for whom the singularity is a substitute religion. Now that I have admitted this, you don't have to point it out a dozen times in the comments.) Even the controversial things, like:
- I think the many-worlds interpretation of quantum mechanics is the closest to correct and you're dreaming if you think the true answer will have no splitting (or I simply do not know enough physics to know why Eliezer is wrong, which I think is pretty unlikely but not totally discountable).
- I think cryonics is a swell idea and an obvious thing to sign up for if you value staying alive and have enough money and can tolerate the social costs.
- I think mainstream science is too slow and we mere mortals can do better with Bayes.
- I am a utilitarian consequentialist and think that if you allow someone to die through inaction, you're just as culpable as a murderer.
- I completely accept the conclusion that it is worse to put dust specks in 3^^^3 people's eyes than to torture one person for fifty years (for the notation, see the short aside after this list). I came up with it independently, so maybe it doesn't count; whatever.
- I tentatively accept Eliezer's metaethics, considering how unlikely it is that there will be a better one (maybe morality is in the gluons?).
- "People are crazy, the world is mad," is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature.
- Edit, May 27, 2012: You know what? I forgot one: Gödel, Escher, Bach is the best.
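(A quick aside on notation, in case 3^^^3 above looks like line noise: it's Knuth's up-arrow notation for iterated exponentiation. A rough sketch of how fast it grows:)

```latex
% Knuth up-arrow notation: each additional arrow iterates the previous operation.
3 \uparrow 3 = 3^3 = 27
3 \uparrow\uparrow 3 = 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987
3 \uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow (3 \uparrow\uparrow 3)
  = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{\text{a tower of } 7{,}625{,}597{,}484{,}987 \text{ threes}}
```

The whole point of the thought experiment is that this number is far too large for intuition to track, which is why the aggregation question bites at all.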
There are two tiny notes of discord on which I disagree with Eliezer Yudkowsky. The first is that I'm not as sure as he is that a rationalist is only made when a person breaks with the world and starts seeing everybody else as crazy; the second is that I don't share his objection to creating conscious entities in the form of an FAI or within an FAI. I could explain, but no one ever discusses these things, and they don't affect any important conclusions. I also think the Sequences are badly organized, and that you should just read them chronologically instead of trying to lump them into categories and sub-categories, but I digress.
Furthermore, I agree with every essay I've ever read by Yvain, I use "believe whatever gwern believes" as a heuristic/algorithm for generating true beliefs, and I don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muehlhauser, komponisto, or even Wei Dai; policy debates should not appear one-sided, so it's good that they don't.
I write this because I'm feeling more and more lonely in this regard. If you also stand by the Sequences, feel free to say that. If you don't, feel free to say that too, but please don't substantiate it. I don't want this thread to be a low-level rehash of tired debates, though it will surely have some of that in spite of my sincerest wishes.
Holden Karnofsky said:
I believe I have read the vast majority of the Sequences, including the AI-foom debate, and that this content - while interesting and enjoyable - does not have much relevance for the arguments I've made.
I can't understand this. How could the sequences not be relevant? Half of them were created when Eliezer was thinking about AI problems.
So I say this, hoping others will as well:
I stand by the sequences.
And with that, I tap out. I have found the answer, so I am leaving the conversation.
Even though I am not important here, I don't want you to interpret my silence from now on as indicating assent.
After some degree of thought and nearly 200 comment replies on this article, I regret writing it. I was insufficiently careful, didn't think enough about how it might alter the social dynamics here, and didn't spend enough time clarifying, especially regarding the third bullet point. I also dearly hope that I have not entrenched anyone's positions, turning them into allied soldiers to be defended, especially not my own. I'm sorry.
Wow. One of these is not like the others! (Hint: all but one have karma > 10,000.)
In all seriousness, being placed in that group has to count as one of the greatest honors of my internet life.
So I suppose I can't be totally objective when I sing the praises of this post. Nonetheless, it is a fact that I was planning to voice my agreement well before I reached the passage quoted above. So, let me confirm that I, too, "stand by" the Sequences (excepting various quibbles which are of scant relevance in this context).
I'll go further and note that I am significantly less impressed than most of LW by Holden Karnofsky's critique of SI, and suspect that the extraordinary affective glow being showered upon it is mostly the result of Holden's affiliation with GiveWell. Of course, that affective glow is so luminous (the post is at, what, like 200 now?) that to say I'm less impressed than everyone else isn't really to say much at all, and indeed I agree that Holden's critique was constructive and thoughtful (certainly by the standards of "the outside world", i.e. people who aren't LW regulars or otherwise thoroughly "infected" by the memes here). I just don't think it was particularly original -- similar points were made in the past by people like multifoliaterose and XiXiDu (not to mention Wei Dai, etc.) -- nor do I think it is particularly correct.
(To give one example, it's pretty clear to me that "Tool AI" is Oracle AI for relevant purposes, and I don't understand why this isn't clear to Holden also. One of the key AI-relevant lessons from the Sequences is that an AI should be thought of as an efficient cross-domain optimization process, and that the danger is inherent in the notion of "efficient optimization" itself, rather than residing in any anthropomorphic "agency" properties that the AI may or may not have.)
By the way, for all that I may increasingly sound like a Yudkowsky/SI "cultist" (which may perhaps have contributed to my inclusion in the distinguished list referenced above!), I still have a very hard time thinking of myself that way. In fact, I still feel like something of an outsider, because I didn't grow up on science fiction, was never on the SL4 mailing list, and indeed had never even heard of the "technological singularity" before I started reading Overcoming Bias sometime around 2006-07.
(Of course, given that Luke went from being a fundamentalist Christian to running the Singularity Institute in less time than I've been reading Yudkowsky, perhaps it's time for me to finally admit that I too have joined the club.)
There are many others as well, but a full list seemed like a terrible idea.