gwern comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky, 11 May 2012 04:31AM (256 points)




Comment author: gwern 18 May 2012 07:42:55AM 6 points

> Caving to donors is inauspicious.

It's also a double-bind. If you do nothing, you are valuing donors at less than some random speculation which is unusually dubious even by LessWrong's standards, resting as it does on a novel speculative decision theory (acausal trade) whose most obvious requirement (implementing sufficiently similar algorithms) is beyond blatantly false when applied to humans and FAIs. (If you actually believe that SIAI is a good charity, pissing off donors over something like this is a really bad idea, and if you don't believe SIAI is a good charity, well, that's even more damning, isn't it?) And if you delete it, well, you get exactly this stupid mess which is still being dragged up years later.

> I hope these impressions aren't accurate. But one thing seems for sure: Eliezer Yudkowsky is not a person for serious self-criticism. Has he admitted any significant intellectual error since he became a rationalist? [Serious question.]

Repudiating most of his long-form works like CFAI and LOGI and CEV isn't admission of error?

Personally, when he was writing the Sequences, I found it a little obnoxious how he kept saying "I was totally on the wrong track and mistaken before I was enlightened & came to understand Bayesian statistics, but now I have a chance of being less wrong." Once is enough; we get it already. I'm not that interested in your intellectual evolution.

Comment author: evand 19 May 2012 08:51:32PM 0 points

> Repudiating most of his long-form works like CFAI and LOGI and CEV isn't admission of error?

As someone who hasn't been around that long, I'd be interested in links. I'm having trouble coming up with useful search terms.

Comment author: gwern 19 May 2012 09:15:00PM 0 points

Creating Friendly AI, Levels of Organization in General Intelligence, and Coherent Extrapolated Volition.

Comment author: evand 19 May 2012 09:42:40PM 0 points

Sorry, I wasn't clear. I meant links to the repudiations. I've read some of the material in CFAI and CEV, though not the retractions, and none of LOGI yet.

Comment author: gwern 19 May 2012 09:45:43PM 0 points

Oh. In that case I don't remember any, beyond the notes marking them as obsolete.