metaphysicist comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky 11 May 2012 04:31AM 256 points

Comment author: metaphysicist 18 May 2012 05:10:09AM 3 points

Is deleting one post really such a big deal to get worked up over? Or does it keep coming up because it's the best criticism anyone can muster besides "he's a high school dropout who hasn't yet created an AI and so must be completely wrong"?

Like JoshuaZ, I hadn't known a donor was involved. What's the big deal? People donate to SIAI because they trust Eliezer Yudkowsky's integrity and intellect. So it's natural to ask whether he's someone you can count on to deliver the truth. Caving to donors is inauspicious.

In a related vein, I also found it disturbing that Eliezer Yudkowsky repeated his claim that that Loosemoore guy "lied." Having had years to cool off, he still hasn't summoned the humility to admit he stretched the evidence for Loosemoore's deceitfulness: Loosemoore is obviously a cognitive scientist.

These two examples paint a picture of Eliezer Yudkowsky as a person subject to strong personal loyalties and animosities that exceed his dedication to the truth. In the first incident, his loyalty to a donor induced him to suppress information; in the Loosemoore incident, his longstanding animosity to Loosemoore made him unable to adjust his earlier opinion.

I hope these impressions aren't accurate. But one thing seems certain: Eliezer Yudkowsky is not a person given to serious self-criticism. Has he admitted any significant intellectual error since he became a rationalist? [Serious question.]

Comment author: gwern 18 May 2012 07:42:55AM 6 points

Caving to donors is inauspicious.

It's also a double bind. If you do nothing, you are valuing donors at less than some random speculation that is unusually dubious even by LessWrong's standards, resting as it does on a novel speculative decision theory (acausal trade) whose most obvious requirement (implementing sufficiently similar algorithms) is beyond blatantly false when applied to humans and FAIs. (If you actually believe that SIAI is a good charity, pissing off donors over something like this is a really bad idea, and if you don't believe SIAI is a good charity, well, that's even more damning, isn't it?) And if you delete it, well, you get exactly this stupid mess, which is still being dragged up years later.

I hope these impressions aren't accurate. But one thing seems certain: Eliezer Yudkowsky is not a person given to serious self-criticism. Has he admitted any significant intellectual error since he became a rationalist? [Serious question.]

Repudiating most of his long-form works, like CFAI and LOGI and CEV, isn't an admission of error?

Personally, when he was writing the Sequences, I found it a little obnoxious how he kept saying "I was totally on the wrong track and mistaken before I was enlightened & came to understand Bayesian statistics, but now I have a chance of being less wrong"; once is enough, we get it already, and I'm not that interested in your intellectual evolution.

Comment author: evand 19 May 2012 08:51:32PM 0 points

Repudiating most of his long-form works, like CFAI and LOGI and CEV, isn't an admission of error?

As someone who hasn't been around that long, I'd appreciate links; I'm having trouble coming up with useful search terms.

Comment author: gwern 19 May 2012 09:15:00PM 0 points

Creating Friendly AI, Levels of Organization in General Intelligence, and Coherent Extrapolated Volition.

Comment author: evand 19 May 2012 09:42:40PM 0 points

Sorry, I wasn't clear. I meant links to the repudiations. I've read some of the material in CFAI and CEV, but not the retraction, and not yet any of LOGI.

Comment author: gwern 19 May 2012 09:45:43PM 0 points

Oh. I don't remember, then, besides the notes about them being obsolete.