gwern comments on Thoughts on the Singularity Institute (SI) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It's also a double-bind. If you do nothing, you are valuing donors at less than some random speculation which is unusually dubious even by LessWrong's standards, resting as it does on a novel speculative decision theory (acausal trade) whose most obvious requirement (implementing sufficiently similar algorithms) is beyond blatantly false when applied to humans and FAIs. (If you actually believe that SIAI is a good charity, pissing off donors over something like this is a really bad idea, and if you don't believe SIAI is a good charity, well, that's even more damning, isn't it?) And if you delete it, well, you get exactly this stupid mess which is still being dragged up years later.
Repudiating most of his long-form works like CFAI and LOGI and CEV isn't admission of error?
Personally, when he was writing the Sequences, I found it a little obnoxious how he kept saying "I was totally on the wrong track and mistaken before I was enlightened & came to understand Bayesian statistics, but now I have a chance of being less wrong" - once is enough, we get it already, I'm not that interested in your intellectual evolution.
As someone who hasn't been around that long, it would be interesting to have links. I'm having trouble coming up with useful search terms.
Creating Friendly AI, Levels of Organization in General Intelligence, and Coherent Extrapolated Volition.
Sorry, I wasn't clear. I meant links to the repudiations. I've read some of the material in CFAI and CEV, but not the retraction, and not yet any of LOGI.
Oh. I don't remember, then, besides the notes about them being obsolete.