beoShaffer comments on The Singularity Institute's Arrogance Problem - Less Wrong Discussion
(I hope this doesn't come across as overly critical, because I'd love to see this problem fixed. I'm not dissing rationality, just its current implementation. You have declared Crocker's Rules before, so I'm giving you an emotional impression of what your recent rationality propaganda articles look like to me, and I hope that reads not as an attack, but as something that can be improved upon.)
I think many of your claims of rationality powers (about yourself and other SIAI members) look really self-congratulatory and, well, lame. SIAI plainly doesn't appear all that awesome to me, except at explaining how some old philosophical problems have been solved somewhat recently.
You claim that SIAI people know insane amounts of science and update constantly, but you can't even get 1 out of 200 volunteers to spread some links?! Frankly, the only publicly visible person who strikes me as having some awesome powers is you, and from reading CSA, you seem to have had high productivity (in writing and summarizing) before you ever encountered LW.
Maybe there are all these awesome feats I just never get to see because I'm not at SIAI, but I saw similarly high confidence in methods, paired with similarly weak results, in the New Age circles I hung out in years ago. Your beliefs are much saner, but as long as you can't be more effective than they were, I'll always have a problem taking you seriously.
In short, as you yourself noted, you lack a Tim Ferriss. Even for technical skills, there isn't much I can point at and say, "holy shit, this is amazing and original, I wanna learn how to do that, have all my monies!".
(This has little to do with the soundness of SIAI's claims about Intelligence Explosion etc., but it does decrease my confidence that conclusions reached through your epistemic rationality are to be trusted when the present results seem so lacking.)
Building off of this and my previous comment, I think that more, and more visible, rationality verification could help. First, opening your ideas up to tests generally reduces perceptions of arrogance. Second, successful results would have effects similar to those of the technical accomplishments I mentioned above. (Note that I expect wide-scale rationality verification to increase the amount of pro-LW evidence that can easily be presented to outsiders, not to increase my own confidence; thus this isn't in conflict with conservation of expected evidence.)