jacoblyles comments on Self-skepticism: the first principle of rationality - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I hope SI will agree that the FAQ answer you linked is inadequate (it either overlooks some common objections or lumps them together dismissively as unspecified obstacles to be revealed in the future). For example, "Building an AI seems hard; no human (even given a much longer lifespan) or team of humans will ever be smart enough to build something that leads to an intelligence explosion" and "Computing devices that can realistically model an entire human brain (even taking shortcuts on parts that turn out to be irrelevant to intelligence) will be prohibitively expensive and slow" are both plausible objections.
And yes, even if the answer is improved, it does suggest a possible pattern. It could just be a lack of resources for creating high-quality, comprehensive answers to objections. Or it could be that, in not doubting itself, SI is slightly more like Uri Geller than GiveWell is.
Is GiveWell really doubting itself or its premise - that it's worth spending extra money evaluating where to give money? (Actually, I think it is worth it, but that's not my point.)
As Randaly notes, an FAQ of short answers to common questions is the wrong place to look for in-depth analysis and detailed self-skepticism. Also, the FAQ links directly to papers that do respond in some detail to the objections mentioned.
Another point to make is that SI has enough of a culture of self-skepticism that its current mission (something like "put off the singularity until we can make it go well") is nearly the opposite of its original mission ("make the singularity happen as quickly as possible"). The story of that transition is here.