Tetronian comments on The Singularity Institute's Arrogance Problem - Less Wrong Discussion

63 Post author: lukeprog 18 January 2012 10:30PM


Comment author: [deleted] 19 January 2012 01:35:19AM 12 points [-]

I feel like I've heard this claimed, too, but... where? I can't find it.

Question #5.

Comment author: lukeprog 19 January 2012 01:53:28AM *  5 points [-]

Yup, there it is! Thanks.

Eliezer tends to be more forceful on this than I am, though. I'm less certain about how much x-risk reduction is purchased by donating to SI as opposed to donating to FHI or GWWC (because GWWC's members are significantly x-risk focused). But when this video was recorded, FHI wasn't working as much on AI risk as it is now, and GWWC barely existed.

I am happy to report that I'm more optimistic about the x-risk reduction purchased per dollar when donating to SI now than I was 6 months ago. Because of stuff like this. We're getting the org into better shape as quickly as possible.

Comment author: curiousepic 19 January 2012 04:42:07PM *  9 points [-]

because GWWC's members are significantly x-risk focused

Where is this established? As far as I can tell, one cannot donate "to" GWWC, and none of their recommended charities are x-risk focused.

Comment author: Thrasymachus 05 February 2012 07:44:33PM 2 points [-]

(Belated reply): I can only offer anecdotal data here, but as a member of GWWC, I can say that many of the members are interested in x-risk. Also, from listening to the directors, most of them are interested in x-risk issues as well.

You are right that GWWC isn't a charity (although it is likely to become one), and their recommendations are not x-risk focused. Their rationale for recommending charities depends on reliable data, and x-risk is one of those areas where a robust "here's how much more likely a happy singularity will be if you give to us" analysis looks very hard.