lukeprog comments on Reply to Holden on The Singularity Institute - Less Wrong

Post author: lukeprog 10 July 2012 11:20PM


Comment author: lukeprog 11 July 2012 08:04:46PM

Thank you for understanding. :)

My statement "SI has successfully concentrated lots of attention, donor support, and human capital [and also] has learned many lessons [and] has lots of experience with [these unusual, complicated] issues" was in support of "better to help SI grow and improve rather than start a new, similar AI risk reduction organization", not in support of "SI is capable of mitigating x-risk given money."

However, if I didn't also think SI was capable of reducing x-risk given money, then I would leave SI and go do something else; and I will indeed do so in the future if I come to believe SI is no longer capable of that. "How to Purchase AI Risk Reduction" is a list of things that SI (1) is currently doing to reduce AI risk, or (2) could do almost immediately given sufficient funding.

Comment author: TheOtherDave 11 July 2012 08:22:19PM

My statement [..] was in support of "better to help SI grow and improve rather than start a new, similar AI risk reduction organization", not in support of "SI is capable of mitigating x-risk given money."

Ah, OK. I misunderstood that; thanks for the clarification.
For what it's worth, I think the case for "support SI >> start a new organization on a similar model" is pretty compelling.

And, yes, the "How to Purchase AI Risk Reduction" series is an excellent step toward making SI's current and planned activities, and how they relate to your mission, more concrete and transparent. Yay you!