Benja comments on How can I reduce existential risk from AI? - Less Wrong

46 Post author: lukeprog 13 November 2012 09:56PM


Comment author: Benja 11 November 2012 11:48:37PM

I don't currently know of a group pushing (...) for banning AGI development. You could accelerate AGI by investing in AGI-related companies (...)

This is not meant as a criticism of the post, but it seems like we should be able to do better than having some of us give money to groups pushing to ban AGI development while others invest in AGI-related companies to accelerate it, especially if both are altruists with reasonably similar priors, each aiming to reduce existential risk.

(Both giving to strategic research instead seems like a reasonable alternative.)

Comment author: lukeprog 12 November 2012 01:18:51AM

Right... it's a bit like in 2004, when my friend insisted that we both spend many hours going to vote in the presidential election, even though we both knew we were voting for opposite candidates. It would have been wiser for us both to stay home and donate to something we both supported (e.g. campaign finance reform), in whatever amount reflected the value of the time we actually spent voting.

I should note that investing in an AGI company while also investing in AGI safety research need not be as contradictory as it sounds, if you can use your investment in the AGI company to bias its development work toward safety, as Legg once suggested. In fact, I know at least three individuals (whom I shall not name) who appear to be doing exactly this.