Lumifer comments on How can I reduce existential risk from AI? - Less Wrong

46 Post author: lukeprog 13 November 2012 09:56PM




Comment author: Lumifer 27 May 2015 04:47:35PM 4 points

I imagine that different political parties have different views on what the government should do about existential risk

Actually, no, I don't think that's true. I suspect that at the moment the views of all political parties on existential risk fall somewhere between "WTF is that?" and "Can I use it to influence my voters?"

That may (or may not) eventually change, but at the moment the answer is a clear "No".

Comment author: G0W51 09 October 2015 05:59:31AM 0 points

Some parties may be more likely to accelerate scientific progress than others, and those that do could decrease existential risk by decreasing the time spent in high-risk states — for example, the period when dangerous nanotechnological weapons exist but other astronomical objects have not yet been colonized. This probably is not enough to justify voting, but I thought I would let you know.

Comment author: G0W51 30 May 2015 10:48:52PM 0 points

Noted. I'll invest my efforts on x-risk reduction into something other than voting.