G0W51 comments on How can I reduce existential risk from AI? - Less Wrong

46 Post author: lukeprog 13 November 2012 09:56PM




Comment author: G0W51 25 May 2015 02:22:39AM 0 points

Yeah, I suppose you're right. Still, once something that could pose a large existential risk comes into existence, or looks like it soon will, wouldn't politicians then consider existential risk reduction? For example, once a group is on the verge of developing AGI, wouldn't the government think about how to respond? Or would they still ignore it? And would the responses of different parties vary?

You could definitely be correct, though; I'm not knowledgeable about politics.

Comment author: ChristianKl 25 May 2015 12:51:21PM 0 points

Politics is a people sport. Depending on who is shaping the party's policy at the time the topic comes up, the results can come out very differently.