Lumifer comments on How can I reduce existential risk from AI? - Less Wrong
Why not? I imagine that different political parties hold different views on what the government should do about existential risk, so voting for the ones potentially more willing to reduce it would be beneficial. Currently, most parties seem not to concern themselves with existential risk at all, but perhaps that will change once strong AI no longer seems so far off.
Actually, no, I don't think that's true. I suspect that at the moment the views of all political parties on existential risk fall somewhere between "WTF is that?" and "Can I use it to influence my voters?"
That may (or may not) eventually change, but at the moment the answer is a clear "No".
Some parties may be more likely than others to accelerate scientific progress, and those that do could reduce existential risk by shortening the time spent in high-risk states: for example, the period when dangerous nanotechnological weapons exist but other astronomical objects have not yet been colonized. This probably isn't enough to justify voting, but I thought I'd let you know.
Noted. I'll put my x-risk reduction efforts into something other than voting.