I don't care at all about the long-term survival of the human race. Is there any reason I should?
Define "long-term", then, as "more than a decade from today". I.e.; "long-term" includes your own available lifespan.
But going to prison would also affect me and the people I care about, so it would be a big deal. At least like 25% as bad as the end of humanity.
Would you be so kind as to justify this assertion for me? I find my imagination insufficient to the task of assigning equivalent utility metrics to "me in prison" == 0.25x "end of the species".
Certainly that is true in this case. I'm not going to put a lot of work into developing an elaborate plan to do something that I don't think should be done.
... I really hate it when people reject counterfactuals on the basis of their being counterfactuals alone. It's a dishonest conversational tactic.
Would you be so kind as to justify this assertion for me? I find my imagination insufficient to the task of assigning equivalent utility metrics to "me in prison" == 0.25x "end of the species".
Well, I give equivalent utility to "death of all the people I care about" and "end of the species." Thinking about it harder, I feel like "death of all the people I care about" is more like 10-100x worse than my own death. Me going to prison for murder is about as bad as my own death, so it's more like 0.01-0.1x the end of the species.
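To spell that arithmetic out as a sanity check, here's a rough sketch treating my own death as the unit of disutility (the specific numbers are just the rough ranges above, nothing more precise than that):

```python
# Rough sketch of the utility comparison above -- illustrative numbers only.
# Baseline: disutility of my own death = 1.
own_death = 1.0

# Prison for murder is taken to be roughly as bad as my own death.
prison = own_death

# Death of everyone I care about: 10-100x worse than my own death,
# and treated as equivalent in utility to the end of the species.
end_of_species_low, end_of_species_high = 10 * own_death, 100 * own_death

# Prison expressed as a fraction of "end of the species":
print(prison / end_of_species_high, prison / end_of_species_low)  # 0.01 to 0.1
```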
Here's a poser that occurred to us over the summer, and one that we couldn't really come up with any satisfactory solution to. The people who work at the Singularity Institute have a high estimate of the probability that an Unfriendly AI will destroy the world. People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year).
It seems like there are good reasons to take these numbers seriously, because Eliezer is probably the world expert on AI risk, and Hellman is probably the world expert on nuclear risk. However, there's a problem: Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war. Similarly, Hellman chose to study nuclear risks and not AI risk because he had a higher-than-average estimate of the threat of nuclear war.
It seems like it might be a good idea to know what the probability of each of these risks is. Is there a sensible way for these people to correct for the fact that the people studying these risks are those who had high estimates of them in the first place?
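To make the worry concrete, here is a toy simulation of the selection effect (every number in it is invented purely for illustration, and it isn't meant as an answer, just as a way of seeing how large the bias could be under made-up assumptions):

```python
import random

# Toy model of the selection effect described above -- all numbers invented.
random.seed(0)

p_true = 0.001        # "true" annual probability of the catastrophe
noise_sd = 0.002      # spread of individuals' noisy estimates of that risk
threshold = 0.003     # only people this alarmed self-select into studying it

# Everyone forms a noisy estimate of the true risk (clipped at 0).
estimates = [max(0.0, random.gauss(p_true, noise_sd)) for _ in range(100_000)]

# The people who go on to found/join a risk organisation are the alarmed ones.
expert_estimates = [e for e in estimates if e > threshold]

avg_all = sum(estimates) / len(estimates)
avg_experts = sum(expert_estimates) / len(expert_estimates)
print(avg_all)      # near p_true (pushed up slightly by the clipping at 0)
print(avg_experts)  # several times p_true: the figure you hear from the field
```

If something like this model were right, you could in principle invert it to de-bias the experts' numbers, but that requires knowing the noise and the self-selection threshold, which is more or less the original question restated.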