Me going to prison for murder is about as bad as my own death, so it's more like 0.01-0.1x the end of humanity. Can you imagine that?
I'm curious, now, as to what nation or state you live in.
Thinking about it harder, I feel like "death of all the people I care about" is more like 10-100x worse than my own death.
Well -- in this scenario you are "going to die" regardless of the outcome. The only question is whether the people you care about will. Would you kill others (who were themselves also going to die if you did nothing) and allow yourself to die, if it would save people you cared about?
(Also, while it can lead to absurd consequences -- Eliezer's response to the Sims games, for example -- might I suggest a re-examination of your internal moral consistency? As it stands, it seems like you're allowing many of your moral intuitions to fall in line with their evolutionary backgrounds. Nothing inherently wrong with that -- our evolutionary history has granted us a decent 'innate' morality. But we who 'reason' can do better.)
I was considering writing a long thing about your overconfidence in thinking you could carry out such a plan without any (I am presuming) experience doing that kind of thing.
I didn't list any plan. That was intentional. I'm not going to give pointers on how to do exactly what this topic entails to others who might be seeking them out for reasons I personally haven't vetted. That, unlike what some others have criticized about this conversation, actually would be irresponsible.
That being said, the fact that you're addressing this to the element you are really demonstrates a further non sequitur. It doesn't matter whether or not you believe the scenario plausible: what would your judgment be of the rightfulness of carrying out the action yourself in the absence of democratic systems?
that everyone else on LW seems to hate, so why bother?
Why allow your opinions to be swayed by the emotional responses of others?
In my case, I'm currently sitting at -27 on my 30-day karma score. That's not even the lowest I've been in the last thirty days. I'm not really worried about my popularity here. :)
Here's a poser that occurred to us over the summer, and one that we couldn't really come up with any satisfactory solution to. The people who work at the Singularity Institute have a high estimate of the probability that an Unfriendly AI will destroy the world. People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year).
It seems like there are good reasons to take these numbers seriously, because Eliezer is probably the world expert on AI risk, and Hellman is probably the world expert on nuclear risk. However, there's a problem: Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war. Similarly, Hellman chose to study nuclear risk and not AI risk because he had a higher-than-average estimate of the threat of nuclear war.
It seems like it might be a good idea to know what the probability of each of these risks is. Is there a sensible way for these people to correct for the fact that the people studying these risks are those who had high estimates of them in the first place?
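To make the selection effect concrete, here's a minimal toy simulation (every number in it is invented, and the model is only illustrative, not a claim about either field). Potential experts draw noisy private estimates of a risk, and only those whose estimate exceeds some threshold go on to study it full-time. Averaging the self-selected experts' estimates then overshoots the underlying value, which is exactly the bias the question asks how to correct for.

```python
import random

# Toy model of expert self-selection (all parameters are made up):
# each potential expert draws a noisy private estimate of the true risk,
# and only those whose estimate exceeds a threshold choose to study it.

random.seed(0)

TRUE_RISK = 0.05             # assumed "true" probability of the catastrophe
NOISE = 0.04                 # spread of private estimates around the truth
SELECTION_THRESHOLD = 0.08   # only people whose estimate exceeds this study the risk

# Draw private estimates for a large pool of potential experts, clamped to [0, 1].
signals = [max(0.0, min(1.0, random.gauss(TRUE_RISK, NOISE))) for _ in range(100_000)]

# The self-selected experts are those whose estimate cleared the threshold.
experts = [s for s in signals if s > SELECTION_THRESHOLD]

print("mean estimate, all potential experts:  ", sum(signals) / len(signals))
print("mean estimate, self-selected experts:  ", sum(experts) / len(experts))
```

On this toy model the naive average over people who chose the field comes out roughly double the average over everyone who drew a signal, so any sensible correction would need some estimate of how strong that selection filter is, which is precisely the quantity we don't seem to have for AI or nuclear risk.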