A torturing AI is most likely to arise from deliberate human action, because in many kinds of negotiation you want to threaten your opponent with the worst possible punishment: for example, "convert your population to my religion, or I will subject your population to eternal torture by my AIs."
I am trying to decide how to allocate my charitable donations between GiveWell's top charities and MIRI, and I need a probability estimate to make an informed decision. Could you help me?
Background on my moral system: I place greater value on reducing intense suffering in conscious entities than on merely preventing death. An unexpected, instant, painless death is unfortunate, but I would prefer it to a painful, chronic condition.
Given my beliefs, it follows logically that I would pay a relatively large amount to save a conscious entity from prolonged torture.
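To make the weighting concrete, here is a minimal arithmetic sketch of what "relatively large" might mean; the 10x torture-vs-death weight and the rough $5,000 cost-per-life figure are illustrative assumptions, not actual GiveWell numbers.

```python
# Illustrative willingness-to-pay arithmetic; every number is an assumption.
COST_PER_LIFE_SAVED = 5_000     # assumed rough cost to avert one death via a top charity
TORTURE_VS_DEATH_WEIGHT = 10    # assumed: preventing prolonged torture valued 10x preventing a death

# If averting one death is worth paying COST_PER_LIFE_SAVED, the same moral
# weighting implies paying up to this much to prevent one prolonged torture:
willingness_to_pay = TORTURE_VS_DEATH_WEIGHT * COST_PER_LIFE_SAVED
print(f"Implied willingness to pay: ${willingness_to_pay:,}")  # -> $50,000
```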
The possibility of an AI torturing many conscious entities has been mentioned[1] on this site, and I assume that funding MIRI would help reduce that probability. But what is the current probability?
This is obviously a difficult question, but it seems to me that I need an estimate and there is no way around it. I don't even know where to start... Suggestions?
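One way to start is a Fermi-style decomposition: break the scenario into conditional steps, assign a rough probability to each, and feed the product into an expected-value comparison between the two donation targets. The sketch below shows only the structure; every probability, the number of entities at risk, and the marginal-impact parameter are placeholder assumptions, not estimates anyone has endorsed.

```python
# Fermi-style decomposition of P(an AI tortures many conscious entities).
# Every number below is a placeholder assumption, chosen only to show the structure.
p_agi_this_century = 0.5       # assumed: transformative AI is built this century
p_misused_or_misaligned = 0.2  # assumed: it is weaponized or badly misaligned
p_torture_given_misuse = 0.05  # assumed: that failure takes the form of mass torture

p_torture_scenario = (p_agi_this_century
                      * p_misused_or_misaligned
                      * p_torture_given_misuse)

# Crude expected-value comparison for a $1,000 donation (assumptions throughout).
donation = 1_000
cost_per_life_saved = 5_000        # assumed GiveWell-style cost per death averted
torture_vs_death_weight = 10       # assumed moral weight, as above
entities_at_risk = 1e9             # assumed scale of the torture scenario
risk_reduction_per_dollar = 1e-15  # assumed: fractional cut in the scenario's
                                   # probability per marginal dollar to MIRI (pure guess)

ev_givewell = donation / cost_per_life_saved  # deaths averted ("death-equivalents")
ev_miri = (donation * risk_reduction_per_dollar * p_torture_scenario
           * entities_at_risk * torture_vs_death_weight)  # same units

print(f"P(torture scenario): {p_torture_scenario:.3f}")
print(f"GiveWell EV: {ev_givewell:.4f} death-equivalents averted")
print(f"MIRI EV:     {ev_miri:.6f} death-equivalents averted")
```

The value of writing it out this way is not the bottom-line number, which is dominated by the guessed parameters, but that it localizes the disagreement into a handful of explicit inputs that can each be argued about or researched separately.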
[1] http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/