I think the scenario of an AI torturing humans in the future is very, very unlikely. For most possible goals an AI could have, it will have ways to accomplish them that are more effective than torturing humans.
The chance of an AI torturing humans as a means to some other goal does seem low, but what about an AI torturing humans as an end in itself? I think CEV could result in this with non-negligible probability (>0.000001). I wouldn't be surprised if the typical LessWrong poster has a very different morality from the majority of the population, so our intuitions about the results of CEV could be very wrong.
I am trying to decide how to allocate my charitable donations between GiveWell's top charities and MIRI, and I need a probability estimate to make an informed decision. Could you help me?
Background on my moral system: I place greater value on reducing intense suffering in conscious entities than on merely preventing death. An unexpected, instant, painless death is unfortunate, but I would prefer it to a painful and chronic condition.
Given my beliefs, it follows logically that I would pay a relatively large amount to save a conscious entity from prolonged torture.
The possibility of an AI torturing many conscious entities has been mentioned[1] on this site, and I assume that funding MIRI will help reduce its probability. But what is its current probability?
This is obviously a difficult question, but it seems to me that I need an estimate and there is no way around it. I don't even know where to start... suggestions?
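For what it's worth, one way to structure the comparison, once you have estimates, is expected disvalue averted per dollar. Here is a minimal sketch; every number in it is a made-up placeholder, not an actual estimate of either organization's impact, and the function name is just illustrative:

```python
def ev_per_dollar(prob_reduction_per_dollar, disvalue):
    """Expected disvalue averted by one marginal dollar:
    (reduction in probability of the bad outcome per dollar) * (disvalue of that outcome)."""
    return prob_reduction_per_dollar * disvalue

# Placeholder inputs -- NOT real figures, just to show the arithmetic.
# MIRI-style bet: a tiny probability shift on an outcome with vast disvalue.
miri_ev = ev_per_dollar(1e-15, 1e12)
# GiveWell-style bet: a larger probability shift on a much smaller outcome
# (disvalue normalized to 1 unit).
givewell_ev = ev_per_dollar(1e-4, 1.0)

print(miri_ev, givewell_ev, miri_ev > givewell_ev)
```

The point is only that the answer is the product of two numbers, so the probability estimate you are asking for is necessary but not sufficient: you also need an estimate of how much a marginal donation moves that probability.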
[1] http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/