If you believe my moral system (not the topic of this post) is patently absurd, please PM me the full version of your argument. I promise to review it with an open mind. Note: I am naturally afraid of torture outcomes, but that doesn't mean I'm not excited about FAI. That would be patently absurd.
Torture mostly comes up because philosophical thought-experiments...
To clarify: are you saying there is no chance of torture?
Yes, I am saying that the scenario you allude to is vanishingly unlikely.
But there's another point, which cuts close to the core of my values, and I suspect it cuts close to the core of your values, too. Rather than explain it myself, I'm going to suggest reading Scott Alexander's Who By Very Slow Decay, which is about aging.
That's the status quo. That's one of the main reasons I, personally, care about AI: if it's done right, the thing Scott describes won't be a part of the world anymore.
I am trying to decide how to allocate my charitable donations between GiveWell's top charities and MIRI, and I need a probability estimate to make an informed decision. Could you help me?
Background on my moral system: I place a greater value on reducing the intense suffering of conscious entities than on merely preventing death. An unexpected, instant, painless death is unfortunate, but I would prefer it to a painful and chronic condition.
Given my beliefs, it follows logically that I would pay a relatively large amount to save a conscious entity from prolonged torture.
The possibility of an AI torturing many conscious entities has been mentioned[1] on this site, and I assume that funding MIRI will help reduce its probability. But what is its current probability?
Obviously a difficult question, but it seems to me that I need an estimate and there is no way around it. I don't even know where to start...suggestions?
[1] http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/