I think negative utilitarianism is the most common ethical framework that would cause someone to choose torture in the specks vs. torture case and no torture in this case. That's because the specks vs. torture case involves people being harmed under both options, whereas this case pits people gaining positive utility against someone being harmed. Some formulations of negative utilitarianism, like the one advocated by Brian Tomasik, hold that avoiding extreme suffering is the most important moral principle and would therefore argue against torture in both cases. But a very simple negative utilitarian calculus might favor torture in the first case and not in the second.
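To spell out that simple calculus with illustrative (made-up) quantities: write $s_{\text{speck}}$ for the suffering caused by one speck and $s_{\text{torture}}$ for 50 years of torture, with $s_{\text{torture}} \gg s_{\text{speck}}$ but both finite. Then

$$\text{specks case: } 3\uparrow\uparrow\uparrow 3 \cdot s_{\text{speck}} > s_{\text{torture}} \implies \text{choose torture}$$

$$\text{this case: } 3\uparrow\uparrow\uparrow 3 \cdot 0 = 0 < s_{\text{torture}} \implies \text{choose no torture}$$

since, under this admittedly crude accounting, the sadists' frustrated preferences count as forgone positive utility rather than as suffering.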
I would guess that few people in the rationalist/EA community (and perhaps in the broader world as well) think that kind of simplistic negative utilitarian calculation is the morally correct one. My guess is that most people would think either that preventing extreme suffering matters most or that a more standard utilitarian calculus is correct. For a well-reasoned argument against the negative utilitarian position, Toby Ord's discussion of his view is worth checking out.
I'm not sure negative utilitarianism changes things. Positive and negative utilitarianism are equivalent whenever utility functions are bounded and no births or deaths result from the decision.
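A quick sketch of why, under those assumptions: suppose the same $n$ people exist either way and each person $i$'s utility is bounded, say $u_i \in [0, M]$. Define person $i$'s suffering as the shortfall $M - u_i$. Then total suffering is

$$\sum_{i=1}^{n} (M - u_i) = nM - \sum_{i=1}^{n} u_i,$$

so with $n$ fixed, minimizing total suffering and maximizing total utility pick out exactly the same option.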
Negative utilitarianism interprets this situation as the sadists suffering from boredom, which can be slightly alleviated by knowing that the guy they hate is suffering.
Do you mean negative utilitarianism would get them to choose torture, rather than dust specks? I would have considered both to be forms of suffering.
If consequences are completely ignored, I lean towards the torture, but if consequences are considered I would choose no torture, in the hope that it accelerates moral progress (at least if they had never before seen someone who "ought to be tortured" get away with it, the first instance might spark change, which might be good?). In the speck case, I choose torture.
Although under strict preference utilitarianism, wouldn't a change in values/moral progress be considered bad, for the same reason a paperclip maximizer would consider it bad?
I should say we assume that we're deciding which option a stable, incorruptible AI should choose. I'm pretty sure any moral system that chose torture in situations like this would not lead to good outcomes if applied in practice, but that's not what I'm wondering about; I'm just trying to figure out which outcome is better. In short, I'm asking an axiological question, not a moral one. https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/
My intuition strongly says that the torture is worse here, even though I choose torture in the original, but I don't have an argument for this, because my normal axiological system, preference utilitarianism, seems to say unavoidably that torture is better.
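To make the tension concrete with made-up quantities: let $p$ be the value of satisfying one sadist's mild preference and $T$ the disutility of 50 years of torture, both finite, with $p$ small and $T$ enormous. Summing preferences gives

$$3\uparrow\uparrow\uparrow 3 \cdot p \gg T,$$

so the same aggregation that selects torture over specks in the original problem seems to select torture here as well.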
Suppose that instead of the classic version of torture vs. specks, where the choice is between specks in the eyes of 3^^^3 people or one person tortured for 50 years, there are no specks; instead there are 3^^^3 people who just want the one guy to be tortured. (No particular reason; this just happens to be part of their utility function, which is not up for grabs.) The preference of each is mild, but somewhat stronger than the preference not to get a speck in one's eye. Is torture the right decision?
I am especially interested in hearing from people who answer differently in this situation than in the original situation.