Most of the usual thought experiments that justify expected utilitarianism trade off fun for fun, or suffering for suffering. Here's a situation that mixes the two. You are offered the chance to press a button that will select a random person (not you) and torture them for a month. In return, the machine will give N people who are not currently suffering X units of fun each. The fun is purely positive; it doesn't save any creature from pain.
1) How large would X and N have to be for you to accept the offer?
2) If you say X or N must be very large, does this prove that you measure torture and fun on, in effect, different scales, and are therefore a deontologist rather than a utilitarian? (One way to formalize this is sketched below.)
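As a hedged way to make question 2 precise (this framing is mine, not part of the original offer): a straight total utilitarian values everything on a single scale, so the decision rule would be to accept iff

\[ N \cdot X > T, \]

where \(T\) is the disutility of one month of torture expressed in the same units as \(X\). If no finite \(N\) and \(X\) ever satisfy this for you, you are implicitly treating torture and fun as incommensurable quantities, which is what the question is probing.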
To the anticipated objection that "you're assuming people make decisions consistently": I know people are inconsistent, so I'm only interested in their answers to this concrete question, not in what would be "logically implied" by the rest of their behavior.