Eugine_Nier comments on Utilitarianism and Relativity Realism - Less Wrong

-3 Post author: TruePath 23 June 2014 07:12PM




Comment author: dankane 22 June 2014 07:33:27AM 5 points

So utilitarianism has known paradoxes if you allow infinite positive/negative utilities (basically because infinite sums don't always behave well). On the other hand, if you restrict yourself, say, to situations that only last finitely long, all these paradoxes go away. If both devices last for the same amount of subjective time, this holds true in all reference frames, and thus in all reference frames you can say that the situations are equally good.
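A minimal sketch of the "infinite sums don't always behave well" point (the specific series here is my own illustration, not from the comment): the same countable collection of utilities, one +1 and one -1 per pair of people, appears to sum to different values depending on the order in which the terms are taken, so "total utility" is ill-defined without a privileged ordering.

```python
from itertools import islice

def partial_sum(terms, n):
    """Sum of the first n terms of an infinite series."""
    return sum(islice(terms, n))

def alternating():
    # +1, -1, +1, -1, ...  (one +1 for every -1)
    while True:
        yield 1
        yield -1

def reordered():
    # +1, +1, -1, +1, +1, -1, ...  (same multiset of terms, reordered)
    while True:
        yield 1
        yield 1
        yield -1

# Partial sums of the first ordering oscillate between 0 and 1...
print([partial_sum(alternating(), n) for n in (10, 100, 1000)])  # [0, 0, 0]
# ...while the reordering drifts off to +infinity.
print([partial_sum(reordered(), n) for n in (9, 99, 999)])       # [3, 33, 333]
```

Restricting to finitely many terms (finitely long situations) makes the sum order-independent, which is the sense in which the paradoxes go away.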

Comment author: Eugine_Nier 22 June 2014 06:10:29PM 3 points

> On the other hand, if you restrict yourself, say to situations that only last finitely long all these paradoxes go away.

If you restrict to finitely long situations, you wind up with weird effects at the cutoff.

Comment author: dankane 25 June 2014 03:50:01AM 0 points

This isn't a problem if you believe that there will only ever be finitely many people. Or if you exponentially discount (in some relativistically consistent manner) at an appropriate rate.
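A quick sketch of why exponential discounting helps (the constant utility stream and discount factor here are illustrative assumptions): with a discount factor gamma < 1, even an eternal stream of bounded utility has a finite discounted total, so there is no dependence on where, or whether, you cut things off.

```python
def discounted_total(utility_per_step, gamma, horizon):
    """Partial sum of sum_{t=0}^{horizon-1} utility * gamma**t."""
    return sum(utility_per_step * gamma**t for t in range(horizon))

gamma = 0.99
# The partial sums converge to the closed form u / (1 - gamma) = 100,
# so the total is finite no matter how far the horizon recedes.
approx = discounted_total(1.0, gamma, 10_000)
exact = 1.0 / (1 - gamma)
print(approx, exact)  # both ~100.0
```

The "relativistically consistent manner" caveat matters because a naive discount keyed to coordinate time would disagree between reference frames; discounting in subjective (proper) time avoids that.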

Comment author: Manfred 23 June 2014 10:28:31PM *  0 points

Caring about times within some time limit in a single reference frame is sufficient.

Comment author: Eugine_Nier 24 June 2014 12:51:06AM 2 points

The problem with a time limit is that it encourages you to not care what happens afterwards.

Comment author: Manfred 24 June 2014 03:56:59AM 0 points

Hm, I think any integrable time-discounting function would also work. And the trouble with an AI that doesn't time-discount is that it gets Pascal's mugged by literally any chance of eternity.
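A sketch of the Pascal's-mugging point, under illustrative assumptions (a constant 1 util per subjective time step and a geometric discount factor gamma): without discounting, any nonzero probability p of an eternal payoff has unbounded expected utility and so dominates every finite consideration, whereas with an integrable discount the expected utility is capped at p / (1 - gamma) no matter how long the promised eternity is.

```python
def expected_discounted_utility(p, gamma, horizon):
    """Expected utility of a probability-p offer of 1 util per step
    for `horizon` steps, discounted by gamma per step:
    p * sum_{t<horizon} gamma**t."""
    return p * sum(gamma**t for t in range(horizon))

p, gamma = 1e-9, 0.9
bound = p / (1 - gamma)  # = 1e-8, a hard cap independent of the horizon
# Lengthening the promised eternity barely moves the expected value:
for horizon in (10, 1_000, 100_000):
    value = expected_discounted_utility(p, gamma, horizon)
    print(horizon, value, value <= bound)
```

The undiscounted analogue (gamma = 1) grows without bound as the horizon does, which is exactly the mugging.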