Eugine_Nier comments on Utilitarianism and Relativity Realism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (30)
If you restrict attention to finitely long situations, you wind up with weird edge effects at the cutoff.
This isn't a problem if you believe there will only ever be finitely many people, or if you exponentially discount (in some relativistically consistent manner) at an appropriate rate.
Caring about times within some time limit in a single reference frame is sufficient.
The problem with a time limit is that it encourages you not to care about what happens afterwards.
Hm, I think any integrable time-discounting function would also work. And the trouble with an AI that doesn't time-discount is that it gets Pascal's-mugged by literally any chance of eternity.
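The convergence point can be sketched numerically (a minimal illustration with made-up numbers, not anyone's actual proposal): with an exponential discount, even an infinite stream of bounded per-step utility has a finite present value, so "any chance of eternity" contributes at most a bounded amount to the expected-utility calculation.

```python
def discounted_value(utility_per_step, discount, steps):
    """Partial sum of utility_per_step * discount**t for t < steps."""
    return sum(utility_per_step * discount**t for t in range(steps))

def infinite_horizon_bound(utility_per_step, discount):
    """Geometric-series limit: u / (1 - d) bounds every partial sum."""
    return utility_per_step / (1.0 - discount)

u, d = 1.0, 0.99  # illustrative values
partial = discounted_value(u, d, 10_000)
bound = infinite_horizon_bound(u, d)
# partial approaches, but never exceeds, the finite bound u / (1 - d)
print(partial, bound)
```

A non-discounting agent has no such bound: the undiscounted sum grows without limit as the horizon lengthens, which is exactly what lets an arbitrarily small probability of eternity dominate the calculation.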