army1987 comments on Humans are utility monsters - Less Wrong

67 Post author: PhilGoetz 16 August 2013 09:05PM




Comment author: [deleted] 23 August 2013 10:05:49PM 1 point [-]

This can be resolved by taking a timeless view of the population, so that someone still counts as part of the average even after they die.

In that view, does someone already count as part of the average even before they are born?

Comment author: TheOtherDave 24 August 2013 05:57:48AM 2 points [-]

I would think so. Of course, that's not to say we know that they count... my confidence that someone who doesn't exist once existed is likely much higher, all else being equal, than my confidence that someone who doesn't exist is going to exist.

This should in no way be understood as endorsing the more general formulation.

Comment author: selylindi 28 August 2013 03:30:39PM *  0 points [-]

Yes and no. Yes, in that the timeless view is timeless in both directions. No, in that for decision-making we can only take into account predictions of the future, not the future itself.

For intuitive purposes, consider the current political issues of climate change and economic bubbles. It might be the case that we who are now alive could have a better quality of life if we used up the natural resources and had the government propagate a massive economic bubble that wouldn't burst until after we died. If we don't value the welfare of possible future generations, we should do those things. If we do value the welfare of possible future generations, we should not.

For technical purposes, suppose we have an AIXI-bot with a utility function that values human welfare. Examination of the AIXI definition makes it clear that the utility function is evaluated over the (predicted) total future. (Entertaining speculation: if the utility function were additive, such an optimizer might kill off those of us using more than our share of resources to keep us within Earth's carrying capacity, making it able to support a billion years of humanity; or it might enslave us to build space colonies capable of supporting unimaginable throngs of future, happier humans.)
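The contrast between averaging over the currently living and averaging over the timeless population can be sketched in a toy calculation. Everything here is a hypothetical illustration (the statuses, welfare numbers, and function names are invented for this sketch, not taken from the thread or from any AIXI formalism):

```python
# Toy sketch of "snapshot" vs. "timeless" average welfare.
# A real decision-maker would weight "predicted" people by its
# credence that they come to exist; here they count fully.

def snapshot_average(population):
    """Average welfare over people alive right now."""
    alive = [p["welfare"] for p in population if p["status"] == "alive"]
    return sum(alive) / len(alive)

def timeless_average(population):
    """Average welfare over everyone who ever exists:
    the dead, the living, and predicted future people."""
    everyone = [p["welfare"] for p in population]
    return sum(everyone) / len(everyone)

population = [
    {"status": "dead",      "welfare": 4.0},  # still counts timelessly
    {"status": "alive",     "welfare": 6.0},
    {"status": "alive",     "welfare": 8.0},
    {"status": "predicted", "welfare": 5.0},  # counts only if they get born
]

print(snapshot_average(population))  # 7.0
print(timeless_average(population))  # 5.75
```

Note how the two views disagree: adding a below-average future person lowers the timeless average while leaving the snapshot average untouched, which is exactly why the two views recommend different policies toward possible future generations.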

For philosophical purposes, there's an important sense in which my brain states change so much over the years that I can meaningfully, if not literally, say "I'm not the same person I was a decade ago," and expect that the same will be true a decade from now. So if I want to value my future self, there's a sense in which I must value the welfare of some only-partly-known set of possible future persons.

Comment author: Fronken 23 August 2013 10:31:49PM 0 points [-]

Presumably, only if they get born. Although that's tweakable.