Nisan comments on Open thread, 24-30 March 2014 - Less Wrong

6 Post author: Metus 25 March 2014 07:42AM




Comment author: Nisan 25 March 2014 04:55:25PM *  5 points [-]

I welcome criticism of my new personal favorite population axiology:

The value of a world-history that extends the current world-history is the average welfare of every life after the present moment. For people who live before and after the current moment, we need to evaluate the welfare of the portion of their life after the current moment. The welfare of a person's life is allowed to vary nonlinearly with the number of years the person lives a certain kind of life, and it's allowed to depend on whether the person's experiences are veridical.

This axiology implies that it's important to ensure that the future will contain many people who have better lives than us; it's consistent with preferring to extend someone's life by N years rather than creating a new life that lasts N years. It's immune to Parfit's Repugnant Conclusion, but doesn't automatically fall prey to the opposite of the Repugnant Conclusion. It implies that our decisions should not depend on whether the past contained a large, prosperous civilization.
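The evaluation rule can be sketched in a few lines of code. This is only an illustration with made-up lifespans and a deliberately abstract welfare function; nothing here is part of the axiology beyond "average the welfare of every post-present life segment":

```python
# Hypothetical sketch of the proposed axiology: the value of a
# world-history extension is the mean welfare of every life lived
# after the present moment. For a person alive both before and
# after "now", only the post-present portion of their life counts.

def world_value(lives, now):
    """lives: list of (birth, death, welfare_fn) tuples, where
    welfare_fn(start, end) gives the welfare of that portion of a
    life (it may be nonlinear in duration). Returns the average
    welfare over all post-`now` life segments."""
    segments = []
    for birth, death, welfare_fn in lives:
        if death > now:  # this life extends past the present moment
            segments.append(welfare_fn(max(birth, now), death))
    return sum(segments) / len(segments)

# Illustrative (made-up) welfare function: linear in years lived.
linear = lambda start, end: end - start

lives = [
    (1950, 2040, linear),  # alive before and after now=2014
    (2020, 2100, linear),  # born after now
]
print(world_value(lives, now=2014))  # averages 26 and 80 -> 53.0
```

Swapping in a nonlinear `welfare_fn`, or one sensitive to whether experiences are veridical, changes the numbers but not the aggregation rule.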

There are straightforward modifications for dealing with general relativity and splitting and merging people.

The one flaw is that it's temporally inconsistent: if future generations average the welfare of lives after their own "present moments", they will make decisions we disapprove of.

Comment author: solipsist 25 March 2014 11:52:41PM 3 points [-]

I build a robot that hibernates until the last person presently alive dies, then exterminates all people who are poor, unhappy, or don't like my robot. Good thing?

Comment author: Nisan 26 March 2014 04:43:45PM 2 points [-]

A person whose life is worth living could have their life's welfare increase monotonically with their lifespan. In that case, ending a life usually makes the world-history worse.
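For instance, with a hypothetical welfare function that is monotone but nonlinear in lifespan (diminishing returns in years), shortening any life lowers that life's contribution to the average:

```python
import math

# Hypothetical monotone, nonlinear welfare: grows with years lived,
# but with diminishing returns. Any choice of increasing function
# would make the same point.
welfare = lambda years: math.sqrt(years)

# The robot's extermination shortens lives, so each shortened life
# contributes less welfare than it otherwise would have:
print(welfare(49) < welfare(81))  # True: a shorter life, lower welfare
```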

Comment author: philh 25 March 2014 06:53:09PM *  2 points [-]

If future generations average the welfare of lives after their "present moments", they will make decisions we disapprove of.

Can you give an example? It seems to me that if they decide at t_1 to maximise average welfare from t_1 to ∞, then given that welfare from t_0 to t_1 is held fixed, that decision will also maximise average welfare from t_0 to ∞.

Edit: oh, I was thinking of an average over time, not people.

Comment author: Nisan 25 March 2014 08:26:20PM 4 points [-]

Earth produces a long and prosperous civilization. After nearly all the resources are used up, the lean and hardscrapple survivors reason, "let's figure out how to squeeze the last bits of computation out of the environment so that our children will enjoy a better life than us before our species goes extinct". But from our perspective, those children won't have as much welfare as the vast majority of human lives in our future, so their births would bring our average down. We would want the hardscrapple survivors not to produce more people.
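The arithmetic behind this scenario can be made concrete with made-up welfare numbers (all figures here are hypothetical, chosen only to show the two vantage points disagreeing):

```python
# Made-up numbers illustrating the temporal-inconsistency worry.
# From our (t0) perspective: a long prosperous era of lives at
# welfare 100, followed by possible hardscrapple children at 40.
prosperous = [100] * 1000
children = [40] * 10

avg = lambda xs: sum(xs) / len(xs)

# From t0, adding the children drags the overall average down:
print(avg(prosperous))             # 100.0
print(avg(prosperous + children))  # ~99.4 -> we prefer no children

# From the survivors' vantage point (t1), only post-t1 lives count,
# and children at 40 beat the survivors' own status quo:
scraping_by = [30] * 10            # hypothetical survivor welfare
print(avg(children) > avg(scraping_by))  # True -> they prefer children
```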

Comment author: shminux 25 March 2014 06:30:38PM 0 points [-]

it's important to ensure that the future will contain many people who have better lives than us

Are you sweeping the complexity of value under the terms "better" and "veridical"? Does following your axiology prevent humanity from evolving into a race of happy-go-lucky clones?

Comment author: Nisan 25 March 2014 08:30:00PM 3 points [-]

Are you sweeping the complexity of value under the terms "better" and "veridical"?

Yes. It's hard enough to come up with a decent way of aggregating individual welfares without making a comprehensive theory of value.

Comment author: VAuroch 28 March 2014 06:45:18PM -1 points [-]

on whether the person's experiences are veridical.

Is this different from whether their perception of their experiences is correct, or is it jargon?

Comment author: Nisan 28 March 2014 07:03:36PM 1 point [-]

Yes. I mean (for example) that if a person believes they're married to someone, their life's welfare could depend on whether their spouse is a real person or a simple chatbot. Also, if a person feels that they've discovered a deep insight, their life's welfare could depend on whether they have actually discovered such an insight.

Comment author: VAuroch 28 March 2014 08:38:30PM -2 points [-]

So it's just jargon. OK.