
Comment author: torekp 28 November 2014 01:34:02AM 1 point

Or you could just follow Michael Huemer and embrace the repugnant conclusion (pdf).

Comment author: Stuart_Armstrong 28 November 2014 09:42:21AM 0 points

I could, but see absolutely no reason to.

Comment author: wedrifid 26 November 2014 02:34:28AM 1 point

I gave two TEDx talks in two weeks (also a true statement: I gave two TEDx talks in 35 years), one on cosmic colonisation, one on xrisks and AI.

I'm impressed. (And will look them up when I get a chance.)

Comment author: Stuart_Armstrong 26 November 2014 11:46:45AM 1 point

They are not out yet; the wheels of TEDx videos move slowly and mysteriously.

Comment author: Stuart_Armstrong 25 November 2014 10:06:28PM 9 points

I gave two TEDx talks in two weeks (also a true statement: I gave two TEDx talks in 35 years), one on cosmic colonisation, one on xrisks and AI.

Comment author: Lukas_Gloor 24 November 2014 05:36:17PM 2 points

According to how I understand the proposed view (which might well be wrong!), there seems to be a difficulty in how the natural zero affects tradeoffs with the welfare of pre-existing beings. How would the view deal with the following cases:

Case_1: Agent A has the means to bring being B into existence, but if no further preparations are taken, B will be absolutely miserable. If agent A takes away resources from pre-existing being C in order to later give them to B, thereby causing a great deal of suffering to C, B's life-prospects can be improved to a total welfare of slightly above zero. If the natural zero is sufficiently negative, would such a transaction be permissible?

Case_2: If it's not permissible, it seems that we must penalize cases where the natural zero starts out negative. But how about a case where the natural zero is just slightly negative, and agent A only needs to invest a tiny effort to guarantee being B a hugely positive life? Would that always be impermissible?

Comment author: Stuart_Armstrong 24 November 2014 06:01:42PM 2 points

This is the tricky issue of dealing with natural zeros that are below the "zero" of happy/meaningful lives (whatever that is).

As I said, this isn't my favourite setup, but I would advocate requiring the natural zero to be positive, and not bringing anyone into existence otherwise. That means I'd have to reject Case_2 - unless someone would be sufficiently happy about B existing with a hugely positive life that their happiness outweighs the tiny effort.

Total utilitarians, your own happiness can make people come into existence even in these non-total utilitarian situations!

Comment author: joaolkf 24 November 2014 04:53:58PM 2 points

Not sure if directly related, but some people (e.g. Alan Carter) suggest using indifference curves. These consist of isovalue curves on a plane with average happiness and number of happy people as axes, each curve corresponding to the same amount of total utility. The Repugnant Conclusion scenario would lie nearly flat along the number-of-happy-people axis, and a fully satisfied Utility Monster nearly flat along the average-happiness axis. It seems this framework produces results similar to yours. Every time you create a being slightly less happy than the average, you gain in the number of happy people but lose in average happiness, and might end up with the exact same total utility. A minimal sketch of this framing is below.
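
A minimal sketch, treating each indifference curve as a level set of total utility U = n * a (the function and numbers are purely illustrative, not Carter's own formalism):

```python
# Treat each indifference curve as a level set of total utility U = n * a,
# where n is the number of happy people and a is their average happiness.
# All numbers are purely illustrative.

def total_utility(n, a):
    """Total utility of a population of n people with average happiness a."""
    return n * a

# Two corners of the same iso-utility curve, U = 100:
print(total_utility(100, 1.0))   # "repugnant" corner: many people, barely happy
print(total_utility(1, 100.0))   # "utility monster" corner: one very happy being

# The curve n * a = 100 is a hyperbola a = 100 / n in the (n, a) plane:
for n in (1, 2, 5, 10, 50, 100):
    print(n, 100 / n)
```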

Comment author: Stuart_Armstrong 24 November 2014 05:30:37PM 0 points

Yep, I've seen that idea. It's quite neat, and allows hyperbolic indifference curves, which are approximately what you want.

Population ethics and utility indifference

3 Stuart_Armstrong 24 November 2014 03:18PM

It occurs to me that the various utility indifference approaches might be usable in population ethics.

One challenge for non-total utilitarians is how to deal with new beings. Some theories - average utilitarianism, for instance, or some other systems that use overall population utility - have no problem dealing with this. But many non-total utilitarians would like to see creating new beings as a strictly neutral act.

One way you could do this is by starting with a total utilitarian framework, but subtracting a certain amount of utility every time a new being B is brought into the world. In the spirit of utility indifference, we could subtract exactly the expected utility that we expect B to enjoy during their life.
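
In symbols (just one way to write it; the notation is illustrative rather than standard): if U_total is the ordinary total-utilitarian sum and E[U_B] is B's expected lifetime utility, estimated with the information available at the moment of creation, then the corrected value of a world in which B is created is

U_corrected = U_total - E[U_B],

which makes creating B value-neutral in expectation.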

This means that we should be indifferent as to whether B is brought into the world or not, but, once B is there, we should aim to increase B's utility. There are two problems with this. The first is that, strictly interpreted, we would also be indifferent to creating people with negative utility. This can be addressed by only doing the "utility correction" if B's expected utility is positive, thus preventing us from creating beings only to have them suffer.
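
A minimal sketch in code of how the correction, with that positive-only condition, might work (the function name and numbers are mine, purely for illustration):

```python
def corrected_value(world_utility, expected_utility_of_new_being=None):
    """Value of a world under the corrected total-utilitarian objective.

    world_utility: total utility summed over everyone, including any new being B.
    expected_utility_of_new_being: B's expected lifetime utility, estimated at
        the moment of creation (None if no new being is created).
    """
    if expected_utility_of_new_being is None:
        return world_utility
    # Only apply the correction when B's expected utility is positive,
    # so that creating a miserable being still counts as a loss.
    correction = max(expected_utility_of_new_being, 0)
    return world_utility - correction

# Existing population has total utility 100.
print(corrected_value(100))            # don't create B: 100
print(corrected_value(100 + 30, 30))   # create B with expected utility 30: 100 (indifferent)
print(corrected_value(100 - 20, -20))  # create a miserable B: 80 (strictly worse)
```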

The second problem is more serious. What about all the actions we could take, ahead of time, to harm or benefit the new being? For instance, it would seem perverse to argue that buying a rattle for a child after they are born (or conceived) is an act of positive utility, whereas buying it before they were born (or conceived) would be a neutral act, since the increase in the child's expected utility is cancelled out by the above process. Not only is it perverse, but it isn't timeless, and it isn't stable under self-modification.
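
To make the asymmetry concrete, here is a toy calculation (all numbers invented purely for illustration):

```python
# Existing population: total utility 100. Child's expected lifetime utility: 30.
# Buying the rattle adds 1 to the child's lifetime utility.

# Rattle bought AFTER the child exists: the correction was fixed at 30 when the
# child was created, so the extra utility counts in full.
value_rattle_after = (100 + 30 + 1) - 30    # = 101

# Rattle bought BEFORE creation: the child's expected utility at creation is now
# 31, so the correction grows by the same amount and the purchase is neutral.
value_rattle_before = (100 + 31) - 31       # = 100

print(value_rattle_after, value_rattle_before)
```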

continue reading »
Comment author: John_Maxwell_IV 23 November 2014 09:09:10AM 5 points

This essay by a business school prof argues that companies are irrationally demanding in who they choose to hire: http://www.upenn.edu/gazette/0113/feature2_1.html

Comment author: Stuart_Armstrong 23 November 2014 02:19:09PM 0 points

Thanks!

Comment author: Stuart_Armstrong 20 November 2014 03:41:42PM 1 point

For this problem: it's not whether the guy has solved AI, it's whether he is more likely than other people to have solved AI (more exactly, of all the actions you could take to increase the chance of FAI, is interacting with this guy the most helpful?).

Comment author: Stuart_Armstrong 20 November 2014 03:40:29PM 2 points

Great!

Comment author: owencb 20 November 2014 01:11:25PM 1 point

Thanks, nice write-up.

Solution 1 seems to see quite a lot of use in the world (often but not always in conjunction with 4): one player will set a price without reference to the other player's utility function, setting up an ultimatum.

Comment author: Stuart_Armstrong 20 November 2014 02:04:11PM 0 points

But setting a price is an iterative process, depending on how much of the good is purchased...
