
Comment author: Armok_GoB 12 December 2014 10:31:53AM 1 point [-]

It has near maximal computational capacity, but that capacity isn't being "used" for anything in particular that is easy to determine.

This is actually a very powerful criterion, in terms of the number of false positives and negatives. Sadly, the false positives it DOES have still far outweigh the genuine positives, and include all the WORST outcomes (i.e., virtual hells) as well.

Comment author: Stuart_Armstrong 15 December 2014 01:07:30PM *  1 point [-]

Interesting. Is this kinda like a minimum complexity of outcome requirement?

Comment author: Stuart_Armstrong 11 December 2014 01:05:52PM 5 points [-]

The main difference with a utility-function-based approach is that there is no concept of "sufficient effort". Every action gets an (expected) utility attached to it. Sending £10 to an efficient charity is X utilons above not doing so; but selling everything you own to donate to the charity is (normally) even higher.

So I think the criticism is accurate, in that humans almost never achieve perfection in following utility; there's always room for more effort, and there's no distinction between actions that are "allowed" versus "required" (as other ethical systems sometimes have). So for certain types of mind (perfectionists, or those with a high desire for closure), a utility-function-based morality demands ever more: they can never satisfy the requirements of their morality. Those who are content with "doing much better" rather than "doing the absolute best" won't find it so crippling.

Or, put more simply, for a utility-function-based approach, no one is going to (figuratively) hand you a medal and say "well done, you've done enough". Some people see this as equivalent to "you're obliged to do the maximum."
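To make the "no finish line" point concrete, here's a minimal sketch (the utility figures are invented purely for illustration): an expected-utility view only ranks actions, so there is never a threshold past which the morality certifies "enough".

```python
# Minimal sketch with invented numbers: an expected-utility view only ranks
# actions; there is no "sufficient effort" threshold marking an action as
# merely "allowed" rather than "required".
actions = {
    "do nothing": 0.0,
    "send £10 to an efficient charity": 10.0,
    "donate £1,000": 900.0,
    "sell everything you own and donate it": 5000.0,
}

# Whatever you actually do, there is (normally) some higher-utility action
# left on the table; the ranking never hands out a medal for "enough".
ranked = sorted(actions.items(), key=lambda kv: kv[1], reverse=True)
for action, utility in ranked:
    print(f"{utility:8.1f} utilons  {action}")
```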

Comment author: owencb 08 December 2014 07:43:51PM 0 points [-]

Yeah, it definitely seems like we're talking past each other here. I think I don't understand what you mean by "aggregation" -- I have a different impression from this comment than from the opening post. Perhaps you can clarify that?

Not sure if this is relevant: From a utilitarian point of view I think you can aggregate when creating lives, but of course the counterfactuals you'll use will change (as mostly what you're trying to work out is how good creating a life is).

Comment author: Stuart_Armstrong 09 December 2014 06:10:57PM 0 points [-]

Let me try and be careful and clear here.

What I meant by "aggregation" is that when we have to choose between X and Y once, we may have unclear intuitions, but if we have to choose between X and Y multiple times (given certain conditions), the choice is clear (and is Y, for example).

There are two intuitive examples of this. The first is when X causes a definite harm and Y causes a probability of harm, as in http://lesswrong.com/lw/1d5/expected_utility_without_the_independence_axiom/ . The second is the example I gave here, where X causes harm to a small group while Y causes smaller harm to a larger group.

Now, the "certain conditions" can be restrictive (here it is applied repeatedly to a fixed population). I see these aggregation arguments as providing at least some intuitive weight to the idea that Y>X even in the one-shot case. However, as far as I can tell, this aggregation argument (or anything similar) is not available for creating populations. Or do you see an analogous idea?

Comment author: mwengler 09 December 2014 03:38:20PM 0 points [-]

Stuart, since you asked I spent a little bit of time to write up what I had found and include a bunch more figures. If you are interested, they can be found here: http://kazart.blogspot.com/2014/12/stock-price-volatility-log-normal-or.html

Comment author: Stuart_Armstrong 09 December 2014 05:57:26PM 0 points [-]

Cheers!

Comment author: owencb 08 December 2014 02:04:19PM 1 point [-]

Sorry, I don't see the force of your argument here. Because my intuition about the scenarios is dominated by the effect of creating people, which we certainly wouldn't expect to be zero for total utilitarianism, I can't see whether there should be any distinction for aggregation.

Would you be happy changing the second scenario so that we create 3^^^3 people in either case, as AABoyles suggested? If we did that, total utilitarianism would say that we should treat it the same as the first case (but my intuition also says this). Or if not that, can you construct another example to factor out the life-creation aspect which is driving most of the replies?

Comment author: Stuart_Armstrong 08 December 2014 07:04:36PM -1 points [-]

? I don't see your point. You can use aggregation as an argument to be more utilitarian when not creating extra people. But you can't use it when creating lives, as you point out. So the argument is unavailable in this context.

That's the whole point of the post, which I seem to have failed to make clear. Aggregation arguments are available for already created lives, not for the new creation of them.

Comment author: Eniac 08 December 2014 01:20:29AM 2 points [-]

so either civilizations are expanding to less than 1000 stars on average, or they're not using radio waves, or our guesses about how common they are are wrong

Absent FTL communication, it is hard to imagine a scenario in which any central control remains after civilization has spread to more than a few stars. There would be no stopping the expansion after that, so the first explanation is unlikely.

A civilization whose area of expansion includes our own solar system would be perceivable by many means other than radio, so the second explanation is really not relevant.

That leaves the third as the most likely explanation, I am afraid.

Comment author: Stuart_Armstrong 08 December 2014 09:11:08AM 2 points [-]

Absent FTL communication, it is hard to imagine a scenario in which any central control remains after civilization has spread to more than a few stars.

Each expansion part is led by an AI with a shared utility function, and a specified way of resolving negotiations.

Comment author: peter_hurford 04 December 2014 05:30:36PM 4 points [-]

A key difference is that when you're creating people, you're creating all their experiences in addition to the micro-torture, so if we expect those new lives to be good on balance, that's a net gain rather than a loss. So you'd prefer to create the 3^^^^3 people with micro-torture.

However, when you're not creating, you're just adding micro-torture, which is definitely a net loss. It turns out that the sum of the micro-losses is larger than the single larger loss of 50 years.

Thus, the asymmetry.
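A rough back-of-the-envelope version of the asymmetry (all numbers are stand-ins; 3^^^3 is far too large to represent, so a merely large N is used instead):

```python
# Stand-in numbers only: 3^^^3 is replaced by a merely large N, and the
# (dis)values are invented. The point is the sign of each comparison, not
# the magnitudes.
N = 10**19                    # stand-in for the 3^^^3 people
value_of_a_good_life = 1.0    # assumed net value of one created life
ms_of_torture = -1e-12        # assumed disvalue of a millisecond of torture
fifty_years_torture = -1e6    # assumed disvalue of 50 years of torture for one person

# Creating the people: their (good) lives come packaged with the micro-torture.
creating_case = N * (value_of_a_good_life + ms_of_torture)

# Not creating: the micro-torture is a pure loss, and it sums across N people.
existing_case = N * ms_of_torture

print(creating_case > 0)                    # True: a net gain, so create them
print(existing_case < fifty_years_torture)  # True: summed micro-losses exceed the 50-year loss
```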

Comment author: Stuart_Armstrong 08 December 2014 09:07:09AM 0 points [-]

From my edit: the purpose of this post is simply to show that there is a difference between certain reasoning for already existing and potential people. I don't argue that aggregation is the only difference, nor (in this post) that total utilitarianism for potential people is wrong. Simply that the case for existing people is stronger than for potential people.

Comment author: Unknowns 04 December 2014 04:35:46PM 1 point [-]

The other comments are correct. This does not mean that the goodness or badness involved in creating people does not add together. It simply means that when you create someone, you need to take into account the fact that you have created a person, not only the torture, while in the first case you have to take into account only the torture, since the people are already there.

Comment author: Stuart_Armstrong 08 December 2014 09:06:43AM 0 points [-]

As I said in my edit: the purpose of this post is simply to show that there is a difference between certain reasoning for already existing and potential people. I don't argue that aggregation is the only difference, nor (in this post) that total utilitarianism for potential people is wrong. Simply that the case for existing people is stronger than for potential people.

Comment author: gjm 05 December 2014 11:52:20AM 1 point [-]

For the reason that I, owencb, Unknowns, and peter_hurford have all given, there is a very big difference between the scenarios that, at least on the face of it, has nothing to do with aggregation.

Differences related specifically to aggregation may also be relevant, but I don't think this can be the right example to illustrate this because what it mostly illustrates is that for most of us a whole human life has a lot more moral weight than one millisecond of torture (assuming, again, that "one millisecond of torture" actually denotes anything meaningful).

You might want to consider either finding a different example, or explaining why it's a good example after all in some more convincing way than just saying "But it is".

Comment author: Stuart_Armstrong 08 December 2014 09:05:50AM 0 points [-]

See my edit: the purpose of this post is simply to show that there is a difference between certain reasoning for already existing and potential people. I don't argue that aggregation is the only difference, nor (in this post) that total utilitarianism for potential people is wrong.

Comment author: owencb 04 December 2014 03:32:17PM 6 points [-]

I think they're very different, and I don't think this is due to aggregation.

In the first case, the major difference between the options is the suffering caused. In the second case, the major difference between the two cases is the lives created. The suffering caused seems a very small side-effect in comparison.

In order to drive an intuition that the two cases are the same, you'd have to think that it was exactly neutral to create a person (before the millisecond of torture). This strikes me as highly implausible on any kind of consequentialist position. Even non-consequentialist positions have to have some view of what's appropriate here -- and while you can reasonably get creating the people to be incommensurably good with not doing so, it's similarly implausible to get them to be exactly as good as each other.

Comment author: Stuart_Armstrong 08 December 2014 08:57:46AM *  0 points [-]

Thanks for strengthening my case ^_^

I was trying to demonstrate that the argument for total utilitarianism among existing populations was stronger than for potential populations. I could have mentioned this aspect - I vaguely referred to it in "and not only because the second choice seems more underspecified than the first." But I thought that would be more contentious and debatable, and so focused on the clearest distinction I saw.
