Comment author: shminux 28 July 2013 09:42:01PM 1 point

Do you assign any negative weight to a suffering chicken? For example, is it OK to simply rip off a leg from a live one and make a dinner, while the injured bird is writhing on the ground slowly bleeding to death?

Comment author: Adriano_Mannino 28 July 2013 10:37:36PM 8 points

Good question, shminux. Another way of putting it: If cows and chickens don't count, why have any animal protection laws at all? Their guiding principle is usually the avoidance of unnecessary animal suffering. And if we agree that eating animals (1) causes animal suffering and (2) is unnecessary, because we can have animal-free foods that are equally tasty, then the guiding principle of the agreed-upon animal protection laws already implies that we should stop farming chickens and cows.

Note also that the Three Rs – which guide animal testing in many countries – reaffirm the above principle. Many believe it should be illegal to cause any animal suffering that is unnecessary, i.e. where there is an acceptable alternative for the purpose. (And if there is not, we are under an obligation to try to create one.) If we take seriously what almost everybody accepts when it comes to animal testing, we should stop farming animals.

It seems that the arguments for according non-human animals a very important place in our practical ethics can only be blocked by claiming that they/their suffering matters zero. If it matters just a little, the aggregate animal suffering is still likely to matter a lot. And even if we are inclined to believe that it matters zero, we should retain some non-negligible uncertainty, at least if our view (like Jeff's) is based on the claim that some not-really-understood (!) combination of suffering with self-awareness, intelligence or other preferences is what makes for moral badness. If we are wrong on this one, the consequences will be catastrophic. We should take this into account.

Comment author: lukeprog 19 July 2013 01:51:40AM 1 point

I confess I'm not that motivated to tweak the sentence even further, since it seems like a small semantic point, I don't understand the advantages to your phrasing, and I've provided links to more thorough discussions of these issues, for example Beckstead's dissertation. Maybe it would help if you explained what kind of reasoning you are using to identify which claims are "doing the crucial work"? Or we could just let it be.

Comment author: Adriano_Mannino 19 July 2013 04:32:31AM 7 points

Yeah, I've read Nick's thesis, and I think the moral urgency of filling the universe with people is a more important basis for his conclusion than the rejection of time preference is. The sentence suggests that the rejection of time preference is the most important factor.

If I get him right, Nick agrees that the time issue is much less important than you suggested in your recent interview.

Sorry to insist! :) But when you disagree with Bostrom's and Beckstead's conclusions, people immediately assume that you must be valuing present people more than future ones. And I'm constantly like: "No! The crucial issue is whether the non-existence of people (where there could be some) poses a moral problem, i.e. whether it's morally urgent to fill the universe with people. I doubt it."

Comment author: lukeprog 17 July 2013 05:51:02AM 3 points

Right, the first clause is there as a necessary but not sufficient part of the standard reason for focusing on the far future, and the sentence works now that I've removed the "therefore."

The reason I'd rather not phrase things as "morally urgent to bring new people into existence" is that this phrasing suggests presentist assumptions. I'd rather use a sentence with non-presentist assumptions, since presentism is probably rejected by a majority of physicists by now, and is also rejected by me. (It's also rejected by the majority of EAs with whom I've discussed the issue, but that's not actually noteworthy because it's such a biased sample of EAs.)

Comment author: Adriano_Mannino 19 July 2013 01:08:48AM 9 points

Is it the "bringing into existence" and the "new" that suggest presentism to you? (I also reject presentism, btw, but I don't think it's of much relevance to the issue at hand.) Even without the "therefore", it seems to me that the sentence suggests that the rejection of time preference is what does the crucial work on the way to Bostrom's and Beckstead's conclusions, when the crucial work is rather done by the claim that it's "morally urgent/required to cause the existence of people (with lives worth living) who wouldn't otherwise have existed", which is what my alternative sentence was meant to convey.

Comment author: lukeprog 13 July 2013 03:17:00AM 1 point

Okay, I removed the "therefore."

Comment author: Adriano_Mannino 15 July 2013 09:16:32PM 7 points

OK, but why would the sentence start with "Many EAs value future people roughly as much as currently-living people" if there wasn't an implied inferential connection to nearly all value being found in astronomical far future populations? The "therefore" is still implicitly present.

It's not entirely without justification, though. It's true that the rejection of a (very heavy) presentist time preference/bias is necessary for Bostrom's and Beckstead's conclusions. So there's weak justification for your "therefore": The rejection of presentist time preference makes the conclusions more likely.

But it's by no means sufficient for them. Bostrom and Beckstead need the further claim that bringing new people into existence is morally important and urgent.

This seems to be the crucial point. So I'd rather go for something like: "Many (Some?) EAs value/think it morally urgent to bring new people (with lives worth living) into existence, and therefore..."

The moral urgency of preventing miserable lives (or life-moments) is less controversial. People like Brian Tomasik place much more (or exclusive) importance on the prevention of lives not worth living, i.e. on ensuring the well-being of everyone who will exist rather than on making as many people exist as possible.

The issue is not whether (far) future lives count as much as lives closer to the present. One can agree that future lives count equally, and also agree that far-future considerations dominate the moral calculation (empirical claims enter the picture here). But one may disagree on "Omelas and Space Colonization", i.e. on how many lives worth living are needed to "outweigh" or "compensate for" miserable ones (which our future existence will inevitably also produce, probably in astronomical numbers, assuming astronomical population expansion). So it's possible to agree that future lives count equally and that far-future considerations dominate, but still to disagree on the importance of x-risk reduction or on more particular things such as space colonization.
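The disagreement in that last step is purely arithmetical, and a toy model makes it concrete. As an illustrative sketch (the numbers and the `approves` function are my own assumptions, not anything from the thread), two evaluators can both reject time preference, both agree that far-future numbers dominate, and still reach opposite verdicts solely because of their "Omelas ratio":

```python
# Toy model: both evaluators count future lives equally (no time
# discounting); they differ only in how many lives worth living they
# require to outweigh one miserable life. All numbers are illustrative.

def approves(good_lives, miserable_lives, omelas_ratio):
    """Approve an outcome iff good lives outnumber miserable ones
    by at least `omelas_ratio` to one."""
    return good_lives >= omelas_ratio * miserable_lives

# A hypothetical astronomical-population outcome.
good, miserable = 10**15, 10**10

classical_verdict = approves(good, miserable, omelas_ratio=10)
suffering_focused_verdict = approves(good, miserable, omelas_ratio=10**7)

print(classical_verdict)          # True: a 10:1 trade ratio is easily met
print(suffering_focused_verdict)  # False: a 10^7:1 ratio is not
```

Same population facts, same zero discount rate; only the assumed trade ratio differs, which is exactly why agreement on "future lives count equally" settles less than it might seem to.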

Comment author: Adriano_Mannino 13 July 2013 03:10:21AM 10 points

Thanks, Luke, great overview! Just one thought:

Many EAs value future people roughly as much as currently-living people, and therefore think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future

This suggests that alternative views are necessarily based on ethical time preference (and time preference seems irrational indeed). But that's incorrect. It's possible to care about the well-being of everyone equally (no matter their spatio-temporal coordinates) without wanting to fill the universe with happy people. I think there is something true and important about the slogan "Make people happy, not happy people", although explaining that something is non-trivial.

Comment author: CarlShulman 07 April 2013 07:37:23AM 4 points

I must say I am very impressed by this kind of negative-utilitarian reasoning, as it has captured a concern of mine that I once naively assumed to be unquantifiable by utilitarian ethics

Do you mean that given certain comparisons of outcomes A and B, you agree with its ranking? Or that it captures your reasons? The latter seems dubious, unless you mean you buy negative utilitarianism wholesale.

If you don't care about anything good, then you don't have to worry about accepting smaller bads to achieve larger goods, but that goes far beyond "throwing out the baby with the bathwater." Toby Ord gives some of the usual counterexamples.

If you're concerned about deontological tradeoffs as in those stories, consider: a negative utilitarian of that stripe would eagerly torture any finite number of people if that would kill off a sufficiently large population that suffers even occasional minor pains in lives that are overall quite good.

Comment author: Adriano_Mannino 07 April 2013 02:21:07PM 7 points

The "occasional minor pains" example is problematic because it brings in the question of aggregation too – and the respective problems are not specific to negative utilitarianism (NU). If NUs have to claim that sufficiently many minor pains are worse than torture, then that holds for classical utilitarians (CUs) too. So the crucial issue is whether the non-existence of pleasure poses any problem or not, and whether the idea of pleasure "outweighing" pain that occurs elsewhere in space-time makes sense or not.

It's clear what's problematic about a decision to turn rocks into suffering - it's a problem for the resulting consciousness-moments. On the other hand, it's not clear at all what should be problematic about a decision not to turn rocks into happiness. In fact, if you do away with the idea that non-existence poses a problem, then the NU implications are perfectly intuitive.

Regarding Ord's intuitive counterexamples: It's unclear what their epistemic value is; and if they have any, CU seems to be subject to counterexamples that many would deem even worse. How many people would go along with the claim that perfect altruists would torture any finite number of people if that would turn a sufficient number of rocks into "muzak and potatoes" (cf. Ord) consciousness-seconds? As for "making everyone worse off": Take a finite population of people experiencing superpleasure only; now torture them all; add any finite number of tortured people; and add a sufficiently large number of people with lives barely worth living (i.e. one more pinprick and non-existence would be better). Done. And this makes you a good altruist according to CU.
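The "making everyone worse off" construction is just arithmetic over summed welfare, so it can be spelled out directly. A minimal sketch (the per-person welfare values and population sizes are illustrative assumptions of mine): under classical-utilitarian aggregation, the second world totals higher even though it contains only tortured lives and lives one pinprick away from not being worth living:

```python
# Classical-utilitarian (CU) aggregation: rank worlds by summed welfare.
# Each world is a list of (welfare_per_person, head_count) pairs;
# all numbers are illustrative.

def total_welfare(groups):
    return sum(welfare * count for welfare, count in groups)

# World A: 1,000 people experiencing superpleasure only.
world_a = [(100.0, 1000)]

# World B: torture all of them, add 10,000 more tortured people, then add
# enough lives "barely worth living" (tiny positive welfare each).
world_b = [(-100.0, 1000), (-100.0, 10_000), (0.01, 130_000_000)]

# CU prefers World B: its sum is higher, so moving from A to B counts as
# an improvement despite making every original person worse off and
# adding thousands of tortured lives.
print(total_welfare(world_a))
print(total_welfare(world_b))
assert total_welfare(world_b) > total_welfare(world_a)
```

The sums come out to roughly 100,000 versus 200,000, which is the whole force of the construction: the ranking is driven entirely by the size of the barely-positive group.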

Comment author: Benito 28 January 2013 06:03:09PM 1 point

What if what they really want, deep down, is a sense of importance or social interaction or whatnot?

This sounds a bit like religious people saying "But what if it turns out that there is no morality? That would be bad!" What part of you thinks that this is bad? Because that is what CEV is extrapolating. CEV takes the deepest and most important values we have and figures out what to do next. In principle, you couldn't care about anything else.

If human values wanted to self-modify, then CEV would recognise this. CEV wants to do what we want most, and this we call 'right'.

The only non-arbitrary "we" is the community of all minds/consciousnesses.

This is what you value, what you chose. Don't lose sight of invisible frameworks. If we're including all decision procedures, then why not computers too? This is part of the human intuition of 'fairness' and 'equality' too, not the hamster's.

Comment author: Adriano_Mannino 30 January 2013 12:10:31PM 1 point

It would indeed be bad (objectively, for the world) if, deep down, we did not really care about the well-being of all sentience. By definition, there will then be some sentience that ends up having a worse life than it could have had. This is an objective matter.

Yes, it is what I value, but not just that. The thing is that if you're a non-utilitarian, your values don't correspond to the value(s) there is/are in the world. If we're working for CEV, we seem to be engaged in an attempt to make our values correspond to the value(s) in the world. If so, we're probably going wrong with CEV.

Comment author: fubarobfusco 29 January 2013 03:10:49PM -1 points

Of course we can morally compare types A and B, just as we can morally compare an AI whose goal is to turn the world into paperclips and one whose goal is to make people happy.

However, rather than "objectively better", we could be more clear by saying "more in line with our morals" or some such. It's not as if our morals came from nowhere, after all.

See also: "The Bedrock of Morality: Arbitrary?"

Comment author: Adriano_Mannino 30 January 2013 11:18:07AM 0 points

I don't think we'd be more clear by saying this, I think we'd be (at least partially) wrong.

Let's compare two worlds: World1 contains a population of pigs that are all constantly superhappy. World2 contains a population of pigs that are all constantly supermiserable. Clearly, World1 is objectively better than World2. If some morals deny this, they are wrong.

Things can only be good or bad for conscious beings (not for rocks, e.g.). So insofar as the world takes the form of consciousness that gets what's good for it, it's objectively the case that something good has occurred in/for the world.

Comment author: see 28 January 2013 05:57:58PM 1 point

Constructing an ethics that demands that a chicken act as a moral agent is obviously nonsense; chickens can't and won't act that way. Similarly, constructing an ethics that demands humans value chickens as much as they value their own children is nonsense; humans can't and won't act that way. If you're constructing an ethics for humans to follow, you have to start by figuring out humans.

It's not until after you've figured out what humans are that you can determine how much weight the interests of chickens should get in how humans act. And how much humans should weigh the value of chickens is by necessity determined by what humans are.

Comment author: Adriano_Mannino 29 January 2013 11:00:54AM 4 points

Well, if humans can't and won't act that way, too bad for them! We should not model ethics after the inclinations of a particular type of agent; instead, we should try to modify all agents according to ethics.

If we did model ethics after particular types of agent, here's what would result: Suppose it turns out that type A agents are sadistic racists. So what they should do is put sadistic racism into practice. Type B agents, on the other hand, are compassionate anti-racists. So what they should do is diametrically opposed to what type A agents should do. And we can't morally compare types A and B.

But type B is obviously objectively better, and objectively less of a jerk. (Whether type A agents can be rationally motivated (or modified so as) to become more B-like is a different question.)

Comment author: aelephant 27 January 2013 11:59:21PM 1 point

You could reduce human suffering to 0 by reducing the number of humans to 0, so there's got to be another value greater than reducing suffering.

It seems plausible to me that suffering could serve some useful purpose & eliminating it (or seeking to eliminate it) might have horrific consequences.

Comment author: Adriano_Mannino 28 January 2013 01:25:05PM 5 points

Why are you so certain that a population of 0 would be a problem? In fact, there'd be no one for whom it would (could!) be a problem; no one whose values could rate that state of affairs as bad. Would it be a problem if no form of consciousness had ever come into existence? Why would that be problematic?
