Multiheaded comments on [Link] Values Spreading is Often More Important than Extinction Risk - Less Wrong
Yeah, I linked to Tomasik's earlier musings on this a while back in a comment.
I must say I am very impressed by this kind of negative-utilitarian reasoning, as it has captured a concern of mine that I once naively assumed to be unquantifiable by utilitarian ethics. There might be many plausible future worlds where scenarios like "Omelas" or "SCP-231" would be the norm, possibly with (trans)humanity acquiescing to them or perpetuating them for a rational reason.
What's worse, such futures might not even be acknowledged as disastrous/Unfriendly by the people contemplating them. Consider the possibility of transhuman values simply diverging so widely that some groups in a would-be "libertarian utopia" would perpetrate things (on their own unwilling members or on other sentients) which the rest of us would find abhorrent - yet the only way to influence such groups could be aggression and total non-cooperation, which might not be viable for the objecting factions due to game-theoretic reasons (avoiding a "cascade of defection"), ideological motives or an insufficient capability to project military force. See Three Worlds Collide for some ways this might plausibly play out.
Brian is, so far, the only utilitarian thinker I've read who even mentions Omelas as a potential grave problem, along with more standard transhumanist concerns such as em slavery or "suffering subroutines". I agree with the implications that he draws. I would further add that an excessive focus on reducing X-risk (and, indeed, on ensuring security and safety of all kinds) could have very scary present-day political implications, not just future ones.
(Which is why I am so worried and outspoken about the growth of a certain socio-political ideology among transhumanists and tech geeks; X-risk even features in some of the arguments for it that I've read - although much of it can be safely dismissed as self-serving fearmongering and incoherent apocalyptic fantasies.)
Do you mean that given certain comparisons of outcomes A and B, you agree with its ranking? Or that it captures your reasons? The latter seems dubious, unless you mean you buy negative utilitarianism wholesale.
If you don't care about anything good, then you don't have to worry about accepting smaller bads to achieve larger goods, but that goes far beyond "throwing out the baby with the bathwater." Toby Ord gives some of the usual counterexamples.
If you're concerned about deontological tradeoffs as in those stories, note that a negative utilitarian of that stripe would eagerly torture any finite number of people if that would kill a sufficiently large population that suffers even occasional minor pains in lives that are overall quite good.
The "occasional minor pains" example is problematic because it also brings in the question of aggregation - and the respective problems are not specific to NU. If NUs have to claim that sufficiently many minor pains are worse than torture, then that holds for CUs too. So the crucial issue is whether the non-existence of pleasure poses any problem, and whether the idea of pleasure "outweighing" pain that occurs elsewhere in space-time makes sense.
It's clear what's problematic about a decision to turn rocks into suffering - it's a problem for the resulting consciousness-moments. On the other hand, it's not clear at all what should be problematic about a decision not to turn rocks into happiness. In fact, if you do away with the idea that non-existence poses a problem, then the NU implications are perfectly intuitive.
Regarding Ord's intuitive counterexamples: it's unclear what their epistemic value is; and if they have any, CU seems subject to counterexamples that many would deem even worse. How many people would go along with the claim that a perfect altruist would torture any finite number of people if that would turn a sufficient number of rocks into "muzak and potatoes" (cf. Ord) consciousness-seconds? As for "making everyone worse off": take a finite population of people experiencing superpleasure only; now torture them all; add any finite number of tortured people; and add a sufficiently large number of people with lives barely worth living (i.e. one more pinprick and non-existence would be better). Done - and this makes you a good altruist according to CU.
This seems to presuppose that "good" is synonymous with "pleasurable conscious states". Under broader (and less question-begging) definitions of "good", e.g. "whatever states of the world I want to bring about" or "whatever is in accordance with other-regarding reasons for action", negative utilitarians would simply deny that pleasurable consciousness-states fulfill the criterion (or that they fulfill it better than non-existence or hedonically neutral flow-states).
Ord concludes that negative utilitarianism leads to outcomes where "everyone is worse off", but this of course also presupposes an axiology that negative utilitarians would reject. Likewise, it wouldn't be a fair criticism of classical utilitarianism to say that the very repugnant conclusion leaves everyone worse off (even though from a negative or prior-existence kind of perspective it seems like it), because at least according to the classical utilitarians themselves, existing slightly above "worth living" is judged better than non-existence.