As per title. I often talk to people who have views that I think should straightforwardly imply a larger focus on s-risk than they actually give it. In particular, people often seem to endorse something like a rough symmetry between the goodness of good stuff and the badness of bad stuff, sometimes referring to this short post that offers some arguments in that direction. I'm confused by this and wanted to quickly jot down my thoughts - I won't try to make them rigorous, and I make various guesses about what additional assumptions people usually make. I might be wrong about those.

Views that IMO imply putting more weight on s-risk reduction:

  1. Complexity of values: Some people think that the most valuable things possible are probably fairly complex (e.g. a mix of meaning, friendship, happiness, love, child-rearing, beauty, etc.) rather than really simple (e.g. rats on heroin, which is what people usually imagine when hearing "hedonic shockwave"). People also often have different views on what's good. I think people who believe in complexity of values often nonetheless think suffering is fairly simple: extreme pain seems simple and also just extremely bad. (Some people think that the worst suffering is also complex; they are excluded from this argument.) On first pass, it seems very plausible that complex values are much less energy-efficient to realize than suffering. (In fact, people commonly define complexity by computational complexity, which tracks energy requirements fairly directly.) To the extent that this is true, it should increase our concern about the worst futures relative to the best futures, because the worst futures could be much worse than the best futures are good. (A first toy calculation after this list illustrates this.)

    (The same point is made in more detail here.)
     
  2. Moral uncertainty: I think it's fairly rare for people to think the best happiness is much better than the worst suffering is bad. People's credence distributions often have a mode at "they are the same in magnitude" and then a tail towards "the worst suffering is worse". If that is so, you should be marginally more worried about the worst futures relative to the best futures (a second toy calculation after this list illustrates this). The case for this is more robust if you incorporate other people's views into your uncertainty: I think it's extremely rare to have an asymmetric distribution towards thinking the best happiness is better in expectation.[1]

    (Weakly related point here.)
  3. Caring about preference satisfaction: I feel much less strongly about this one because thinking about the preferences of future people is strange and confusing. However, I think if you care strongly about preferences, a reasonable starting point is anti-frustrationism, i.e. caring about unsatisfied preferences but not caring about satisfied preferences of future people. That's because otherwise you might end up thinking, for example, that it's ideal to create lots of people who crave green cubes and give them lots of green cubes. I at least find that outcome a bit bizarre. It also seems asymmetric: Creating people who crave green cubes and not giving them green cubes does seem bad. Again, if this is so, you should marginally weigh futures with lots of dissatisfied people more than futures with lots of satisfied people.
    To be clear, there are many alternative views, possible ways around this, etc. Taking into account the preferences of non-existent people is extremely confusing! But I think this might be an underappreciated problem that people who mostly care about preferences need to find some way around if they don't want to end up weighing futures with dissatisfied people more highly.
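
To make point 1 concrete, here is a first toy calculation. All numbers are invented for illustration, not estimates: the point is only that if complex value is much less energy-efficient to realize than suffering, even a small probability of the worst future can dominate the expected value of the long-run future.

```python
# Toy model for point 1 (all numbers invented for illustration).
E = 1.0                      # fixed energy budget (arbitrary units)
value_per_joule = 0.01       # complex value assumed energy-inefficient
disvalue_per_joule = 1.0     # simple suffering assumed energy-efficient

best_future = E * value_per_joule        # +0.01
worst_future = -E * disvalue_per_joule   # -1.00

# Even a much smaller probability of the worst future dominates in expectation:
p_best, p_worst = 0.50, 0.01
print(p_best * best_future + p_worst * worst_future)  # -0.005 < 0
```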
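
And a second toy calculation for the moral-uncertainty argument in point 2, with credences that are likewise made up for illustration: a distribution with its mode at "equal in magnitude" plus any tail towards "the worst suffering is worse" yields an expected ratio above 1.

```python
# Toy credence distribution for point 2 (weights invented for illustration).
# r = how many times worse the worst suffering is than the best happiness is good.
credences = {
    1.0: 0.60,   # mode: "they are the same in magnitude"
    2.0: 0.25,   # some credence that suffering is somewhat worse
    10.0: 0.15,  # tail: "the worst suffering is much worse"
}
expected_r = sum(r * p for r, p in credences.items())
print(expected_r)  # 2.6: the worst futures get ~2.6x the weight of the best
```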

I think point 1 is the most important because many people have intuitions around complexity of value. None of these points implies that you should focus on s-risk; they are, however, arguments for weighing s-risk more highly. I wanted to put them out there because people often bring up "symmetry of value and disvalue" as a reason they don't focus on s-risk.

  1. ^

    There's also moral uncertainty 2.0: people tend to disagree more about what's most valuable than about what's bad. For example, some people think only happiness matters, while others think justice, diversity, etc. also matter; roughly everybody thinks suffering is bad. You might think a reasonable way to aggregate is to focus more on reducing suffering, which everyone agrees on, at least whenever the most efficient way of increasing happiness trades off against justice or diversity. (A toy aggregation follows.)
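
    As a toy version of this aggregation (the views and payoffs are invented for illustration): if the most efficient happiness intervention trades off against justice or diversity under some views while suffering reduction scores positively under all of them, even a simple average across views favors suffering reduction.

    ```python
    # Toy aggregation (views and payoffs invented for illustration).
    # Each moral view scores two interventions: an efficient happiness boost
    # (which tramples justice/diversity under some views) and suffering reduction.
    views = {                        # (happiness_boost, suffering_reduction)
        "pure hedonist":     (10, 8),
        "justice matters":   (-2, 8),
        "diversity matters": (-1, 8),
    }
    n = len(views)
    print(sum(s[0] for s in views.values()) / n)  # happiness boost: ~2.33
    print(sum(s[1] for s in views.values()) / n)  # suffering reduction: 8.0
    ```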

[-] Ann

Reminded me of "All happy families are alike; each unhappy family is unhappy in its own way."

I'm unsure it's true that "roughly everyone thinks suffering is bad". Maybe in the simplified/truism form, but if you look at, for example, Christian theology, there's proposed utility to suffering in the ultimate effect it has on you; i.e., the most desirable states of yourself cannot be reached without suffering along the path.

[-] Signer

I think it’s extremely rare to have an asymmetric distribution towards thinking the best happiness is better in expectation.

In a survey from SSC, I counted ~10% of answers that preferred a <50% probability of heaven (otherwise hell) to the certainty of oblivion. 10% is not "extremely rare".

Whoa, I didn't know about this survey, pretty cool! Interesting results overall.

It's notable that 6% of people also report they'd prefer absolute certainty of hell over not existing, which seems totally insane from the point of view of my preferences. The 11% who prefer a trillion miserable sentient beings over a million happy sentient beings also seem wild to me. (Those two questions are also more correlated with each other than with the other questions.)

[-] Dagon

Point 2 lets me out. I suspect complex positives hold a larger mass of value than simple positives or negatives do, and I think a lot of complex negatives are missing from the universe.

I think I'm at least close to agreeing, but even if it's like this now, that doesn't mean a complex-positive-value optimizer can produce more value mass than a simple-negative-value optimizer can.

Why do you think/suspect that?

[-] Dagon

Mostly intuition, and introspection on the complex positives I've experienced (joy, satisfaction, optimism) being far more durable than the simple positives and negatives. Even the "complex" negatives of depression and worry tend not to make overall life negative in value.

Thanks for answering. I would personally expect this intuition and introspection to be sensitive to contingent factors like the range of experiences you've had. Would you agree?

Personally, my view leans more in the other direction, although it's possible I'm missing something and misunderstanding the complexity variable.

If my life experience leads me to the view that 'suffering is worse than wellbeing is good', and your life experiences lead you towards the opposite view, should those two data points be given equal weight? I personally would give more weight to accounts of the badness of suffering, because I see a fundamental asymmetry there - but would you say that's a product of bias from my set of experiences?

If I were to be offered 300 years of overwhelmingly positive complex life in exchange for another ten years of severe anhedonic depression, I would not accept that offer. It wouldn't even be a difficult choice.

Assuming you would accept that offer for yourself, would you accept that offer on behalf of someone else?

[-] Dagon

I mean, at its root, value is personal and incomparable. There's no basis for claiming any given valuation applies outside the evaluator's belief/preference set. As embedded agents, our views are contingent on our experiences, and there is no single truth to this question. That said, my beliefs resonate much more strongly with me than your description does, so if you insist on having a single unified view, I'm going to weight mine higher.

That said, there is some (weak) evidence that those who claim suffering is more bad than joy/hope is good are confused about their weightings, as applied to themselves.  The rate of suicide is really quite low.  You ARE being offered the choice between an unknown length of continued experiences, and cessation of such.  

The rate of suicide is really quite low. You ARE being offered the choice between an unknown length of continued experiences, and cessation of such.

I think the expected value of the rest of my life is positive (I am currently pretty happy), especially considering impacts external to my own consciousness. If that stops being the case, I have the option.

There are also strong evolutionary reasons to expect suicide rates not to properly reflect the balance of qualia.

As embedded agents, our views are contingent on our experiences, and there is no single truth to this question.

It's hard to know exactly what this is implying. Sure, it's based on personal experience that's difficult to extrapolate and aggregate, etc. But I think it's a very important question - potentially the most important question - and worth some serious consideration.

People are constantly making decisions based on their marginal valuations of suffering and wellbeing, and the respective depths and heights of each end of the spectrum. These decisions can and do have massive ramifications.

So that I can try to understand your view better: would you choose to spend one year in the worst possible hell if it meant you got to spend the next year in the greatest possible heaven?

Given my understanding of your expressed views, you would accept this offer. If I'm wrong about that, knowing that would help with my understanding of the topic. If you think it's an incoherent question, that would also improve my understanding.

Feel free to disengage, I just find limited opportunities to discuss this. If anyone else has anything to contribute I'd be happy to hear it.

[-] Dagon

There are also strong evolutionary reasons to expect suicide rates not to properly reflect the balance of qualia.

Sure, much as there are strong cultural/signaling reasons to expect people to overestimate pain and underestimate pleasure values. I mean, none of this is in the territory; it's all filtered through brains, in different and unmeasurable ways.

Sure, it's based on personal experience that's difficult to extrapolate and aggregate, etc.

Not difficult.  Impossible and meaningless to extrapolate or aggregate.  I suspect this is the crux of my disagreement with most utilitarian-like frameworks.

Would you spend a year in the worst possible hell in exchange for a year in the greatest possible heaven?

[-] Dagon

I think so. I can’t really extrapolate such extremes, but it sounds preferable to two years of undistinguished existence.

I’m more confident that I’d spend a year as a bottom-5% happy human in order to get a year in the top-5%. I think, but it’s difficult to really predict, that I’d prefer the variance over two years at the median.

None of these are actual choices, of course. So I’m skeptical of using these guesses for anything important.

Interesting. It is an abstract hypothetical, but I do think it's useful, and it reveals something about how far apart we are in our intuitions/priors.

I wouldn't choose to live a year in the worst possible hell in exchange for 1000 years in the greatest possible heaven. I don't think I would even take the deal in exchange for an infinite amount of time in the greatest possible heaven.

I would conclude that the experience of certain kinds of suffering reveals something significant about the nature of consciousness that can't be easily inferred, if it can be inferred at all.

I’m more confident that I’d spend a year as a bottom-5% happy human in order to get a year in the top-5%

I would guess that the difference between .001 percentile happy and 5th percentile happy is larger than the difference between the 5th percentile and 100th percentile. So in that sense it's difficult for me to consider that question.

None of these are actual choices, of course. So I’m skeptical of using these guesses for anything important

I think even if they're abstract, semi-coherent questions, they're very revealing, and they're very relevant to the prioritization of s-risks, the allocation of resources, and issues such as animal welfare.

It makes it easier for me to understand how otherwise reasonable-seeming people can display a kind of indifference to the state of animal agriculture. If someone isn't aware of the extent of possible suffering, I can see why they might not view the issue with the same urgency.

[-] Dagon

it reveals something about how far apart we are in our intuitions/priors.

Indeed! And it says something about EITHER the unreliability of intuitions beyond run-of-the-mill situations, or about the insane variance in utility functions across people (and likely time). Or both. It really makes for an uncertain basis for any sort of reasoning or decision-making.

I would guess that the difference between .001 percentile happy and 5th percentile happy is larger than the difference between the 5th percentile and 100th percentile.

Wait, what?  My guess is exactly the opposite - something like a logistic curve (X being the valence of experience, Y being the valuation), so there's a huge difference toward the middle or when changing sign, but only minor changes in value toward the tails. 

Once again, intuitions are a sketchy thing. In fact, I should acknowledge that I'm well beyond intuition here - I just don't HAVE intuitions at this level of abstraction. This is my attempt to reconcile my very sparse and untrustworthy intuition samples with some intellectual preference for regularity. My intuitions are compatible with my belief in declining marginal value, but they don't really specify the rest of the shape. It could easily be closer to a pure logarithm - X axis from 0 (the absolute worst possible experience) to infinity (progressively better experiences with no upper limit), with simple declining marginal value.
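
For what it's worth, the two shapes pull in opposite directions at the bottom tail. A minimal sketch (my own toy numbers, nothing anyone in this thread endorsed): under a logistic curve, the gap between "very bad" and "extremely bad" valence is tiny, while under the pure-logarithm shape that same gap dominates everything else.

```python
# Toy comparison of the two candidate curves (parameters are arbitrary).
import math

def logistic_value(valence):
    """Logistic: steep near neutral valence (0), nearly flat in both tails."""
    return 2.0 / (1.0 + math.exp(-valence)) - 1.0  # output in (-1, 1)

def log_value(x):
    """Pure logarithm: x in (0, inf), with x -> 0 the worst possible experience;
    declining marginal value as x grows, unboundedly negative near 0."""
    return math.log(x)

# Logistic: 'very bad' (-3) vs 'extremely bad' (-10) barely differ...
print(logistic_value(-3) - logistic_value(-10))  # ~0.09
# ...compared to 'neutral' (0) vs 'very bad' (-3):
print(logistic_value(0) - logistic_value(-3))    # ~0.91

# Log: mapping the same pair to small x, the bottom-tail gap dominates instead:
print(log_value(0.05) - log_value(0.0001))       # ~6.2
print(log_value(1.0) - log_value(0.05))          # ~3.0
```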

And it says something about EITHER the unreliability of intuitions beyond run-of-the-mill situations, or about the insane variance in utility functions across people (and likely time)

I don't think it's really all that complicated; I suspect that you simply haven't experienced an extent of negative valence sufficient to update you towards understanding how bad suffering can get.

It would be like trying to gauge the mass of value of positive smells against the mass of value of negative smells when you've never smelled anything worse than a fart. If you were trying to estimate what it would be like in a small room full of decaying dead animals and ammonia, or how long you'd willingly stay in that room, your intuitions would completely fail you.

but only minor changes in value toward the tails.

I have experienced qualia that were just slightly net negative - feeling like non-existence would be preferable, all else equal. Then I've experienced states of qualia that are immensely worse than that. The distance between those two states is certainly far greater than the distance between neutral and extreme pleasure/fulfillment/euphoria, etc. Suffering can just keep getting worse and worse, far beyond the point at which all you can desire is to cease existing.

[-] Dagon

Yeah, I think I'm bowing out at this point. I don't disagree that my suffering has been pretty minor in the scheme of things, but that's kind of my whole point: everyone's range of experiences is unique and incommunicable. Or at least mine is.

[-] Shiroe

It's not easy to see the argument for treating your values as incomparable with the values of other people while seeing your future self's values as identical to your own - unless you've adopted some idea of a personal soul.