
Comment author: Wei_Dai 22 June 2017 06:02:17PM *  0 points [-]

If we try to answer the question now, it seems very likely we'll get the answer wrong (given my state of uncertainty about the inputs that go into the question). I want to keep civilization going until we know better how to answer these types of questions. For example if we succeed in building a correctly designed/implemented Singleton FAI, it ought to be able to consider this question at leisure, and if it becomes clear that the existence of mature suffering-hating civilizations actually causes more suffering to be created, then it can decide to not make us into a mature suffering-hating civilization, or take whatever other action is appropriate.

Are you worried that by the time such an FAI (or whatever will control our civilization) figures out the answer, it will be too late? (Why? If we can decide that x-risk reduction is bad, then so can it. If it's too late to alter or end civilization at that point, why isn't it already too late for us?) Or are you worried more that the question won't be answered correctly by whatever will control our civilization?

Comment author: Lukas_Gloor 22 June 2017 07:26:21PM *  0 points [-]

Or are you worried more that the question won't be answered correctly by whatever will control our civilization?

Perhaps this, in case it turns out to be highly important but difficult to get certain ingredients – e.g. priors or decision theory – exactly right. (But I have no idea, it's also plausible that suboptimal designs could patch themselves well, get rescued somehow, or just have their goals changed without much fuss.)

Comment author: cousin_it 21 June 2017 07:07:13PM *  1 point [-]

Yeah, I also had the idea about utility being conjunctive and mentioned it in a deleted reply to Wei, but then realized that Eliezer's version (fragility of value) already exists and is better argued.

On the other hand, maybe the worst hellscapes can be prevented in one go, if we "just" solve the problem of consciousness and tell the AI what suffering means. We don't need all of human value for that. Hellscapes without suffering can also be pretty bad in terms of human value, but not quite as bad, I think. Of course solving consciousness is still a very tall order, but it might be easier than solving all philosophy that's required for FAI, and it can lead to other shortcuts like in my recent post (not that I'd propose them seriously).

Comment author: Lukas_Gloor 22 June 2017 08:43:44AM *  1 point [-]

Some people at MIRI might be thinking about this under the heading of 'nonperson predicates.' (Eliezer's view on which computations matter morally is different from the one endorsed by Brian, though.) And maybe it's important not to limit FAI options too much by preventing mindcrime at all costs – if there are benefits against other very bad failure modes (or – cooperatively – just increased controllability for the people who care a lot about utopia-type outcomes), maybe some mindcrime in the early stages to ensure goal alignment would be the lesser evil.

Comment author: casebash 22 June 2017 06:57:28AM *  2 points [-]

I think the reason why cousin_it's comment is upvoted so much is that a lot of people (including me) weren't really aware of S-risks or how bad they could be. It's one thing to just make a throwaway line that S-risks could be worse, but it's another thing entirely to put together a convincing argument.

Similar ideas have appeared in other articles, but those framed the point in terms of energy efficiency while relying on unfamiliar terms such as computronium or the two-envelopes problem, which makes it much less clear. I don't think I saw the links to either of those articles before, but if I had, I probably wouldn't have read them.

I also think the title helps. S-risks is a catchy name, especially if you already know about x-risks. I know that this term has been used before, but it wasn't used in the title. Further, while it is quite a good article, you can read the summary, introduction and conclusion without encountering the idea that the author believes s-risks are much greater than x-risks, as opposed to being just yet another risk to worry about.

I think there's definitely an important lesson to be drawn here. I wonder how many other articles have gotten close to an important truth, but just failed to hit it out of the park for some reason or another.

Comment author: Lukas_Gloor 22 June 2017 08:26:02AM *  1 point [-]

Interesting!

Further, while it is quite a good article, you can read the summary, introduction and conclusion without encountering the idea that the author believes s-risks are much greater than x-risks, as opposed to being just yet another risk to worry about.

I'm only confident about endorsing this conclusion conditional on having values where reducing suffering matters a great deal more than promoting happiness. So we wrote the "Reducing risks of astronomical suffering" article in a deliberately 'balanced' way, pointing out the different perspectives. This is why it didn't come away making any very strong claims. I don't find the energy-efficiency point convincing at all, but for those who do, x-risks are likely (though not with very high confidence) still more important, mainly because more futures will be optimized for good outcomes than for bad outcomes, and this is where most of the value is likely to come from. The "pit" around the FAI-peak is in expectation extremely bad compared to anything that exists currently, but most of it is just accidental suffering that is still comparatively unoptimized. So in the end, whether s-risks or x-risks are more important to work on at the margin depends on how suffering-focused or not someone's values are.

Having said that, I totally agree that more people should be worried about s-risks, and it's concerning that the article (and the one on suffering-focused AI safety) didn't manage to convey this point well.

Comment author: Lumifer 21 June 2017 02:10:00AM 3 points [-]

Facepalm was a severe understatement; this quote is a direct ticket to the loony bin. I recommend poking your head out of the bubble once in a while -- there's a whole world out there. For example, some horrible terrible no-good people -- like me -- consider factory farming to be an efficient way of producing a lot of food at reasonable cost.

This sentence reads approximately as "Literal genocide (e.g. Rwanda) is roughly as severe as using a masculine pronoun with respect to a nonspecific person, but with an even larger scope".

The steeliest steelman that I can come up with is that you're utterly out of touch with the Normies.

Comment author: Lukas_Gloor 21 June 2017 03:38:59AM *  4 points [-]

I sympathize with your feeling of alienation at the comment, and thanks for offering this perspective, which seems outlandish to me. I don't think I agree with you re who the 'normies' are, but I suspect that this may not be a fruitful thing to even argue about.

Side note: I'm reminded of the discussion here. (It seems tricky to find a good way to point out that other people are presenting their normative views in a way that signals an unfair consensus, without getting into (or being accused of) identity politics, throwing around words like "loony bin," or fighting over who the 'normies' are.)

Comment author: Kaj_Sotala 20 June 2017 08:53:09PM 2 points [-]

especially given that suffering-focused ethics seems to somehow be connected with distrust of philosophical deliberation

Can you elaborate on what you mean by this? People like Brian or others at FRI don't seem particularly averse to philosophical deliberation to me...

This also seems like an attractive compromise more broadly: we all spend a bit of time thinking about s-risk reduction and taking the low-hanging fruit, and suffering-focused EAs do less stuff that tends to lead to the destruction of the world.

I support this compromise and agree not to destroy the world. :-)

Comment author: Lukas_Gloor 21 June 2017 03:14:49AM *  3 points [-]

Those of us who sympathize with suffering-focused ethics have an incentive to encourage others to think about their values now, at least in crude enough terms to take a stance on prioritizing the prevention of s-risks vs. making sure we get to a position where everyone can safely deliberate their values further and then everything gets fulfilled. Conversely, if one (normatively!) thinks the downsides of bad futures are unlikely to be much worse than the upsides of good futures, then one is incentivized to promote caution about taking confident stances on anything population-ethics-related, and instead to value deeper philosophical reflection. The latter also has the upside of being good from a cooperation point of view: everyone can work on the same priority (building safe AI that helps with philosophical reflection) regardless of one's inklings about how personal value extrapolation is likely to turn out.

(The situation becomes more interesting/complicated for suffering-focused altruists once we add considerations of multiverse-wide compromise via coordinated decision-making, which, in extreme versions at least, would call for being "updateless" about the direction of one's own values.)

Comment author: fubarobfusco 20 June 2017 11:53:13PM 2 points [-]

The section presumes that the audience agrees wrt veganism. To an audience who isn't on board with EA veganism, that line comes across as the "arson, murder, and jaywalking" trope.

Comment author: Lukas_Gloor 21 June 2017 02:16:04AM 1 point [-]

A lot of people who disagree with veganism agree that factory farming is terrible. Like, more than 50% of the population I'd say.

Comment author: Lukas_Gloor 09 June 2017 01:33:52PM *  1 point [-]

But what happens when the low-intensity conversation and the brainwashing are the same thing?

That's definitely bad in cases where people explicitly care about goal preservation. But only self-proclaimed consequentialists do.

The other cases are more fuzzy. Memeplexes like rationality, EA/utilitarianism, religious fundamentalism, political activism, or Ayn Rand type stuff are constantly 'radicalizing' people, turning them from something sort-of-agenty-but-not-really into self-proclaimed consequentialist agents. Whether that is in line with people's 'real' desires is to a large extent up for interpretation, though there are extreme cases where the answer seems clearly 'no.' Insofar as recruiting strategies are concerned, we can at least condemn propaganda and brainwashing because they are negative-sum (but the lines might again be blurry).

It is interesting that people don't turn into self-proclaimed consequentialists on their own, without the influence of 'aggressive' memes. This just goes to show that humans aren't agents by nature, and that an endeavor of "extrapolating your true consequentialist preferences" is at least partially about adding stuff that wasn't previously there rather than discovering something that was hidden. That might be fine, but we should be careful not to unquestioningly assume that this automatically qualifies as "doing people a favor." This, too, is up for interpretation to at least some extent. The argument for it being a favor is presented nicely here. The counterargument is that satisficers often seem pretty happy, and who are we to maneuver them into a situation where they cannot escape their own goals and always live for the future instead of the now? (Technically people can just choose whichever consequentialist goal is best fulfilled by satisficing, but I could imagine that many preference extrapolation processes are set up in a way that makes this an unlikely outcome. For me at least, learning more about philosophy automatically closed some doors.)

Comment author: Lumifer 10 April 2017 06:01:21PM *  0 points [-]

other people are going to disagree with you or me

Of course, that's a given.

These are questions that the discipline of population ethics deals with

So is this discipline basically about ethics of imposing particular choices on other people (aka the "population")? That makes it basically the ethics of power or ethics of the ruler(s).

You also call it "morality as altruism", but I think there is a great deal of difference between having power to impose your own perceptions of "better" ("it's for your own good") and not having such power, being limited to offering suggestions and accepting that some/most will be rejected.

"morality as cooperation/contract" view

What happens with this view if you accept that diversity will exist and at least some other people will NOT follow the same principles? Simple game theory analyses in a monoculture environment are easy to do, but have very little relationship to real life.

whether we should be morally glad about the world as it currently exists

That looks to me like a continuous (and probably multidimensional) value. All moralities operate in terms of "should" and none find the world as it is to be perfect. This means that all contemplate the gap between "is" and "should be," and this gap can be seen as great or as not that significant.

whether e.g. we should make more worlds that are exactly like ours

Ask me when you acquire the capability :-)

Comment author: Lukas_Gloor 10 April 2017 06:42:01PM 2 points [-]

So is this discipline basically about ethics of imposing particular choices on other people (aka the "population")? That makes it basically the ethics of power or ethics of the ruler(s).

That's an interesting way to view it, but it seems accurate. Say God created the world; then contractualist ethics or the ethics of cooperation didn't apply to him, but we'd get a sense of what his population-ethical stance must have been.
No one ever gets asked whether they want to be born. This is one of the issues where there is no such thing as "not taking a stance": how we act in our lifetimes is going to affect what sort of minds there will or won't be in the far future. We can discuss suggestions and try to come to a consensus among those currently in power, but future generations are indeed in a powerless position.

Comment author: Lumifer 10 April 2017 04:04:41PM 0 points [-]

"Natural selection made sure that even those beings in constant misery may not necessarily exhibit suicidal behavior."

Not sure this is the case. I would expect that natural selection made sure that no being is systematically in constant misery and so there is no need for the "but if you are in constant misery you can't suicide anyways" part.

Views on population ethics

I still don't understand what that means. Are you talking about believing that other people should have particular ethical views and it's bad if they don't?

No; I specifically said not to do that.

Well, the OP thinks it might be reasonable to kill everything with a nervous system because in his view all of them suffer too much. However, if that is just an aesthetic judgement...

without the result being worse for everyone

Well, clearly not everyone since you will have winners and losers. And to evaluate this on the basis of some average/combined utility requires you to be a particular kind of utilitarian.

Comment author: Lukas_Gloor 10 April 2017 05:21:04PM *  2 points [-]

I still don't understand what that means. Are you talking about believing that other people should have particular ethical views and it's bad if they don't?

I'm trying to say that other people are going to disagree with you or me about how to assess whether a given life is worth continuing or worth bringing into existence (big difference according to some views!), and on how to rank populations that differ in size and the quality of the lives in them. These are questions that the discipline of population ethics deals with, and my point is that there's no right answer (and probably also no "safe" answer where you won't end up disagreeing with others).

This^^ is all about a "morality as altruism" view, where you contemplate what it means to "make the world better for other beings." I think this part is subjective.

There is also a very prominent "morality as cooperation/contract" view, where you contemplate the implications of decision algorithms correlating with each other, and notice that it might be a bad idea to adhere to principles that lead to outcomes worse for everyone in expectation provided that other people (in sufficiently similar situations) follow the same principles. This is where people start with whatever goals/preferences they have and derive reasons to be nice and civil to others (provided they are on an equal footing) from decision theory and stuff. I wholeheartedly agree with all of this and would even say it's "objective" – but I would call it something like "pragmatics for civil society" or maybe "decision theoretic reasons for cooperation" and not "morality," which is the term I reserve for (ways of) caring about the well-being of others.

It's pretty clear that "killing everyone on earth" is not in most people's interest, and I appreciate that people are pointing this out to the OP. However, I think what the replies are missing is that there is a second dimension, namely whether we should be morally glad about the world as it currently exists, and whether e.g. we should make more worlds that are exactly like ours, for the sake of the not-yet-born inhabitants of these new worlds. This is what I compared to voting on what the universe's playlist of experience moments should be like.

But I'm starting to dislike the analogy. Let's say that existing people have aesthetic preferences about how to allocate resources (this includes things like wanting to rebuild galaxies into a huge replica of Simpsons characters because it's cool). Of these, a subset are simultaneously also moral preferences, in that they are motivated by a desire to do good for others; and these moral preferences can differ in whether they count it as important to bring about new happy beings or not, or in how much extra happiness is needed to altruistically "compensate" (if that's even possible) for the harm of a given amount of suffering, etc. The domain where people compare each other's moral preferences and try to see if they can get more convergence through arguments and intuition pumps, in the same sense as someone might start to appreciate Mozart more after studying music theory or whatever, is population ethics (or "moral axiology").

Comment author: Lumifer 10 April 2017 03:04:11PM *  2 points [-]

Maybe natural selection is quite like that scientist

The survival instinct part, very probably, but the "constant misery" part doesn't look likely.

Actually, I don't understand where the "animals have negative utility" thing is coming from. Sure, let's postulate that fish can feel pain. So what? How do you know that fish don't experience intense pleasure from feeling water stream by their sides?

I just don't see any reasonable basis for deciding what the utility balance for most animals looks like. And from the evolutionary standpoint the "constant misery" is nonsense -- constant stress is not conducive to survival.

fear of consequences in an afterlife

Are we talking about humans now? I thought the OP considered humans to be more or less fine, it's the animals that were the problem.

Does anyone claim that the net utility of humanity is negative?

“Is a life net positive in terms of all the experience moments it adds to the universe’s playlist?”

I have no idea what this means.

not an empirical question; it’s more of an aesthetic judgment

Ah. Well then, let's kill everyone who fails our aesthetic judgment..?

then some would claim I have an obligation ... and I’d be doing harm to their preferences

That's a very common attitude -- see e.g. attitudes to abortion, to optional wars, etc. However "paternalistic" implies an imbalance of power -- you can't be paternalistic to an equal.

Comment author: Lukas_Gloor 10 April 2017 03:38:23PM *  1 point [-]

The survival instinct part, very probably, but the "constant misery" part doesn't look likely.

Agree, I meant to use the analogy to argue for "Natural selection made sure that even those beings in constant misery may not necessarily exhibit suicidal behavior." (I do hold the view that animals in nature suffer a lot more than they are happy, but that doesn't follow from anything I wrote in the above post.)

Are we talking about humans now? I thought the OP considered humans to be more or less fine, it's the animals that were the problem.

Right, but I thought your argument about sentient beings not committing suicide refers to humans primarily. At least with regard to humans, exploring why the appeal to low suicide rates may not show much seems more challenging. Animals not killing themselves could just be due to them lacking the relevant mental concepts.

I have no idea what this means.

It's a metaphor. Views on population ethics reflect what we want the "playlist" of all the universe's experience moments to be like, and there's no objective sense in which "net utility" is positive or not. Except when you question-beggingly define "net utility" in a way that implies a conclusion; but then anyone who disagrees will just say "I don't think we should define utility that way" and you're left arguing over the same differences. That's why I called it "aesthetic," even though that feels like it doesn't do justice to the seriousness of our moral intuitions.

Ah. Well then, let's kill everyone who fails our aesthetic judgment..?

(And force everyone to live against their will if they do conform to it?) No; I specifically said not to do that. Viewing morality as subjective is supposed to make people more appreciative that they cannot go around completely violating the preferences of those they disagree with without the result being worse for everyone.
