cousin_it comments on Open thread, Oct. 12 - Oct. 18, 2015 - Less Wrong

5 Post author: MrMind 12 October 2015 06:57AM

Comments (250)

Comment author: cousin_it 13 October 2015 10:40:40AM 6 points

I was just rereading Three Worlds Collide today and noticed that my feelings about the ending have changed over the last few years. It used to be obvious to me that the "status quo" ending was better. Now I feel that the "super happy" ending is better, and it's not just a matter of feelings - it's somehow axiomatically better, based on what I know about decision theory.

Namely, the story says that the super happies are smarter and understand humanity's utility function better, and also that they are moral and wouldn't offer a deal unless it was beneficial according to both utility functions being merged (not just according to their value of happiness). Under these conditions, accepting the deal seems like the right thing to do.

Comment author: MathiasZaman 13 October 2015 11:26:34AM 4 points

Does the story actually say that the Superhappies really know humanity's utility function better? As in, does an omniscient narrator tell it, or is it a Superhappy or one of the crew who says this? That changes a lot, to me. Of course the Superhappies would believe they know our utility function better than we do. Just like how the humans assumed they knew what was better for the Babyeaters.

Similarly, the Superhappies are moral, for their idea of morality. They were perfectly willing to use force (not physical, but force nonetheless) to encourage humans to see their point of view. They threatened humanity and were willing to forcibly change human children, even if the adults could continue to feel pain. While humans also employ threats and force to change behavior, in most cases we would be hard-pressed to call that "moral."

From a meta-perspective, I'd find it odd if Yudkowsky wrote it like that. He's not careless enough to make that mistake and as far as I know, he thinks humanity's utility function goes beyond mere bliss.

The only way I think you could see the Superhappies' solution as acceptable is if you don't think jokes or fiction (or other sorts of art involving "deception") are something humans would value as part of their utility function. Which I personally would find very hard to understand.

Comment author: cousin_it 13 October 2015 12:16:22PM 0 points

The only way I think you could see the Superhappies' solution as acceptable is if you don't think jokes or fiction (or other sorts of art involving "deception") are something humans would value as part of their utility function.

Um, that's the opposite of how utility functions work. They don't have sacred components. You can and should trade off one component for a larger gain in another component. That's exactly what the super happies were offering.
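To make the trade-off point concrete (a toy model — the components and numbers are made up for illustration, not taken from the story): with an additive utility function, an agent should accept any change that raises the weighted total, even if one component goes down.

```python
# Toy additive utility function: a weighted sum of components.
# All weights and values here are illustrative assumptions.
def utility(values, weights):
    return sum(w * v for w, v in zip(weights, values))

weights    = [1.0, 1.0]   # [comfort, art/jokes/fiction]
status_quo = [5.0, 5.0]   # the current human trade-off
deal       = [9.0, 3.0]   # less art, but much more comfort

# Trading away some art for a larger gain in comfort raises the total,
# so an agent with this utility function should take the deal.
assert utility(deal, weights) > utility(status_quo, weights)
```

Nothing in this sketch makes any component "sacred"; whether the Superhappies' actual offer clears the bar depends entirely on the weights.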

Comment author: MathiasZaman 13 October 2015 01:05:37PM 2 points

What I'm saying is that humans aren't wrong in trading off some amount of comfort so they can have jokes, fiction, art and romantic love.

Comment author: jsteinhardt 13 October 2015 01:25:50PM 1 point

Wait, why would this be true? Utility functions don't have to be linear; it could even be the case that I place no additional utility on happiness beyond a certain level.
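For instance (a toy illustration with made-up numbers): if the happiness component of utility saturates at some cap, then extra happiness past the cap buys nothing, and trading away other values to get it is a strict loss.

```python
# Toy non-linear utility: happiness contributes nothing beyond a cap.
# The cap and the numbers below are illustrative assumptions.
def utility(happiness, other_values, cap=10.0):
    return min(happiness, cap) + other_values

# Doubling happiness past the cap adds no utility at all...
assert utility(20.0, 5.0) == utility(40.0, 5.0)

# ...so giving up other values in exchange for more happiness
# leaves this agent strictly worse off.
assert utility(40.0, 3.0) < utility(20.0, 5.0)
```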

Comment deleted 13 October 2015 02:32:34PM
Comment author: OrphanWilde 13 October 2015 02:44:34PM 3 points

the question in the story is whether total cost of suffering > total benefit from being able to suffer

The answer to this question is "No."

do you think the current amount of suffering is coincidentally exactly optimal, or would you prefer to add some more?

Some people could use more. Many others could use less.

The question you should ask first is whether being able to suffer is a good thing or a bad thing. You start with the assumption that it is bad, that suffering is bad. You do not sufficiently investigate what the alternative is; you do not sufficiently consider that experience is subjective, and subjectivity requires reference points. To eliminate, in perpetuity, that half of the axis below the current reference point, is to eliminate the axis entirely.

Comment author: [deleted] 14 October 2015 06:10:23AM 0 points

The answer to this question is "No."

Do you have a proof for this? As far as I know, we have no universally agreed upon way to compare different ways of calculating utility.

Comment author: OrphanWilde 14 October 2015 01:05:15PM 2 points

There's no way of calculating utility, period. The issue is more substantively that suffering is relative, and that the elimination of suffering is also the elimination of happiness.

Comment author: polymathwannabe 14 October 2015 01:15:10PM -1 points

the elimination of suffering is also the elimination of happiness

Please explain in more detail. The Buddhist part of my brain just had a spit-take upon reading that.

Comment author: OrphanWilde 14 October 2015 01:25:16PM 0 points

Happiness and suffering are the same thing - the experience of a divergence from the norm of your well-being, your ground state. They just differ in direction.

A long time ago, I experienced both. For most of my life, I experienced neither - you think pain is a negative experience, I found it to be an -interesting- experience, a diversion from the endless gray. Today, I experience... a very limited degree of both, as a result of gradually accepting that suffering is the cost paid to experience happiness.

Equanimity, as it transpires, isn't something you can experience only with regard to those things you don't want to directly experience.

Comment author: RomeoStevens 14 October 2015 02:03:24AM 1 point

My feeling is that many utility functions in the general class that the Superhappies' is drawn from would lie about how advantageous it is to merge. Weren't the humans going to lie to the Babyeaters?

Comment author: bogus 15 October 2015 01:03:37AM 1 point

I think what the "true" (status-quo) ending proves is that the Super-Happies did not accurately model humanity's utility function at all. If they had, they would have proposed a deal where humanity gets rid of most of its pain, but still keeps some, especially those "grim" things that humans actually like (somewhat counter-intuitively). (And perhaps the Babyeaters' thing would then be understood as one of these "grim" things by humans, as it clearly is for the Babyeaters themselves. It's not clear if the Superhappies would be willing to acquire this value, though.) This is a deal that humans would indeed accept, since it agrees with their values. I think the true moral of this story is that getting human wants right for something like CEV is a hard problem, and making even small mistakes can have big consequences.

Comment author: EE43026F 13 October 2015 10:20:32PM 1 point

But it's still a compromise. Is it part of humanity's utility function to value another species' utility function to such an extent that they would accept changing humanity's own utility function in order to preserve as much of the other species' utility function as possible?

I don't recall any mention of humanity being total utilitarians in the story. Neither did the compromise made by the superhappies strike me as being better for all parties than their original values were, for each of them.

The only reason the compromise was supposed to be beneficial is because the three species made contact and couldn't easily coexist together from that point on. Also, because the superhappies were the stronger force and could therefore easily enforce their own solution. Cutting off the link removes those assumptions, and allows each species to preserve its utility function, which I assume they have a preference for, at least humans and baby-eaters.

Comment author: Viliam 14 October 2015 07:57:36AM 1 point

Cutting off the link (...) allows each species to preserve its utility function, which I assume they have a preference for, at least humans and baby-eaters.

There was an asymmetry in the story, if I remember correctly.

Babyeaters had a preference for other species eating their babies. Humans and superhappies had a preference for other species not eating their babies. This part was symmetrical. Superhappies also had a preference for other species never feeling any pain. But humans didn't have a preference for other species feeling pain; they just wanted to more or less preserve their own biological status quo. They didn't mind if the superhappies remained... superhappy.

This is why cutting the link harms the superhappy utility function more than the human utility function. Humans will feel relief that the babyeater children are still being saved by the superhappies, more quickly and reliably than humans could have done it. On the other hand, the superhappies will know that somewhere in the universe human babies are feeling pain and frustration, and there is nothing the superhappies can do about it.

The asymmetry was that the superhappies didn't seem ethically repulsive to humans. Well, apart from what they wanted to do with humans, which was successfully avoided.

Comment author: Pfft 14 October 2015 04:57:13PM 1 point

In the story the superhappies propose to self-modify to appreciate complex art, not just simple porn, and they say that humans and babyeaters will both think that is an improvement. So to some degree the superhappies (with their very ugly spaceships) are repulsive to humans, although not as strongly repulsive as the babyeaters.

Comment author: Pfft 13 October 2015 05:19:25PM 0 points

they are moral and wouldn't offer a deal unless it was beneficial according to both utility functions being merged (not just according to their value of happiness).

I guess whether it is beneficial or not depends on what you compare to? They say,

The obvious starting point upon which to build further negotiations, is to combine and compromise the utility functions of the three species until we mutually satisfice, providing compensation for all changes demanded.

So they are aiming for satisficing rather than maximizing utility: according to all three before-the-change moralities, the post-change state of affairs should be acceptable, but not necessarily optimal. Consider these possibilities:

1) Baby-eaters are modified to no longer eat sentient babies; humans are unchanged; Superhappies like art.

2) Baby-eaters are modified to no longer eat sentient babies; humans are pain-free and eat babies; Superhappies like art.

3) Baby-eaters, humans, and Superhappies are all unchanged.

I think the intention of the author is that, according to pre-change human morality, (1) is the optimal choice, (2) is bad but acceptable, and (3) is unacceptable. The superhappies in the story claim that (2) is the only alternative that is acceptable to all three pre-change moralities. So the super-happy ending is beneficial in the sense that it avoids (3), but it's a "bad" ending because it fails to get (1).
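The satisficing-versus-maximizing distinction can be made concrete (a toy encoding of options (1)–(3) above; the per-species verdicts are my own guesses, chosen only to be consistent with the comment's claim that (2) is the unique mutually acceptable option — they are not canon):

```python
# Each option gets a verdict from each species' pre-change morality.
# A satisficer doesn't demand the optimum; it only requires that no
# party finds the outcome unacceptable. Verdicts here are assumptions.
options = {
    1: {"human": "optimal",      "babyeater": "unacceptable", "superhappy": "acceptable"},
    2: {"human": "acceptable",   "babyeater": "acceptable",   "superhappy": "acceptable"},
    3: {"human": "unacceptable", "babyeater": "acceptable",   "superhappy": "unacceptable"},
}

def satisfices(verdicts):
    return all(v != "unacceptable" for v in verdicts.values())

# Under these assumed verdicts, only option (2) satisfices for
# all three species at once, even though (1) is optimal for humans.
assert [k for k, v in options.items() if satisfices(v)] == [2]
```

This is why a satisficing negotiator lands on (2): maximizing any one species' pre-change morality would be vetoed by another.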

Comment author: cousin_it 13 October 2015 06:12:05PM 0 points

Hmm, I guess I interpreted the superhappies' proposal differently, as saying that humans get compensation for any downgrade from (1) to (2).