MathiasZaman comments on Open thread, Oct. 12 - Oct. 18, 2015 - Less Wrong Discussion

5 Post author: MrMind 12 October 2015 06:57AM


Comment author: MathiasZaman 13 October 2015 11:26:34AM 4 points [-]

Does the story actually say that the Superhappies really know humanity's utility function better? That is, does an omniscient narrator tell us, or is it a Superhappy or one of the crew who says this? That changes a lot, to me. Of course the Superhappies would believe they know our utility function better than we do. Just like the humans assumed they knew what was better for the Babyeaters.

Similarly, the Superhappies are moral by their own idea of morality. They were perfectly willing to use force (not physical force, but force nonetheless) to push humans toward their point of view. They threatened humanity and were willing to forcibly change human children, even if the adults could continue to feel pain. While humans also employ threats and force to change behavior, in most cases we would be hard-pressed to call that "moral."

From a meta-perspective, I'd find it odd if Yudkowsky had written it like that. He's not careless enough to make that mistake, and as far as I know he thinks humanity's utility function goes beyond mere bliss.

The only way I think you could see the Superhappies' solution as acceptable is if you don't think jokes or fiction (or other sorts of art involving "deception") are something humans would value as part of their utility function. Which I personally would find very hard to understand.

Comment author: cousin_it 13 October 2015 12:16:22PM *  0 points [-]

The only way I think you could see the Superhappies' solution as acceptable is if you don't think jokes or fiction (or other sorts of art involving "deception") are something humans would value as part of their utility function.

Um, that's the opposite of how utility functions work. They don't have sacred components; you can and should trade off one component for a larger gain in another. That's exactly what the Superhappies were offering.

Comment author: MathiasZaman 13 October 2015 01:05:37PM 2 points [-]

What I'm saying is that humans aren't wrong in trading off some amount of comfort so they can have jokes, fiction, art and romantic love.

Comment author: jsteinhardt 13 October 2015 01:25:50PM 1 point [-]

Why would this be true? Utility functions don't have to be linear; it could even be the case that I place no additional utility on happiness beyond a certain level.

Comment deleted 13 October 2015 02:32:34PM *  [-]
Comment author: OrphanWilde 13 October 2015 02:44:34PM 3 points [-]

the question in the story is whether total cost of suffering > total benefit from being able to suffer

The answer to this question is "No."

do you think the current amount of suffering is coincidentally exactly optimal, or would you prefer to add some more?

Some people could use more. Many others could use less.

The question you should ask first is whether being able to suffer is a good thing or a bad thing. You start from the assumption that suffering is bad. You do not sufficiently investigate what the alternative is; you do not sufficiently consider that experience is subjective, and subjectivity requires reference points. To eliminate, in perpetuity, the half of the axis below the current reference point is to eliminate the axis entirely.

Comment author: [deleted] 14 October 2015 06:10:23AM 0 points [-]

The answer to this question is "No."

Do you have a proof for this? As far as I know, we have no universally agreed upon way to compare different ways of calculating utility.

Comment author: OrphanWilde 14 October 2015 01:05:15PM 2 points [-]

There's no way of calculating utility, period. The issue is more substantively that suffering is relative, and that the elimination of suffering is also the elimination of happiness.

Comment author: polymathwannabe 14 October 2015 01:15:10PM -1 points [-]

the elimination of suffering is also the elimination of happiness

Please explain in more detail. The Buddhist part of my brain just had a spit-take upon reading that.

Comment author: OrphanWilde 14 October 2015 01:25:16PM 0 points [-]

Happiness and suffering are the same thing - the experience of a divergence from the norm of your well-being, your ground state. They just differ in direction.

A long time ago, I experienced both. For most of my life, I experienced neither: you think pain is a negative experience; I found it to be an -interesting- experience, a diversion from the endless gray. Today, I experience... a very limited degree of both, as a result of gradually accepting that suffering is the cost paid to experience happiness.

Equanimity, as it transpires, isn't something you can experience only with regard to those things you don't want to directly experience.

Comment author: polymathwannabe 14 October 2015 01:47:51PM -1 points [-]

True, the difference is the direction, but surely that counts for something? Pain and pleasure are chemically and neurologically distinct phenomena. A ground state of "endless gray" is not something you'd really want.

suffering is the cost paid to experience happiness

I'm guessing you may be a Roman Catholic. In case you're not, how did you come to see suffering as having exchange value?

I hope my comments are not taken as offensive. I know I sometimes tend to dramatize my degree of surprise. I genuinely wish to understand your position.