BerryPick6 comments on Giving What We Can, 80,000 Hours, and Meta-Charity - Less Wrong

44 Post author: wdmacaskill 15 November 2012 08:34PM




Comment author: BerryPick6 04 December 2012 03:55:59PM 1 point [-]

Why should I overcome my "bias" and not save my own child, just because there is some other child with a better chance of being saved, but which I do not care about as much?

Assuming that saving my child would give me X utility and saving the other child would give his parents X utility, it's just a "shut up and multiply" kind of thing...
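
The "shut up and multiply" framing can be made concrete with a toy expected-utility comparison (the numbers and the symmetric-utility assumption are purely illustrative, following the premise above that each parent gets the same utility X from their own child's rescue):

```python
# Toy "shut up and multiply" comparison under the symmetric-utility
# assumption: each parent derives the same utility X from their own
# child being saved.  All numbers are illustrative.

X = 100.0  # utility a parent derives from their own child's rescue

def expected_total_utility(p_save, utility):
    """Expected aggregate utility of a rescue attempt with success chance p_save."""
    return p_save * utility

# Suppose my child has a lower chance of being saved than the other child.
my_child = expected_total_utility(p_save=0.5, utility=X)     # 50.0
other_child = expected_total_utility(p_save=0.75, utility=X)  # 75.0

# Multiplying out, the higher-probability rescue wins on expected utility.
print(other_child > my_child)  # True
```

The whole argument rides on the symmetric-utility assumption, which is exactly what the replies below dispute.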

Comment author: Vladimir_Nesov 04 December 2012 04:06:58PM *  2 points [-]

Assuming that saving my child would give me X utility and saving the other child would give his parents X utility

This assumption is excluded by Kawoomba's "but which I do not care about as much", so isn't directly relevant at this point (unless you are making a distinction between "caring" and "utility", which should be more explicit).

Comment author: BerryPick6 04 December 2012 04:12:21PM 0 points [-]

I guess I'm just not sure why Kawoomba's own utility gets special treatment over the other child's parents' utility function. Then again, your reply and my own sentence just now have me slightly confused, so I may need to think on this a bit more.

Comment author: Vladimir_Nesov 04 December 2012 04:24:04PM *  1 point [-]

I guess I'm just not sure why Kawoomba's own utility gets special treatment over the other child's parents' utility function.

Taboo "utility function", and "Kawoomba cares about Kawoomba's utility function" would resolve into the tautologous "Kawoomba is motivated by whatever it is that motivates Kawoomba". The subtler problem is that it's not a given that Kawoomba knows what motivates Kawoomba, so claims with certainty about what that is or isn't (including those made by Kawoomba) may be unfounded. To the extent "utility function" refers to idealized extrapolated volition, rather than present desires, people won't already have good understanding of even their own "utility function".

Comment author: Kawoomba 04 December 2012 05:59:02PM -1 points [-]

The subtler problem is that it's not a given that Kawoomba knows what motivates Kawoomba, so claims with certainty about what that is or isn't (including those made by Kawoomba) may be unfounded.

No idealized extrapolated volition based on my current volition would prefer someone else's child over one of my own (CEV_me, not CEV_mankind). There are certainly inconsistencies in my non-idealized utility function, but that does not mean that every statement I make about my own utility function must be suspect, merely that such suspect/contradictory statements exist.

If you prefer vanilla over strawberry ice cream, there may be cases where that preference does not transfer to your extrapolated volition due to some other contradictory preferences. However, for comparisons with a significant delta involved, the initial result that determines your decision should be preserved. (It may however be different when extrapolating to a CEV for all humankind.)

Also, you used my name with a frequency of 7/84 in your last comment <3.

Comment author: Vladimir_Nesov 04 December 2012 06:13:30PM 0 points [-]

that does not mean that every statement I make about my own utility function must be suspect

In general, unless something is well-understood, there is good reason to suspect an error. Human values are not something that's understood particularly well.

Comment author: Kawoomba 04 December 2012 06:20:34PM 0 points [-]

If you value, e.g., your family far more highly than a grain of salt, would you say that there is any chance of that not being reflected in your CEV?

Any "CEV" that doesn't conserve e.g. that particular relationship would be misnamed.

Comment author: thomblake 04 December 2012 04:11:39PM 1 point [-]

Assuming that saving my child would give me X utility and saving the other child would give his parents X utility

If you've found a way to aggregate utility across persons, I'd like to hear it.

Normally, we talk about trying to satisfy a particular utility function. If the parent values her child more than the neighbor's child, that is reflected in her utility function. What other standard are you trying to invoke?
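
thomblake's point about aggregation has a standard technical illustration: a von Neumann–Morgenstern utility function is only determined up to positive affine transformation, so the same preferences can be represented on arbitrarily different numeric scales. A minimal sketch (not from the thread; names and numbers are hypothetical):

```python
# vNM utility functions are unique only up to positive affine
# transformations u -> a*u + b (with a > 0): both versions represent
# the same preference ordering, so the numeric scale carries no
# interpersonal meaning on its own.

def rescale(u, a, b):
    """An equally valid representation of the same preference ordering."""
    assert a > 0
    return a * u + b

parent_A = 100.0                           # "X utility" on parent A's scale
parent_B = rescale(100.0, a=0.5, b=0.0)    # same preferences, rescaled: 50.0

# Any aggregate like parent_A + parent_B depends on the arbitrary
# choice of scale, so "summing utilities across persons" is not
# well-defined without an extra normalization assumption.
print(parent_A + parent_B)                       # 150.0 on these scales
print(parent_A + rescale(100.0, a=50, b=0.0))    # 5100.0 on others
```

This is one way to cash out why "saving the other child gives his parents X utility too" doesn't straightforwardly license adding the two X's together.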

Comment author: BerryPick6 04 December 2012 04:13:32PM 0 points [-]

Ah, this clears up things a bit for me, thank you.

Comment author: Kawoomba 04 December 2012 04:06:33PM 0 points [-]

Why would I need to aim to satisfy overall utility including others, as opposed to just that of my own family?

Is any such preference that chooses my own utility over that of others a bias, and not part of my utility function?

Is it an evil bias if I buy myself some tech toys as opposed to donating that amount to my preferred charity?

Comment author: BerryPick6 04 December 2012 04:09:31PM -2 points [-]

Why would I need to aim to satisfy overall utility including others, as opposed to just that of my own family?

What reason do you have for aiming to satisfy your own utility function, or your family's?

Is any such preference that chooses my own utility over that of others a bias, and not part of my utility function?

I'm afraid this is a little too much lingo for me. Sorry.

Is it an evil bias if I buy myself some tech toys as opposed to donating that amount to my preferred charity?

You'd have to taboo "evil" before I can answer this question.

Comment author: Kawoomba 04 December 2012 04:51:05PM 0 points [-]

What reason do you have for aiming to satisfy your own utility function

Um, it's my utility function: that which I aim to maximize, and which already incorporates, e.g., my altruistic desires. Postulating "other preferences" that can overrule my utility function would be a contradiction in terms.

The other two questions were more aimed at MugaSofer, who was the one differentiating between preference as a "bias" and as part of your utility function, and who introduced the whole "evil" thing.

Comment author: Kindly 04 December 2012 06:32:07PM -1 points [-]

The nearest I can come to making sense of your claim is that it's some sort of imaginary Prisoner's Dilemma: you can cooperate by saving a random child instead of your own, and in symmetric cases other parents can cooperate by saving your child instead of theirs.

However, even if you are into counterfactual bargaining, I am pretty sure almost no other parent would cooperate here, which makes defecting a no-brainer.
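
The Prisoner's Dilemma reading above can be sketched as a payoff matrix (numbers are illustrative, chosen only so that each parent values their own child's rescue more; "cooperate" means saving the other parent's child in the symmetric case):

```python
# Payoff matrix for the imagined symmetric "rescue dilemma".
# Entries are (my_payoff, their_payoff); numbers are illustrative.
# "cooperate" = save the other parent's child, "defect" = save your own.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """My payoff-maximizing move given the other parent's move."""
    return max(("cooperate", "defect"),
               key=lambda mine: payoffs[(mine, their_move)][0])

# Whatever the other parent does, defecting pays more on this matrix,
# which is why, if almost no other parent would cooperate, defecting
# looks like the "no-brainer" described above.
print(best_response("cooperate"), best_response("defect"))  # defect defect
```

Counterfactual bargaining (e.g., among timeless or updateless agents) is precisely an attempt to escape this dominance argument, but as the comment notes, it only bites if the other parents would in fact cooperate in the symmetric case.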

I suppose to be fair I should imagine a world in which every parent is brainwashed into valuing other children's lives as much as their own (I am pretty sure it would take brainwashing). In this case (assuming you escaped the brainwashing so it's still a legitimate decision) saving the other child might be the right thing to do. At that point, though, you're arguably not optimizing for humans anymore.