Comment author: WhySpace 07 October 2016 09:42:13PM 0 points [-]

Perhaps I should have been more specific than to use a vague term like "morality". Replace it with CEV, since that should be the sum total of all your values.

Most people value happiness, so let me use that as an example. Even if I value my own happiness 1000x more than other people's happiness, if there are more than 1000 people in the world, then the vast majority of my concern for happiness is still external to myself. One could do this same calculation for all other values, and add them up to get CEV, which is likely to be weighted toward others for the same reason that happiness is.
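As a toy illustration of that arithmetic (the numbers are assumed, not from the comment): with a 1000x self-weight and every other person weighted equally at 1x, the self-directed fraction of total concern shrinks as the population grows.

```python
def self_share(self_weight, population):
    """Fraction of total happiness-concern that is self-directed,
    given a self-weight and equal unit weight on everyone else."""
    return self_weight / (self_weight + (population - 1))

# With 1001 people, a 1000x self-weight puts exactly half of one's
# concern on oneself; with billions of people, over 99.9999% of
# concern is external despite the 1000x self-weighting.
print(self_share(1000, 1001))
print(1 - self_share(1000, 7_000_000_000))
```

The crossover at population 1001 is the "more than 1000 people" condition in the comment above.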

Of course, perhaps some people legitimately would prefer 3^^^3 dust specks in people's eyes to their own death. And perhaps some people's values aren't coherent, such as preferring A to B, B to C, and C to A. But if neither of these is the case, then replacing one's self with a more efficient agent maximizing the same values should be a net gain in most cases.
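A quick sketch of why cyclic preferences are incoherent (the options and helper are hypothetical, for illustration only): a preference relation is coherent in this sense only if some total ranking of the options agrees with every stated preference, and the cycle A > B > C > A admits no such ranking.

```python
from itertools import permutations

def has_consistent_ranking(options, prefers):
    """True if some total ordering of options agrees with every
    stated preference; prefers holds pairs (x, y) meaning x > y."""
    for order in permutations(options):
        rank = {x: i for i, x in enumerate(order)}
        if all(rank[x] < rank[y] for (x, y) in prefers):
            return True
    return False

cyclic = {("A", "B"), ("B", "C"), ("C", "A")}
print(has_consistent_ranking(["A", "B", "C"], cyclic))  # no ranking exists
```

Dropping any one edge of the cycle restores coherence, which is why the comment treats transitivity as the condition for the replacement argument to go through.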

Comment author: DanArmak 08 October 2016 01:01:09AM *  0 points [-]

I don't believe a CEV exists or, if it does, that I would like it very much. Both were poorly supported assumptions of the CEV paper. For related reasons, as the Wiki says, "Yudkowsky considered CEV obsolete almost immediately after its publication in 2004". I'm not sure why people keep discussing CEV (Nick Tarleton, and other links on the Wiki page) but I assume there are good reasons.

One could do this same calculation for all other values, and add them up to get CEV,

That doesn't sound like CEV at all. CEV is about extrapolating new values which may not be held by any actual humans. Not (just) about summing or averaging the values humans already hold.

Getting back to happiness: it's easy to say we should increase happiness, all else being equal. It's not so obvious that we should increase it at the expense of other things, or by how much. I don't think happiness is substantially different in this case from morality.

Comment author: WhySpace 07 October 2016 03:04:42PM *  0 points [-]

Maybe your point is that emotional empathy feels morally significant and when we act on it, we can feel that we fulfilled our moral obligations.

This actually has a name. It's called moral licensing.

Yes, emotional empathy does not optimize effective altruism, or your moral idea of good. But this is true of lots of emotions, desires and behaviors, including morally significant ones. You're singling out emotional empathy, but what makes it special?

I agree with you that nothing makes them special. But you seem to view this as a reductio ad absurdum. Doing the same for all other emotions which might bias us or get in the way of doing what’s moral would not lead to a balanced lifestyle, to say the least.

But we could just as easily bite that bullet. Why should we expect optimizing purely for morality to lead to a balanced lifestyle? Why wouldn’t the 80/20 rule apply to moral concerns? Under this view, one would do best to amputate most parts of one’s mind that made them human, and add parts to become a morality maximizer.

Obviously this would cause serious problems in reality, and may not actually be the best way to maximize morality even if it were possible. This is just a sort of spherical-cow-in-a-vacuum concept.

Comment author: DanArmak 07 October 2016 04:20:10PM 0 points [-]

Even if it were the best way to maximize morality, why would you want to maximize it?

Human values are complex. Wanting to maximize one at the expense of all others implies it already is your sole value. Of course, humans don't exactly converge on the subgoal of preserving their values, so the right words can convince (and have convinced) people to follow many single values.

Comment author: Lumifer 06 October 2016 03:11:50PM 3 points [-]

So, if the emotional empathy should be discarded, why should I help all those strangers? The only answer that the link suggests is "social propriety".

But social propriety is a fickle thing. Sometimes it asks you to forgive the debts of the destitute, and sometimes it asks you to burn the witches. Without empathy, why shouldn't you cheer at the flames licking the evil witch's body? Without empathy, if there are some kulaks or Juden standing in the way of the perfect society, why shouldn't you kill them in the most efficient manner at your disposal?

Comment author: DanArmak 06 October 2016 11:30:15PM *  0 points [-]

I completely agree: asking people to discard moral emotions is rather like asking rational agents to discard top goals!

Wikipedia says that "body-counts of modern witch-hunts by far exceed those of early-modern witch-hunting", referencing: Behringer, Wolfgang 2004: Witches and Witch-hunts. A global History. Cambridge: Polity Press.

My point being that our emotional empathy is already out of tune with social propriety, if you consider the social norms typical around the world and not just among rich, Western populations. Let alone the norms common in the West for most of its existence, and so perhaps again in the future.

Comment author: username2 06 October 2016 09:31:39PM 1 point [-]

I think this article is something that people outside of this community really ought to read.

Interesting. Why people outside of this community? I find it is actually the LW and EA communities that place an exorbitant amount of emphasis on empathy. Most of those I know outside of the rationalist community understand the healthy tradeoff between charitable action and looking out for oneself.

Comment author: DanArmak 06 October 2016 11:15:54PM 0 points [-]

This doesn't entirely match my impression of the LW community. (I know much less about the non-LW EA community.) What are you basing this on? Were there major LW posts about empathy, or LW Survey questions, or something else?

Comment author: DanArmak 06 October 2016 11:14:30PM *  2 points [-]

I'm confused by this post, and don't quite understand what its argument is.

Yes, emotional empathy does not optimize effective altruism, or your moral idea of good. But this is true of lots of emotions, desires and behaviors, including morally significant ones. You're singling out emotional empathy, but what makes it special?

If I buy an expensive gift for my father's birthday because I feel that fulfills my filial duty, you probably wouldn't tell me to de-emphasize filial piety and focus more on cognitive empathy for distant strangers. In general, I don't expect you to suggest people should spend all their resources on EA. Usually people designate a donation amount and then optimize the donation target, and it doesn't much matter what fuzzies you're spending your non-donation money on. So why de-fund emotional empathy in particular? Why not purchase fuzzies by spending money on buying treats for kittens, rather than reducing farm meat consumption?

Maybe your point is that emotional empathy feels morally significant and when we act on it, we can feel that we fulfilled our moral obligations. And then we would spend less "moral capital" on doing good. If so, you should want to de-fund all moral emotions, as long as this doesn't compromise your motivations for doing good, or your resources. Starting with most forms of love, loyalty, cleanliness and so on. Someone who genuinely feels doing good is their biggest moral concern would be a more effective altruist! But I don't think you're really suggesting e.g. not loving your family any more than distant strangers.

Maybe your main point is that empathy is a bias relative to your conscious goals:

When choosing a course of action that will make the world a better place, the strength of your empathy for victims is more likely to lead you astray than to lead you truly.

But the same can be said of pretty much any strong, morally entangled emotion. Maybe you don't want to help people who committed what you view as a moral crime, or who if helped will go on to do things you view as bad, or helping whom would send a signal to a third party that you don't want to be sent. Discounting such emotions may well match your idea of doing good. But why single out emotional empathy?

If people have an explicit definition of the good they want to accomplish, they can ignore all emotions equally. If they don't have an explicit definition, then it's just a matter of which emotions they follow in the moment, and I don't see why this one is worse than the others.

Comment author: DanArmak 06 October 2016 09:49:19PM 1 point [-]

This is a tangent, but:

You know that “four delicious tiny round brown glazed Italian chocolate cookies” is the only proper way to order these adjectives.

There are definitely some ordering rules, but I am not convinced they are nearly as universal or as complex as this suggests. See the Language Log on this subject.

Comment author: buybuydandavis 03 October 2016 09:30:10AM 0 points [-]

Don't regulate efficacy. Regulate consistency of formulation, at most.

There are plenty of actors interested in efficacy. Really, everyone else involved.

Comment author: DanArmak 03 October 2016 12:55:14PM 0 points [-]

If you don't regulate the truthfulness of published efficacy info, then companies will compete on advertising and bad studies to claim efficacy for their products. I don't think that would lead to a marketplace where non-experts could reach correct conclusions about efficacy.

I have no real idea about the efficacy of most non-regulated things I'm sold, from deodorants and toothpaste to computer software. It's just that with these things, the risk of occasionally buying something bad and learning not to use that anymore is acceptable. Not so with medicine.

Comment author: ChristianKl 02 October 2016 06:49:40PM *  2 points [-]

The Chinese used to be very lax about approving medicine. They changed course and adopted higher standards that declare 80% of the medicine to be bad. This means that in the future, Chinese companies that want to get new drugs on the market will have to do things differently.

But if you declare >80% of local medicine bad

They declared >80% of the "1,622 clinical trials for new pharmaceutical drugs currently awaiting approval" to be bad. That doesn't mean that they take existing drugs off the market.

I'm confused, and I don't think I'm seeing the whole picture.

It's difficult to see the whole picture because the "journalists" don't really care to investigate what's happening and the information is only available between the lines. Currently there's likely strong back-door fighting going on in China.

I'm not sure to what extent this affects the approval of Big Pharma drugs that have FDA approvals. The article unfortunately doesn't speak about it.

Over the long-term it's likely a goal of the Chinese government to raise their quality standards in a way that China can export drugs to the West.

Comment author: DanArmak 03 October 2016 12:14:53AM 0 points [-]

They declared >80% of the "1,622 clinical trials for new pharmaceutical drugs currently awaiting approval" to be bad. That doesn't mean that they take existing drugs off the market.

One problem is that consumers might decide >80% of all previously approved drugs are bad, but they don't know which, so they can't trust any of them. Chinese pharma revenues will drop as everyone who can will use drugs imported from abroad. Gray markets providing bulk medicine imports will flourish, but the buyers who can afford to use them should beware of fraudulent merchandise and of plain misunderstandings and mistranslations.

Comment author: ChristianKl 02 October 2016 04:37:21PM *  8 points [-]

The article misses the point. It doesn't talk about the significance of the story.

A better headline might be "The Chinese government decided that it's in their interest to be public about data fabrication by Chinese scientists."

Given that this comes right after the Chinese government decided that it makes sense to reduce red meat consumption in China, it's a sign of progress and good Chinese leadership.

Comment author: DanArmak 02 October 2016 05:54:18PM 0 points [-]

I assume the Chinese government can't just deprive its citizens of legal medicine altogether. Either (1) enough things remain approved and enough new things keep being approved to satisfy the market for the really important remedies. Or (2) the rich will import European/American/Japanese/etc. approved medicine (which I've heard they already do to a large extent), and the poor will buy unauthorized local medicine on the black market (or unauthorized supposed imports), and be worse off than today.

But if you declare >80% of local medicine bad, and also create a huge uncertainty as to what's bad and what isn't (presumably they haven't retested all previously approved medicine), I find it hard to believe scenario 2 won't happen. And it doesn't seem to be in the Chinese government's interest if they want to improve the state of local medicine. At least not in the short to medium term.

I'm confused, and I don't think I'm seeing the whole picture.

Comment author: ChristianKl 02 October 2016 04:49:15PM *  1 point [-]

Compare and contrast with Scott Alexander's idea of making the American FDA regulate less.

The FDA actually does regulate less than the CFDA in this case. The FDA doesn't disapprove 80% of the drugs seeking approval for misbehavior.

If you look at the Ranbaxy case the FDA is quite bad at detecting data forgery of generics companies.

Comment author: DanArmak 02 October 2016 05:48:42PM 0 points [-]

This raises some interesting questions.

If the end result is fraud and bad medicine, whether you regulate more or less, is that a reason to regulate less so money isn't wasted on mandatory fraudulent studies?

Regulation raises the barrier of entry to selling medicine. Does this reduce the amount of fraud, because it's harder to sell completely untested medicine and there's at least some quality control by the regulator? Or does it increase the amount of fraud, because once a drug costs huge amounts of money to develop and approve, companies are less willing to take a loss if they discover the drug doesn't really work, and so lie more?
