Comment author: Lumifer 24 November 2014 08:51:26PM 1 point [-]

Well, not quite. If you think being dead has positive utility for this creature, that positive utility is not necessarily small. If so, you need to weigh the harms involved in killing against that positive utility.

For example, take "death is painless" -- actually, if the negative utility of a painful death is smaller than the positive utility of dying, you would still be justified, and indeed obligated, to impose that painful death on the creature, since the net result is positive utility.

Comment author: RobertWiblin 24 November 2014 10:27:46PM 1 point [-]

I was just giving what would be sufficient conditions; not all of them are necessary.

Comment author: Lumifer 24 November 2014 07:20:10PM *  2 points [-]

Isn't a direct consequence of (2) that those animals are better off dead than alive, and so, if the opportunity to (relatively costlessly) kill some of them arises, one should do so?

Comment author: RobertWiblin 24 November 2014 08:24:11PM 1 point [-]

If you can't otherwise improve their lives, the death is painless, and murder isn't independently bad.

Comment author: Salemicus 23 November 2014 02:38:40PM 1 point [-]

Is a farm chicken's life worth living?

I have no idea what that question even means. I don't want to save the Bengal tiger because I think it has a "life worth living" but because I want the species to flourish.

But to the extent that you are concerned that battery chickens have negative lives, why become a vegetarian? Eat free range meat. Or eat only hunted meat. And why make a fuss about trace amounts of meat products in your cheese or whatever? Isn't it suspicious that people who make the strange claim that animals count as objects of moral concern also make the strange claim that animal lives aren't worth living, and also cash out that concern in a dietary purity ritual? Were I a cynic, I might even think that the religious-seeming ritual was the whole point, and the elaborate epicyclical theology built around it a mere after-the-fact justification.

Comment author: RobertWiblin 24 November 2014 06:58:58PM 9 points [-]

"Isn't it suspicious that people who make the strange claim that animals count as objects of moral concern also make the strange claim that animal lives aren't worth living"

No, this makes perfect sense. 1. They decide animals are objects of moral concern. 2. They look into the conditions those animals live in, and decide that in some cases they are worse than not being alive. 3. They decide it's wrong to fund the expansion of a system that holds animals in conditions that are worse than not being alive at all.

Comment author: Princess_Stargirl 23 November 2014 03:12:32PM 10 points [-]

I found attempts to follow a vegetarian or vegan diet dramatically reduced my quality of life. Veganism especially was almost unbearable. I couldn't even have a slice of pizza or an ice cream cone! Given my experience, unless I was 100% convinced I was absolutely obligated to become a vegetarian/vegan, I would not do so.

I do however donate 10% of my pre-tax income to developing nations, which works out to a very large (imo) percentage of my take-home pay. I also find this rather unpleasant and distressing, but arguments on lesswrong convinced me I was basically obligated to do it. And losing 10% of my pre-tax income is far less painful than giving up meat, and vastly less painful than giving up meat + dairy.

It is interesting people have such different internal reactions.

Comment author: RobertWiblin 24 November 2014 06:44:34PM 1 point [-]

For what it's worth, I've found being vegetarian almost no effort at all. Being vegan is a noticeable inconvenience, especially cutting out the last bits of dairy (and that shows up in your examples, which are both about dairy).

Comment author: FiftyTwo 20 November 2014 11:40:20PM 1 point [-]

If I'm unsure what position I would be most suited for can I apply for several?

Comment author: RobertWiblin 21 November 2014 06:30:14PM 0 points [-]

Yes, you can apply for whatever combination of positions you like.

Comment author: Raemon 20 November 2014 06:07:42PM 1 point [-]

Presumably, it's necessary to be able to move to Oxford?

Comment author: RobertWiblin 21 November 2014 06:29:59PM 0 points [-]

If not immediately, then at some point, yes.

Comment author: Fluttershy 20 November 2014 05:14:33AM 1 point [-]

Um, hello there-- thank you for posting this! Would it be okay if I posted some constructive criticisms of 80,000 Hours here? I wanted to ask before posting because I didn't know if you would mind, and I wanted to assure you before posting anything negative-seeming that I wouldn't intend any criticism to be taken as being a veiled insult.

Comment author: RobertWiblin 20 November 2014 12:11:38PM 1 point [-]

Hey, this doesn't seem like the best location for it. Is there a post on the 80,000 Hours or EA blogs related to your criticism you could use?

In response to 2013 Survey Results
Comment author: RobertWiblin 24 March 2014 06:19:53PM 0 points [-]

"Finally, at the end of the survey I had a question offering respondents a chance to cooperate (raising the value of a potential monetary prize to be given out by raffle to a random respondent) or defect (decreasing the value of the prize, but increasing their own chance of winning the raffle). 73% of effective altruists cooperated compared to 70% of others - an insignificant difference."

Assuming an EA thinks they will use the money better than the typical other winner, the most altruistic thing to do could be to increase their chances of winning, even at the cost of a lower prize. Or maybe they like the person putting up the prize, in which case they would prefer it to be smaller.
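The trade-off described above can be sketched as a simple expected-value comparison. All the numbers below are hypothetical (the survey question didn't publish the actual probabilities or prize amounts), as is the `own_multiplier` parameter, which stands in for how much better the EA believes they would use the money than a typical winner:

```python
# Hypothetical expected-value sketch of the raffle cooperate/defect trade-off.
# All numbers are made up for illustration; none come from the actual survey.

def expected_value_to_ea(p_win, prize, own_multiplier):
    """Expected altruistic value of the raffle, by the EA's own lights:
    money they win themselves is worth `own_multiplier` times as much
    as the same money won by a typical respondent."""
    return p_win * prize * own_multiplier + (1 - p_win) * prize * 1.0

# Cooperate: baseline chance of winning, full prize.
coop = expected_value_to_ea(p_win=0.01, prize=60.0, own_multiplier=20.0)

# Defect: triple the chance of winning, but a smaller prize.
defect = expected_value_to_ea(p_win=0.03, prize=50.0, own_multiplier=20.0)

print(coop)    # 71.4
print(defect)  # 78.5 -- defecting comes out ahead here
```

With these (made-up) numbers, defecting has higher expected altruistic value despite shrinking the prize, which is the point of the comment: "defection" on this question isn't automatically evidence of less altruism.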

Comment author: SaidAchmiz 15 June 2013 02:44:27AM 0 points [-]

Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. it's OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes.

That has very little to do with whether Eliezer should make public declarations of things. Are you of the opinion that Eliezer does not share your view on this matter? (I don't know whether he does, personally.) If so, you should be attempting to convince him, I guess. If you think that he already agrees with you, your work is done. Public declarations would only be signaling, having little to do with maximizing good outcomes.

As for the other thing — I should think the fact that we're having some disagreement in the comments on this very post, about whether animal suffering is important, would be evidence that it's not quite as uncontroversial as you imply. I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one. Perhaps you should write one? I'd be interested in reading it.

Comment author: RobertWiblin 15 June 2013 11:07:27AM *  2 points [-]

"Public declarations would only be signaling, having little to do with maximizing good outcomes."

On the contrary, trying to influence other people in the AI community to share Eliezer's (apparent) concern for the suffering of animals is very important, for the reason given by David.

"I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one."

a) Less Wrong doesn't contain the best content on this topic.

b) Most of the posts disputing whether animal suffering matters are written by un-empathetic non-realists, so we would have to discuss meta-ethics and how to deal with meta-ethical uncertainty to convince them.

c) The reason has been given by Pablo Stafforini -- when I directly experience the badness of suffering, I don't only perceive that suffering is bad for me (or bad for someone with blonde hair, etc.), but that suffering would be bad regardless of who experienced it (so long as they actually had the subjective experience of suffering).

d) Even if there is some uncertainty about whether animal suffering is important, it should still be taken quite seriously; even if there were only a 50% chance that other humans mattered, it would be bad to lock them up in horrible conditions, or to signal through my actions to potentially influential people that doing so is OK.

Comment author: SaidAchmiz 15 June 2013 12:16:04AM -1 points [-]

I have to disagree on two points:

  1. I don't think that we should take this thesis ("suffering (and pleasure) are important where-ever they occur, whether in humans or mice") to be well-established and uncontroversial, even among the transhumanist/singularitarian/lesswrongian crowd.

  2. More importantly, I don't think Eliezer or people like him have any obligation to "lead the way", set examples, or be a role model, except insofar as it's necessary for him to display certain positive character traits in order for people to e.g. donate to MIRI, work for MIRI, etc. (For the record, I think Eliezer already does this; he seems, as near as I can tell, to be a pretty decent and honest guy.) It's really not necessary for him to make any public declarations or demonstrations; let's not encourage signaling for signaling's sake.

Comment author: RobertWiblin 15 June 2013 01:57:58AM 8 points [-]

Needless to say, I think 1 is settled. As for the second point - Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. it's OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes.
