Effective altruism does not imply utilitarianism. Utilitarianism (on most definitions) does not imply hedonism. I would guess less than 10% of EAs (or of animal-focused EAs) would consider themselves thoroughgoing hedonists, of the kind that would endorse e.g. injecting a substance that would numb humans to physical pain or amputating human body parts, if this reduced suffering even a little bit. So on the contrary, I think the objection is relevant.

Thanks for the post. I agree that this is an important question. I do, however, have many disagreements.

  1. Many of us value things other than pleasure and the avoidance of pain when it comes to humans. Perhaps we ought to value these things also when it comes to non-human animals. It seems difficult to defend hedonism for non-human animals while rejecting it for humans. What is the relevant difference?
  2. The obvious alternative is to take animals out of the equation altogether with cultured meat. Lewis Bollard makes this point explicitly: "I think when people start to talk about completely re-engineering the minds of chickens so that they’re essentially brain-dead and don’t realise the environment they’re in, it just seems like a better option to only grow the meat part of the bird and not grow the mind at all." The solutions you propose, such as to "identify and surgically or chemically remove the part of the brain that is responsible for suffering", are highly speculative given current science. One of your examples is literally science fiction. It seems to me that cultured meat would be easier to achieve technologically, while being similar or superior in consumer acceptance. Marie Gibbons claims that we might "see clean meat sold commercially within a year". Metaculus thinks the probability of a restaurant serving cultured meat by 2021 is 75%.
  3. For a short argument with two major flaws, I found the tone unnecessarily dismissive. You had a short conversation about your new idea at EA Global and concluded that the idea is not being widely pushed by others who work in this area. At that point it would have been appropriate to think of some obvious counter-arguments. Instead, in this post I see a lot of speculation about which impure motives and fallacies in reasoning could explain why your idea hasn't been adopted. Some quotes: "This emphasis is quite understandable emotionally.", "this kind of anthropomorphizing", ""What do you mean, cut off baby chicken's legs so it does not have leg pain later? You, monster!"", "Because most people do not truly care about reducing animal suffering, they care about a different metric altogether, a visible human proxy for animal suffering that they find immediately relatable.", "actual suffering".

Hundreds of economists study this topic. Why not just read some books or papers?

This Berkeley Dad TRIPLED His Karma With One Weird Trick (Mods HATE Him!):

  1. Go To Your Profile
  2. Press 'Ctrl' And '+' At The Same Time
  3. Repeat Until You Attain Desired Karma Level

This is a good start, but you really need to implement differential kerning. Lofty words like 'Behoove' and 'Splendiferous' must be given the full horizontal space commanded by their dignity.

I've updated the style to work at the new URL lesswrong.com.

I strongly agree. Despite appearances, I wouldn't say someone with 10 bitcoin today has "won" at all. Winning means getting more of what you ultimately care about, like goods and services. You only win if you convert your bitcoin into goods or dollars at the right time. I am reminded of "buy low, sell high": an empty phrase that can sound deceptively like good investment advice.

Okay, this project exceeded my expectations. Congratulations to the winners!

Ord (2014) is all I'm aware of, and it's unpublished.
