
I failed at reading comprehension - I took it as "the minimum percentage of cooperation you're 90% confident in seeing" and provided one number instead of a range. ^^;;

So... 15-85 is what I meant, and sorry for the garbage answer on the survey.

As has been mentioned elsewhere in this conversation, that's a fully general argument - it can be applied to every change one might possibly make in one's behavior.

Let's enumerate the costs, rather than just saying "there are costs."

  • Money-wise, you save or break even.
  • It has no time cost in much of the US (most restaurants have vegetarian options).
  • The social cost depends on your situation - if you have people who cook for you, you have to explain the change to them. (In Washington state, this cost is tiny - people are understanding. In Texas, it is expensive.)
  • The mental costs are difficult to discuss in a universal way. I found them to be rather small in my own case; other people claim they are quite large. But "I don't want to change my behavior because changing behavior is hard" is not terribly convincing.

Your discounting of non-human life has to be rather extreme for "I will have to remind myself to change my behavior" to outweigh an immediate, direct, and calculable reduction in world suffering.

I agree with this point entirely - but at the same time, becoming vegetarian is such a cheap change in lifestyle (given an industrialized society) that you can have your cake and eat it too. Action, such as devoting time or money to animal rights groups, has to be balanced against other action, like helping humans; but that tradeoff doesn't apply very strongly to inaction - not eating meat.

You can come up with costs - social, personal, etc. to being vegetarian - but remember to weigh those costs on the right scale. And most of those costs disappear if you merely reduce meat consumption, rather than eliminate it outright.

What level of "potential" is required here? A human baby has a certain amount of potential to reach whatever threshold you're comparing it against - if it's fed, kept warm, not killed, etc. A pig also has a certain level of potential - if we tweak its genetics.

If we develop AI, then any given pile of sand has just as much potential to reach "human level" as an infant. I would be amused if improved engineering knowledge gave beaches moral weight (though not completely opposed to the idea).

Your proposed category - "can develop to contain morally relevant quality X" - tends to fail on the same edge cases as whatever morally relevant quality it's replacing.

Timeless decision theory, from what I understand of it, bears a remarkable resemblance to Kant's Categorical Imperative. I'm re-reading Kant right now (it's been half a decade), but my primary recollection is that the categorical imperative boils down to "make decisions not on your own behalf, but as though you were deciding for all rational agents in your situation."

Some related criticisms of EDT are weirdly reminiscent of Kant's critiques of other moral systems based on predicting the outcomes of your actions. I say "weirdly reminiscent of" rather than "reinventing" intentionally, but I try not to be too quick to dismiss older thinkers.