Comment author: Yvain 23 November 2013 06:55:52PM 11 points [-]

I just realized I forgot a very important question I really want to know the answer to!

"What is your 90% confidence interval for the percent of people you expect to answer 'cooperate' on the prize question?"

I've added this to the survey so that people who take it after this moment can answer. If you've taken the survey already, feel free to record your guess below (if you haven't taken the survey, don't read the responses to this comment).

Comment author: threewestwinds 01 December 2013 09:11:51PM *  1 point [-]

I failed at reading comprehension - took it as "the minimum percentage of cooperation you're 90% confident in seeing" and provided one number instead of a range. ^^;;

So... 15-85 is what I meant, and sorry for the garbage answer on the survey.

Comment author: Jiro 30 July 2013 03:17:36AM *  1 point [-]

You can come up with costs - social, personal, etc. - to being vegetarian, but remember to weigh those costs on the right scale.

By saying this, you're glossing over the very reason why becoming vegetarian is not a cheap change. Human beings are wired so that they cannot ignore having to make many minor decisions or face many minor changes, and because such things cannot be ignored, being vegetarian actually carries a high cost: being mentally nickel-and-dimed over and over again. It's a cheap change in the sense that you can do it without paying lots of money or spending lots of time, but that isn't sufficient to make the choice cheap in every meaningful sense.

Or to put it another way, being a vegetarian "just to try it" is like running a shareware program that pops up a nag screen every five minutes and occasionally forces you to type a random phrase in order to keep running. Sure, it's light on your pocketbook, doesn't take much time, and reading the nag screens and typing the phrases isn't difficult, but that's beside the point.

Comment author: threewestwinds 31 July 2013 07:05:03PM 0 points [-]

As has been mentioned elsewhere in this conversation, that's a fully general argument - it can be applied to every change one might possibly make in one's behavior.

Let's enumerate the costs, rather than just saying "there are costs."

  • Money-wise, you save or break even.
  • It has no time cost in much of the US (most restaurants have vegetarian options).
  • The social cost depends on your situation: if people cook for you, you have to explain the change to them (in Washington state this cost is tiny, since people are understanding; in Texas it is expensive).
  • The mental cost is difficult to discuss in a universal way. I found it to be rather small in my own case; other people claim it is quite large. But "I don't want to change my behavior because changing behavior is hard" is not terribly convincing.

Your discounting of non-human life has to be rather extreme for "I will have to remind myself to change my behavior" to outweigh an immediate, direct, and calculable reduction in world suffering.

Comment author: atucker 29 July 2013 03:41:50AM 3 points [-]

All I can say is that I don't understand why intelligence is relevant for whether you care about suffering.

Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering.

As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in expected value between alleviating human and animal suffering is huge: the difference in potential impact on the future between a suffering human and a non-suffering human is massive compared to that between a suffering animal and a non-suffering animal.

Basically, it seems like alleviating one human's suffering has more potential to help the far future than alleviating one animal's suffering. A human who is too incapacitated to, say, deal with x-risk might become helpful once that suffering is alleviated, while an animal is still not going to be consequential on that front.

So my opinion winds up being something like "We should help the animals, but not now, or even soon, because other issues are more important and more pressing".

Comment author: threewestwinds 30 July 2013 01:51:04AM 1 point [-]

I agree with this point entirely - but at the same time, becoming vegetarian is such a cheap change in lifestyle (given an industrialized society) that you can have your cake and eat it too. Action - such as devoting time or money to animal rights groups - has to be balanced against other action - helping humans - but that doesn't apply very strongly to inaction - not eating meat.

You can come up with costs - social, personal, etc. - to being vegetarian, but remember to weigh those costs on the right scale. And most of those costs disappear if you merely reduce meat consumption rather than eliminating it outright.

Comment author: Vaniver 29 July 2013 01:59:39AM 7 points [-]

If we use cognitive enhancements on animals, we can turn them into highly intelligent, self-aware beings as well.

And then arguments A through E will not argue for treating the enhanced animals differently from humans.

And the argument from potentiality would also prohibit abortion or experimentation on embryos.

It would make the difference between abortion and infanticide small. It does seem to me that the arguments for allowing abortion but not infanticide are weak, and that the most convincing one hinges on legal convenience.

I was thinking about including the argument from potentiality, but then I didn't because the post is already long and because I didn't want to make it look like I was just "knocking down a very weak argument or two".

I think this is a hazard for any "Arguments against X" post; the reason X is controversial is generally because there are many arguments on both sides, and an argument that seems strong to one person seems weak to another.

Comment author: threewestwinds 30 July 2013 01:29:37AM *  1 point [-]

What level of "potential" is required here? A human baby has a certain amount of potential to reach whatever threshold you're comparing it against - if it's fed, kept warm, not killed, etc. A pig also has a certain level of potential - if we tweak its genetics.

If we develop AI, then any given pile of sand has just as much potential to reach "human level" as an infant. I would be amused if improved engineering knowledge gave beaches moral weight (though not completely opposed to the idea).

Your proposed category - "can develop to contain morally relevant quality X" - tends to fail on the same edge cases as whatever morally relevant quality it replaces.

Comment author: PhilGoetz 16 July 2013 02:53:14PM 6 points [-]

I'd be interested in any specific examples of things AI workers can learn from philosophy at the present time. There has been at least one instance in the past: AI workers in the 1960s should have read Wittgenstein's discussion of games to understand a key problem with building symbolic logic systems that have an atomic symbol correspond to each dictionary word. But I can't think of any other instances.

Comment author: threewestwinds 27 July 2013 09:58:05AM 2 points [-]

Timeless decision theory, from what I understand of it, bears a remarkable resemblance to Kant's Categorical Imperative. I'm re-reading Kant right now (it's been half a decade), but my primary recollection is that the categorical imperative boils down to "make decisions not on your own behalf, but as though you decided for all rational agents in your situation."

Some related criticisms of EDT are weirdly reminiscent of Kant's critiques of other moral systems based on predicting the outcomes of your actions. I say "weirdly reminiscent of" rather than "reinventing" intentionally, but I try not to be too quick to dismiss older thinkers.