
Vaniver comments on Don't ban chimp testing - Less Wrong Discussion

15 Post author: PhilGoetz 01 October 2011 05:17PM


Comments (105)


Comment author: Vaniver 01 October 2011 08:05:00PM 6 points [-]

Chimps are morally relevantly similar to human babies and toddlers

This is only true for a small subset of moralities.

Comment author: Jack 01 October 2011 08:46:31PM *  0 points [-]

But true for a large subset of Less Wrong posters' moralities.

Edit: Why downvotes?

Comment author: DanPeverley 01 October 2011 10:19:22PM 6 points [-]

You made a statement with undue confidence, and the votes would appear to indicate that at the very least, this large subset is not monitoring this thread.

Comment author: Jack 01 October 2011 10:22:36PM *  -2 points [-]

Last I checked utilitarians of various sorts were pretty common in these parts.

Comment author: Nornagest 01 October 2011 10:35:14PM 7 points [-]

Utilitarian ethics don't necessarily imply the moral equivalence of a chimp to a toddler. That's more a question of personhood criteria, which you can easily express in utilitarian or deontological terms.

I do think LWers would be a lot more likely to make that equivalence, but I suspect that's more because for various reasons we tend to think of personhood mainly in terms of cognition rather than pattern-matching against appearance and behavior.

Comment author: Jack 01 October 2011 10:51:30PM *  0 points [-]

They aren't necessarily related, but for lots of reasons animal rights is associated with utilitarianism. In particular, utilitarianism tends to recommend a much lower threshold of intelligence for an animal to be due our moral consideration- since the only requirement is experiencing pleasure/pain or having desires. Personhood is usually an epiphenomenal category in utilitarianism- referring to whatever class of entities we should be morally concerned with. It is often an essential category in deontology and its confines much stricter- see Kantianism. Utilitarianism and expanding the sphere of moral concern are historically associated as well- Jeremy Bentham, Peter Singer etc. It is not unreasonable to infer from the popularity of utilitarianism here that animal rights is also popular.

Your reason is a good one too, though. And I'm not speaking from total ignorance here, either. I've been around these parts for a few years and I've seen plenty of upvoted comments about animal rights and had one or two discussions that bear on the subject. Someone is welcome to make a poll but I don't really think making the observation is worthy of downvotes.

Comment author: Emile 02 October 2011 08:12:40AM 7 points [-]

For what it's worth, I don't care that much about animal rights; I think humans mostly care about humans; when they care about animals it's as a side effect of virtues whose primary purpose is to facilitate cooperation and peace between humans (and caring about animals is a good way of signaling those virtues).

(and I don't think intelligence and "personhood", whatever that is, have that much to do with each other.)

Comment author: [deleted] 04 October 2011 03:12:05AM 1 point [-]

when they care about animals it's as a side effect of virtues whose primary purpose is to facilitate cooperation and peace between humans (and caring about animals is a good way of signaling those virtues).

Hmm. Last I checked, I do care about animals, regardless of signalling -- indeed, I often take a bit of a signal-hit when people see me help beetles safely across a street, or spend time cozying up to an orangutan through the glass at the zoo (thereby rendering him less-viewable by the other patrons, even though he's primarily responding to a familiar presence and displays no interest in anyone else).

There's very little sense of signalling virtue -- I'm not a vegan or vegetarian, I don't adhere to any religion with specific rules about the treatment of animals; I certainly don't find it to enhance my cooperation with other people (some people admire it, but an awful lot of them find it a bit weird or kooky).

Comment author: Emile 04 October 2011 07:49:07AM 1 point [-]

I'm a bit fuzzy about what counts as signalling and what doesn't, but I think it covers more cases than those involving conscious planning.

But anyway, I'd say you care about animals because you're a kind person, but that humans tend to be kind mostly because evolutionarily it's been a benefit by facilitating cooperation and reciprocation. I don't know whether evolution just implemented "be kind to everything" instead of "be kind to humans" because it took fewer lines of code (kindness to animals as spandrel), or whether kindness to animals was deliberately implemented because of its signaling value (it may not be hard-coded, but just learnt as children).

(For what it's worth, I tend to save small bugs and throw them out of the window instead of killing them, which my wife would prefer. This device is convenient for safely and easily catching bugs, and observing them!)

Comment author: [deleted] 04 October 2011 08:02:19AM 0 points [-]

I had misunderstood your initial comment -- it sounded to me like you were saying humans don't really care about animals, but often find it desirable to signal that they do. Thanks for clarifying!

Comment author: pedanterrific 04 October 2011 03:28:43AM *  1 point [-]

I learned something new in the process of finding you a bit weird and kooky, and thereby no longer do. So, upvoted.

(I wasn't sure if beetles even had brains, which seemed somehow relevant to their moral standing, so I looked it up- and what do you know, nociception has been demonstrated in insects.)

(And beetles do have brains. Sort of.)

Comment author: [deleted] 04 October 2011 03:35:58AM 0 points [-]

Yeah, insects have brains. And pain. Many have some degree of personality differentiation, even if the space of possible variance is pretty narrow compared to humans. I certainly can't prevent most of the insects of the world from experiencing what is, to them, a hideously painful death (and indeed, have sometimes hastened that process for crickets when feeding them to pet mantises), but when I see a little dermestid beetle crawling around where it'll certainly be hit by a car, my impulse is to save it. To the extent I'm interested in justifying that, it's that I can make a difference here and now for this organism, and want to do so.

Comment author: Larks 03 October 2011 04:10:01PM 2 points [-]

About 35%, two years ago.

Comment author: Raemon 01 October 2011 08:19:59PM 0 points [-]

True, but if it's not the area in which Phil judges moral relevance, then I want to know why he thinks chimps and humans are different.

I was going to use the comparison "Humans born mentally handicapped to the point that their cognitive function is equivalent to chimps." (This avoids the potential issue of "babies grow up to be average humans.")

If you're not willing to advocate testing on humans who are similar to chimps, I want to know why.

Comment author: wedrifid 02 October 2011 11:04:46AM 3 points [-]

I was going to use the comparison "Humans born mentally handicapped to the point that their cognitive function is equivalent to chimps." (This avoids the potential issue of "babies grow up to be average humans.")

Along similar lines I was going to propose that it should be considered moral to test on "Low IQ Jocks as soon as they finish High School". After all they have finished their glory years and are different to me in similar ways to how I am different to a chimpanzee. But I decided not to post because I decided it was dangerous to go anywhere near a space including "different" and "less moral consideration".

Comment author: Raemon 02 October 2011 05:09:05PM 3 points [-]

I agree that it's dangerous, but I think any remotely productive use of this thread is going to have to. If we're not asking that question, we're not asking the right questions.

Comment author: PhilGoetz 01 October 2011 09:20:58PM *  2 points [-]

True, but if it's not the area in which Phil judges moral relevance, then I want to know why he thinks chimps and humans are different.

I do think chimps and humans are different; but most members of PETA probably believe they are more different than I do. I think you're reading positions into my post that aren't there.

If you're not willing to advocate testing on humans who are similar to chimps, I want to know why.

I advocated alternatively testing on humans like myself.

Comment author: Raemon 02 October 2011 05:13:46PM 1 point [-]

I apologize, I was focusing on a lot of the comments and missed that you had made that point.

I don't currently know what the rules are for human testing. I think it should be theoretically possible for humans to submit themselves for whatever testing they want, but I also think that as soon as that market exists, there will be those who attempt to exploit it in ways I'd consider unethical. That's a complex issue that I don't have an opinion on yet.

Comment author: Vaniver 01 October 2011 08:24:26PM 2 points [-]

I was going to use the comparison "Humans born mentally handicapped to the point that their cognitive function is equivalent to chimps." (This avoids the potential issue of "babies grow up to be average humans.")

It is not clear to me how that avoids the issue of including the future.

Comment author: Raemon 01 October 2011 08:38:25PM 0 points [-]

It avoids the issue of including the future of particular people. Some people care about that, others don't, but it reduces the range of reasons you might object to the comparison.

From what I know, I personally weight chimps as maybe 1/3 times as morally significant as humans. I'm sometimes willing to sacrifice humans to save other humans, and I'd sacrifice a chimp to save about 1/3 as many humans. (I'd also sacrifice a human to save 3x as many chimps). This is mostly an intuitive belief. I can imagine myself changing the number to something as low as 1/10th, maybe even as low as 1/100th (I don't expect to drop it that far).

It's important to note, though, that I DON'T sacrifice humans on a 1-for-1 trade off without their consent. I don't want to live in a world where someone can sacrifice me without me having a say in the matter. There may be cases where I'm willing to consent to sacrifice. I'm not sure if I can identify them right now.

There are still circumstances where, while pissed, I'd grudgingly accept that the Mastermind doing the sacrificing was right to do so. (If they had to divert a train that was going to kill a lot of people, for example. Probably more than 5 though). The number of lives saved to be worth it also has to consider how perfect the information is, and the likelihood that the sacrificer isn't running on damaged hardware.

So theoretically, I'm okay with sacrificing chimps to save arbitrarily large numbers of people, but because the chimps CAN'T consent, I'd have to be willing to sacrifice somewhere between 1/3 and 1/10th as many humans to accomplish the same thing.

Comment author: [deleted] 02 October 2011 02:32:29AM 4 points [-]

I read your post and tried to come up with an 'exchange rate' of my own, and it was much more difficult to do than I thought it would be before I tried it. I thought that it would be along the lines of thousands/hundreds of thousands of chimps == 1 human, as I couldn't conceive of letting one human die in exchange for any smaller number of chimps, but then I realized that it would be much easier to think of dead chimps as an opportunity cost, and was just reacting with instinctual revulsion. This is assuming that dead chimps can't be used (to the same extent) as live chimps to aid in medical research.)

So, what is the current value that we place on the life of a chimp? If after m (successful) studies each using n chimps, we can save l human lives, then (assuming in worst-case that each study kills n chimps): (mn)(The value of a chimp life in utilons) = l(The value of a human life in utilons) So: (mn)/l = The value of a human life/The value of a chimp life

This estimate is going to be higher than in real life, as we don't kill all the chimps used in a typical study. The difficulty would be in quantifying the number of studies necessary to save a human life, or the number of lives saved by a particular discovery.

However, thinking this way, I would place my 'exchange rate' on the order of 200-300 chimps to 1 human life; if necessary, we should let 1 human die so that 300 chimps might live so that their value as test subjects could be used to save other humans.

I just don't think chimps are intelligent enough to have significant lives on the same order of magnitude as that of a human's life; I think that 1/3 or 1/10th of a human's life is much too high a value.

Comment author: pedanterrific 02 October 2011 02:58:13AM *  2 points [-]

However, thinking this way, I would place my 'exchange rate' on the order of 200-300 chimps to 1 human life; if necessary, we should let 1 human die so that 300 chimps might live so that their value as test subjects could be used to save other humans.

I just don't think chimps are intelligent enough to have significant lives on the same order of magnitude as that of a human's life; I think that 1/3 or 1/10th of a human's life is much too high a value.

Have you corrected for your estimate of p(chimps are uplifted in the next fifty years)?

Edit: Okay, if it makes a difference I only realized the Planet of the Apes reference after I posted, I was making a serious point about the difference between human toddlers and chimps as it relates to the possibility of future personhood.

Comment author: [deleted] 03 October 2011 01:38:01PM *  1 point [-]

I hadn't considered the possibility that chimps could/would be uplifted in the near future (50 years or mean chimp lifetime is a good rule of thumb); I think it's entirely possible that the technology would be there, but I don't understand the motivation for wanting to uplift chimps. I guess the reasoning is that more sapient beings == more interesting conversations, more math proofs, more works of art, so more Fun, but I'm not sure that we would want to uplift chimps if we had the technology to do so.

If we had the technology to uplift a species, I think it would be likely that we had the technology to have FAI or uploaded human brains, which would be a more efficient way to have more sapient beings with which to talk. Is it immoral to leave other species the way they are if transhumanism or FAI take off?

Comment author: JoshuaZ 03 October 2011 01:43:48PM 1 point [-]

If we had the technology to uplift a species, I think it would be likely that we had the technology to have FAI or uploaded human brains, which would be a more efficient way to have more sapient beings with which to talk.

This seems strange to me. Can you expand on your reasoning? Uplifting seems to me to be potentially a lot simpler. The tech level needed to identify the genes that are most responsible for human intelligence is not that much beyond our current one. And the example species you've used, chimps, are close enough to humans that for at least some of those genes, simply inserting them into the chimp genome would likely substantially increase their intelligence.

Uplifting seems orders of magnitude easier than uploading at least.

Comment author: [deleted] 03 October 2011 03:50:40PM *  1 point [-]

I'll concede that you are probably right about uplifting being easier.

This was my reasoning: Properly identifying which gene encodes for what, and usefully altering genes to express a phenotype as complex as human-level intelligence, would require (in any reasonable amount of time) at least a narrow AI to process and refine the huge amount of data in the half-chromosome or so that separates us from chimps. Chimps are close to humans, yes, but altering their DNA to uplift them seems to me to be the type of problem that would either take years of Manhattan-Project level dedication with the technology we have right now, or some sort of AI to do the heavy lifting for us.

I think I'm way out of my depth here, though, as I don't know enough about genetic engineering or AI research to know with confidence which would be easier.

[Edited for typos.]

Comment author: ahartell 01 October 2011 08:46:22PM *  0 points [-]

If the following is very wrong or morally abhorrent, please correct me rather than downvote. I'm trying to work it out for myself and what I came up with seems intuitively incorrect. It is also based on the idea that the mentally handicapped have chimp-like intelligence, which I don't know to be true but is implied by your comment.

So basically, what makes us homo sapiens is our ancestry, but what makes us people is our intelligence. An alien with a brain that somehow worked exactly equivalently to ours would be our equal in every important way, but an alien with a chimp-like intelligence (one that for our purposes would essentially BE a chimp) wouldn't. It would deserve sympathy, and it would be wrong to hurt it for no reason, but I wouldn't value an alien-chimp's life as highly as a human's or an alien-human's. So it seems to me that it follows that the mentally handicapped (if they indeed have chimp-like intelligences) don't in fact deserve more moral consideration than alien-chimps or earth-chimps (ignoring their families, which presumably have normal intelligences and would very much not approve of their use in experiments). If there are no safer ways to get the same results as we do from chimp studies, which I believe to be the case, then the best option we have for now is to continue studying them. Studying the mentally handicapped would be similarly bad-but-acceptable, but I wouldn't advocate for it since it would be so unlikely to ever occur. Testing on the mentally handicapped seems very wrong but only for "speciesist" reasons as far as I can tell.

Comment author: Jack 01 October 2011 09:02:59PM 0 points [-]

If you think our moral concern should follow intelligence then it follows that chimps and the mentally handicapped are not morally equal to humans of normal intelligence. Depending how much differing intelligence results in differing moral consideration this could justify chimp and mentally handicapped testing.

But while some level of intelligence does seem to be necessary for an animal to suffer in a way we find morally compelling it does not follow that abusing the slightly less intelligent is at all justified. It is not at all obvious that the mentally handicapped or chimpanzees suffer less than humans of normal intelligence. Nor is it obvious mentally handicapped humans and chimpanzees don't differ in this regard. But intelligence is almost certainly not the same thing as moral value. There are possibly entities that are very intelligent but for which we would have little moral regard.

Comment author: ahartell 01 October 2011 09:09:04PM *  0 points [-]

Right, that makes sense. I guess if something can suffer and notice it is suffering and wish it weren't suffering, then it should be as morally valuable as a person... maybe.

But while some level of intelligence does seem to be necessary for an animal to suffer in a way we find morally compelling it does not follow that abusing the slightly less intelligent is at all justified.

I think dogs are "capable of suffering in a way I find morally compelling" though, and I would sacrifice probably a lot of dogs to save myself or another human. Is that just me being heartless?

There are possibly entities that are very intelligent but for which we would have little moral regard.

I mentioned that the hypothetical aliens would have brains that work just like ours, not that they would be just as intelligent.

Comment author: Jack 01 October 2011 09:12:03PM *  0 points [-]

Your method should be to figure out what it is about humans that makes them morally valuable to you and then see if those traits are found in the same degree elsewhere.

Comment author: ahartell 01 October 2011 09:13:35PM 0 points [-]

I agree.

Comment author: Raemon 03 October 2011 04:43:17PM 0 points [-]

It is also based on the idea that the mentally handicapped have chimp-like intelligence, which I don't know to be true but is implied by your comment.

I specified "people mentally handicapped to the point that they are equivalent to chimps." There's a lot of ways one can be mentally handicapped.

For the record, I'm a vegetarian. I measure morality based off median suffering/life satisfaction. Intelligence is only valuable insofar as it can improve those metrics, and certain kinds of intelligence probably result in a wider and deeper source of life satisfaction.

I don't think chimps contribute dramatically to universal flourishing, but I'm not sure that the average human does either. I think that it's best to have a rule "don't harm sentient creatures", but to occasionally turn a blind eye to certain actions that benefit us in the long term.

i.e. the guy who invented the smallpox vaccine did something horribly unethical, which we should not allow on a regular basis, especially not today when we have more options for testing. Occasionally, doing something like that is necessary for the greater good, but most people who think their actions are sufficiently "greater good" to break the rules are wrong, so we need to discourage it in general.

Comment author: JoshuaZ 03 October 2011 05:10:51PM 2 points [-]

"don't harm sentient creatures"

This is a nice rule in principle, but in practice becomes tough. First, how do we define sentience? Second, what constitutes don't harm? Is there an action/in-action distinction here? If is it morally unacceptable to let humans in the developing world starve do we have a similar moral obligation to chimps? If not, why not?

the guy who invented the smallpox vaccine did something horribly unethical, which we should not allow on a regular basis, especially not today when we have more options for testing

I'm not sure what you are talking about here. Can you expand?

Comment author: Raemon 03 October 2011 09:07:18PM *  2 points [-]

This is a nice rule in principle, but in practice becomes tough.

Oh in practice it's definitely tough. Optimal morality is tough. I judge myself and other individuals on the efforts they've made to improve from the status quo, not on how far they fall short of what they might hypothetically be able to accomplish with infinite computing power.

In my ideal world, suffering doesn't happen, period, except to the degree that some amount of suffering is necessary to bring about certain kinds of happiness. (i.e. everyone, animals included, gets exactly as much as they need, nothing more.)

I don't know to what extent that's actually possible without accidentally wreaking havoc on the ecosystem and causing all kinds of problems, and in the meantime it's easier to get public support for helping other humans anyway.

Smallpox

I'm working from old memories from middle school, and referencing what is probably a bit of a "folk version" of the real thing, but my recollection was that Edward Jenner tested his smallpox vaccine on some kid, then gave the kid a full dose of smallpox without his consent.

SOMEBODY had to try that at some point, and I think Jenner had reasonable evidence, but I don't think that sort of thing would fly today.

Comment author: asr 03 October 2011 10:36:16PM 1 point [-]

SOMEBODY had to try that at some point, and I think Jenner had reasonable evidence, but I don't think that sort of thing would fly today.

I agree it wouldn't pass muster today, but that may just be because we aren't facing a disease as deadly as smallpox.

There's a good moral case for experimenting on somebody without their consent IF:

1) Doing the experiment has a high probability of getting a cure into widespread use quickly

2) Getting consent for an equivalent experiment would be difficult or time-consuming

3) The disease is prevalent and serious enough that a delay to find a consenting subject is a bigger harm than the involuntary experiment.

Comment author: Raemon 03 October 2011 10:43:29PM 0 points [-]

Agreed.

Comment author: wedrifid 03 October 2011 08:51:52PM *  1 point [-]

I think that it's best to have a rule "don't harm sentient creatures"

Unless they have it coming! I consider it unethical to not harm sentient creatures in certain circumstances.