Username comments on Open thread, 11-17 August 2014 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (268)
My brain spontaneously generated an argument for why killing all humans might be the best way to satisfy my values. As far as I know it's original; at any rate, I don't recall seeing it before. I don't think it actually works, and I'm not going to post it on the public internet. I'm happy to just never speak of it again, but is there something else I should do?
Find out how your brain went wrong, with a view to not going so wrong again.
Playing devil's advocate here, the original poster is not that wrong. Ask any other living species on Earth and they will say their life would be better without humans around.
Apart from the fact that they wouldn't say anything (because generally animals can't speak our languages ;)), nature can be pretty bloody brutal. There are plenty of situations in which our species' existence has made the lives of other animals much better than they would otherwise be. I'm thinking of veterinary clinics that often perform work on wild animals, pets that don't have to worry about predation, that kind of thing. Also, I think there are probably a lot of species that have done alright for themselves since humans showed up; animals like crows, and their equivalents in similar niches around the world, seem to do quite well in urban environments.
As someone who cares about animal suffering, is sympathetic to vegetarianism and veganism, and is even somewhat sympathetic to more radical ideas like eradicating the world's predators, I think that humanity represents a very real possibility of decreasing suffering, including animal suffering, in the world, especially as we grow in our ability to shape the world the way we choose. Certainly, I think that humanity's existence provides real hope in this direction, remembering that the alternative is for animals to continue to suffer on nature's whims perhaps indefinitely, rather than on ours perhaps temporarily.
Never thought of it this way. Guess in the long term it makes sense. So far, though...
Let's ask a cockroach, a tapeworm, and a decorative-breed dog :-)
Humans are leading to the extinction of many species. Given the sorts of things that happen to them in the wild, this may be an improvement.
This is too distant from the original argument to be an argument for it. I'm just playing devil's advocate recursively.
It seems I was unclear. I have no intention of attempting to kill all humans. I'm not posting the argument publicly because I don't want to run the (admittedly small) risk that someone else will read it and take it seriously. I'm just wondering if there's anything I can do with this argument that will make the world a slightly better place, instead of just not sharing it (which is mildly negative to me and neutral to everyone else - unless I've sparked anyone's curiosity, for which I apologise).
What values could possibly lead to such a choice?
Hardcore negative utilitarianism?
(Pretty cute wind-up on Smart's part; grab Popper's argument that to avoid totalitarianism we should minimize pain, not maximize happiness, then turn it around on Popper by counterarguing that his argument obliges the obliteration of humanity whenever feasible!)
Values that rank animals as highly, or nearly as highly, as humans.
Not if you account for the typical suffering in nature. Humans remain the animals' best hope of ever escaping that.
It might not just be about suffering-- there's also the plausible claim that humans lead to less variety in other species.
I feel like that's a value that only works because of scope insensitivity. If the extinction of a species is as bad as killing x individuals, then whenever the size of the population is not near x, one of those two harms will dominate the other. But people still think about them as if both were significant.
Why does that, um, matter?
I can see valuing animal experience, but that's all about individual animals. Species don't have moral value, and nature as a whole certainly doesn't.
A fair number of people believe that it's a moral issue if people wipe out a species, though I'm not sure if I can formalize an argument for that point of view. Anyone have some thoughts on the subject?
Would you say the same about groups of humans? Is genocide worse than killing an equal number of humans but not exterminating any one group?
I suspect that the reason we have stronger prohibitions against genocide than against random mass murder of equivalent size is not that genocide is worse, but that it is more common.
It's easier to form, motivate, and communicate the idea "Kill all the Foos!" (where there are, say, a million identifiable Foos in the country) than it is to form and communicate "Kill a million arbitrary people."
I suspect that's not actually true. The communist governments killed a lot of people in a (mostly) non-genocidal manner.
The reason we have stronger prohibitions against genocide is the same reason we have stronger prohibitions against the swastika than against the hammer and sickle. Namely, the Nazis were defeated and no longer able to defend their actions in debates while the communists had a lot of time to produce propaganda.
Wait, what? Did considering genocide more heinous than regular mass murder only start with the end of WWII?
Alternatively, killing a million people at semi-random (through poverty or war) is less conspicuous than going after a defined group.
I don't see why it should be.
Do particular cultures or, say, languages, have any value to you?
Do particular computer systems or, say, programming languages, have any value to you?
Compare your attitude to these two questions, what accounts for the difference?
Nailed it. By which I mean, this is the standard argument. I'm surprised nobody brought it up earlier.
... one way or another.
Given how long they don't live, I'd be satisfied with just preventing any further generations.
Let's suppose for a moment that's what Username meant. If Username deems other beings to be more valuable than humans, then Username, as a human, will have a hard time convincing hirself of pursuing hir own values. So I guess we're safe.
I'm not going to say what the values are, beyond that I don't think they would be surprising for a LWer to hold. Also, yes, you're safe.
But it seems like you started with disbelief in X, and you were given an example of X, and your reaction should be to now assume that there are more examples of X; and it looks like instead, you're attempting to reason about class X based on features of a particular instance of it.
I thought it was clear that "Username deems other beings to be more valuable than humans" was a particular instance of X, not a description of the entire class.
You should consider that the problem may not be in the argument, but in your beliefs about the values you think you have.
I have considered that, and I don't think it's a relevant issue in this particular case.
I'd say not to worry about it unless it's a repetitive thought.
Reform yourself. Killing all humans is axiomatically evil in my playbook, so either (a) you are reasoning from principles which permit Mark!evil (which makes you Mark!evil, and puts you on my watch-list), or (b) you made a mistake. It's probably the latter.
Do you care about it? It sounds like you're responding appropriately (though IMO it's better that such arguments be public and be refuted publicly, as otherwise they present a danger to people who are smart or lucky enough to think up the argument but not the refutation). If the generation of that argument, or what it implies about your brain, is causing trouble with your life then it's worth investigating, but if it's not bothering you then such investigation might not be worth the cost.
This is the sort of thing I'm thinking about. The argument seems more robust than the obvious-to-me counterargument, so I feel that it's better to just not set people thinking about it. I'm not sure though.
Since you won't be able to kill all humans and will eventually get caught and imprisoned, the best move is to abandon your plan, according to utilitarian logic.
I'm not so sure this is obvious. How much damage can one intelligent, rational, and extremely devoted person do? Certainly there are a few people in positions that obviously allow them to wipe out large swaths of humanity. Of course, getting to those positions isn't easy (yet it's still feasible given an early enough start!). But I've thought about this for maybe two minutes; how many non-obvious ways would there be for someone willing to put in decades?
The usual way to rule them out without actually putting in the decades is by taking outside view and pointing at all the failures. But nobody even seems to have seriously tried. If they had, we'd have at least seen partial successes.