The great moral philosopher Jeremy Bentham, founder of utilitarianism, famously said, 'The question is not, "Can they reason?" nor, "Can they talk?" but rather, "Can they suffer?"' Most people get the point, but they treat human pain as especially worrying because they vaguely think it sort of obvious that a species' ability to suffer must be positively correlated with its intellectual capacity.
[...]
Nevertheless, most of us seem to assume, without question, that the capacity to feel pain is positively correlated with mental dexterity - with the ability to reason, think, reflect and so on. My purpose here is to question that assumption. I see no reason at all why there should be a positive correlation. Pain feels primal, like the ability to see colour or hear sounds. It feels like the sort of sensation you don't need intellect to experience. Feelings carry no weight in science but, at the very least, shouldn't we give the animals the benefit of the doubt?
[...]
I can see a Darwinian reason why there might even be a negative correlation between intellect and susceptibility to pain. I approach this by asking what, in the Darwinian sense, pain is for. It is a warning not to repeat actions that tend to cause bodily harm. Don't stub your toe again, don't tease a snake or sit on a hornet, don't pick up embers however prettily they glow, be careful not to bite your tongue. Plants have no nervous system capable of learning not to repeat damaging actions, which is why we cut live lettuces without compunction.
It is an interesting question, incidentally, why pain has to be so damned painful. Why not equip the brain with the equivalent of a little red flag, painlessly raised to warn, "Don't do that again"?
[...] my primary question for today: would you expect a positive or a negative correlation between mental ability and ability to feel pain? Most people unthinkingly assume a positive correlation, but why?
Isn't it plausible that a clever species such as our own might need less pain, precisely because we are capable of intelligently working out what is good for us, and what damaging events we should avoid? Isn't it plausible that an unintelligent species might need a massive wallop of pain, to drive home a lesson that we can learn with less powerful inducement?
At the very least, I conclude that we have no general reason to think that non-human animals feel pain less acutely than we do, and we should in any case give them the benefit of the doubt. Practices such as branding cattle, castration without anaesthetic, and bullfighting should be treated as morally equivalent to doing the same thing to human beings.
Link: boingboing.net/2011/06/30/richard-dawkins-on-v.html
Imagine a being so vast and powerful that its theory of mind of other entities would itself be a sentient entity. If this being came across human beings, it might model them at such a level of resolution that every imagination it has of them would itself be conscious.
Just as we do not grant rights to our thoughts, or to the bacteria that make up a large part of our bodies, such an entity might be unable to grant existential rights to its thought processes, even if those processes are so detailed that its mere perception of a human being would incorporate a human-level simulation.
But even for us humans it might not be possible to account for every being in our ethical conduct. It might not be feasible to grant everything the rights it deserves. Nevertheless, the answer cannot be to abandon morality altogether, if only because human nature won't permit it. It is part of our preferences to be compassionate.
Our task must be to free ourselves . . . by widening our circle of compassion to embrace all living creatures and the whole of nature and its beauty.
— Albert Einstein
How do we solve this dilemma? Right now it's relatively easy to handle: there are humans, and then there is everything else. But even today, without uplifted animals, artificial intelligence, human-level simulations, cyborgs, chimeras and posthuman beings, it is increasingly hard to draw the line. Science is advancing rapidly, allowing us to keep alive people with severe brain injuries, or to save a premature fetus whose mother is already dead. Then there are the mentally disabled and other humans who are not neurotypical. We are also increasingly becoming aware that many non-human beings on this planet are far more intelligent and cognizant than we expected.
And remember: what will be the case in the future has already been the case in our not too distant past. There was a time when three different human species lived at the same time on the same planet, three intelligent species of the genus Homo, yet very different. Only some 22,000 years ago we, H. sapiens, were sharing this oasis of life with Homo floresiensis and Homo neanderthalensis.
How would we handle such a situation in the present day? At a time when we still haven't learnt to live together in peace. At a time when we are still killing even members of our own genus. Most of us are not even ready to become vegetarian in the face of global warming, although livestock farming accounts for about 18% of the planet's greenhouse gas emissions.
So where do we draw the line?
It's gotten about twice as interesting since I wrote that comment. E.g. I've learned a potentially very powerful magick spell in the meantime.
"Reality-Warping Elysium" was a Terence McKenna reference; I don't remember its rationale but I don't think it was a very good one.
I think I may overstate my case sometimes; I'm a very big Gigerenzer fan, and he's one of the most cited researchers in the heuristics and biases (H&B) literature. (Unlike most psychologists, Gigerenzer is a very competent statistician.) But unfortunately the researchers most cited by LessWrong types, e.g. Kahneman, are those whose research is of quite dubious utility. What's frustrating is that Eliezer knows of and appreciates Gigerenzer, and must know of his critiques of Kahneman and his (overzealous semi-Bayesian) style of research, yet he almost never cites that side of the H&B research. Kaj Sotala, a cognitive science student, has pointed out some of these things to LessWrong, and yet the arguments don't seem to have entered the LessWrong memeplex.
The two hallmarks of LessWrong are H&B and Bayesian probability: the latter is often abused, especially in the form of algorithmic probability, and decision theorists have shown that it's not as fundamental as Eliezer thought it was; and the H&B literature, like all psychology literature, is filled with premature conclusions, misinterpretations, questionable and contradictory results, and generally an overall lack of much that can be used to bolster rationality. (It's interesting and frustrating to see many papers demonstrating "biases" in opposite directions on roughly the same kind of problem, with only vague and ad hoc attempts to reconcile them.) If there's a third hallmark of LessWrong, it's microeconomics and game theory, especially Schelling's style of game theory, but unfortunately it gets relatively neglected, and the posts applying Schellingian and Bayesian reasoning to complex problems of social signaling hermeneutics are very few and far between.
I may have adjusted too much, but... Before I read a 1980s(?) version of Dawes' "Rational Choice in an Uncertain World" I had basically the standard LessWrong opinion of H&B, namely that it's flawed like all other science but you could basically take its bigger results for granted as true and meaningful; but as I read Dawes' book I felt betrayed: the research was clearly so flawed, brittle, and easily misinterpreted that there's no way building an edifice of "rationality" on top of it could be justifiable. A lot of interesting research has surely gone on since that book was written, but even so, that the foundations of the field are so shoddy indicates that the field in general might be, to a non-negligible degree, cargo cult science. (Dawes even takes a totally uncalled for and totally incorrect potshot at Christians in the middle of the book; this seems relatively innocuous, but remember that Eliezer's naive readers are doing the same thing when they try to apply H&B results to the reasoning of normal/superstitious/religious folk. It's the same failure mode: you have these seemingly solid results, so now you can clearly demonstrate how your enemies' reasoning is wrong and contemptible, right? It's disturbing that this attitude is held even by some of the most-respected researchers in the field.)
I remain stressed and worried about Eliezer, Anna, and Julia's new organization for similar reasons; I've seen people (e.g. myself) become much better thinkers due to hanging out with skilled thinkers like Anna, Steve Rayhawk, Peter de Blanc, Michael Vassar, et cetera; but this improvement had nothing to do with "debiasing" as such, and had everything to do with spending a lot of time in interesting conversations. I have little idea why Eliezer et al think they can give people anything more than social connections and typical self-help improvements that could be gotten from anywhere else, unless Eliezer et al plan on spending a lot of time actually talking to people about actual unsolved problems and demonstrating how rationality works in practice.
Finding a mentor, or at least some peers, and talking to them a lot seems to work somewhat; having high intelligence seems pretty important; not being neurotypical seems as important as high intelligence; and reading a ton seems very important, though I'm not sure it's as useful for people who don't start out schizotypal. I think that making oneself more schizotypal seems like a clear win, but I don't know how one would go about doing it; maybe doing a lot of nitrous or ketamine, but um, don't take my word for it. There's a fundamental skill of taking some things very seriously and other things not seriously at all that I don't know how to describe or work on directly. Yeah, I dunno; but one big thing that separates the men from the boys, and that is clearly doable, is just reading a ton of stuff and seeing how it's connected, and building lots of models of the world based on what you read, until you're skilled at coming up with off-the-cuff hypotheses. That's what I spend most of my time doing. I'm certain that getting good at chess helps your rationality skills, and I think Michael Vassar agrees with me; I definitely notice that some of my chess-playing subskills for thinking about moves and counter-moves get used more generally when thinking about arguments and counter-arguments. (I'm rated like 1800 or something.)
I blame the fact that Eliezer doesn't have a sequence talking about them.