Richard Dawkins on vivisection: "But can they suffer?"

14 XiXiDu 04 July 2011 04:56PM

The great moral philosopher Jeremy Bentham, founder of utilitarianism, famously said, "The question is not, 'Can they reason?' nor, 'Can they talk?' but rather, 'Can they suffer?'" Most people get the point, but they treat human pain as especially worrying because they vaguely think it sort of obvious that a species' ability to suffer must be positively correlated with its intellectual capacity.

[...]

Nevertheless, most of us seem to assume, without question, that the capacity to feel pain is positively correlated with mental dexterity - with the ability to reason, think, reflect and so on. My purpose here is to question that assumption. I see no reason at all why there should be a positive correlation. Pain feels primal, like the ability to see colour or hear sounds. It feels like the sort of sensation you don't need intellect to experience. Feelings carry no weight in science but, at the very least, shouldn't we give the animals the benefit of the doubt?

[...]

I can see a Darwinian reason why there might even be a negative correlation between intellect and susceptibility to pain. I approach this by asking what, in the Darwinian sense, pain is for. It is a warning not to repeat actions that tend to cause bodily harm. Don't stub your toe again, don't tease a snake or sit on a hornet, don't pick up embers however prettily they glow, be careful not to bite your tongue. Plants have no nervous system capable of learning not to repeat damaging actions, which is why we cut live lettuces without compunction.

It is an interesting question, incidentally, why pain has to be so damned painful. Why not equip the brain with the equivalent of a little red flag, painlessly raised to warn, "Don't do that again"?

[...] my primary question for today: would you expect a positive or a negative correlation between mental ability and ability to feel pain? Most people unthinkingly assume a positive correlation, but why?

Isn't it plausible that a clever species such as our own might need less pain, precisely because we are capable of intelligently working out what is good for us, and what damaging events we should avoid? Isn't it plausible that an unintelligent species might need a massive wallop of pain, to drive home a lesson that we can learn with less powerful inducement?

At the very least, I conclude that we have no general reason to think that non-human animals feel pain less acutely than we do, and we should in any case give them the benefit of the doubt. Practices such as branding cattle, castration without anaesthetic, and bullfighting should be treated as morally equivalent to doing the same thing to human beings.

Link: boingboing.net/2011/06/30/richard-dawkins-on-v.html

Imagine a being so vast and powerful that its theory of mind of other entities would itself be a sentient entity. If such an entity came across human beings, it might model them at so high a resolution that every mental image it forms of them would itself be conscious.

Just as we do not grant rights to our thoughts, or to the bacteria that make up a large part of our bodies, such an entity might be unable to grant existential rights to its own thought processes, even if those processes are so detailed that merely perceiving a human being incorporates a human-level simulation.

But even for us humans it might not be possible to account for every being in our ethical conduct. It might not be feasible to grant everything the rights it deserves. Nevertheless, the answer cannot be to abandon morality altogether, if only because human nature won't permit it. It is part of our preferences to be compassionate.

Our task must be to free ourselves . . . by widening our circle of compassion to embrace all living creatures and the whole of nature and its beauty.

— Albert Einstein

How do we solve this dilemma? Right now it's relatively easy to handle: there are humans, and then there is everything else. But even today, without uplifted animals, artificial intelligence, human-level simulations, cyborgs, chimeras and posthuman beings, it is increasingly hard to draw the line. Science is advancing rapidly, allowing us to keep alive people with severe brain injuries, or to save a premature fetus whose mother has already died. Then there are the mentally disabled and other humans who are not neurotypical. We are also increasingly becoming aware that many non-human beings on this planet are far more intelligent and cognizant than we expected.

And remember: as it will be in the future, so it has already been in our not-too-distant past. There was a time when three different human species lived on the same planet at the same time. Three intelligent species of the genus Homo, yet very different. Only 22,000 years ago we, H. sapiens, were still sharing this oasis of life with Homo floresiensis and Homo neanderthalensis.

How would we handle such a situation today? At a time when we still haven't learnt to live together in peace. At a time when we are still killing members of our own genus. Most of us are not even ready to become vegetarian in the face of global warming, although livestock farming accounts for about 18% of the planet's greenhouse gas emissions.

So where do we draw the line?

Rationality Boot Camp

73 Jasen 22 March 2011 08:37AM

It’s been over a year since the Singularity Institute launched our ongoing Visiting Fellows Program, and we’ve learned a lot in the process of running it. This summer we’re going to try something different: we’re going to run Rationality Boot Camp.

We are going to try to take ten weeks and fill them with activities meant to teach mental skills - if there's reading to be done, we'll tell you to get it done in advance.  We aren't just aiming to teach skills like betting at the right odds or learning how to take into account others' information, we're going to practice techniques like mindfulness meditation and Rejection Therapy (making requests that you know will be rejected), in order to teach focus, non-attachment, social courage and all the other things that are also needed to produce formidable rationalists.  Participants will learn how to draw (so that they can learn how to pay attention to previously unnoticed details, and see that they can do things that previously seemed like mysterious superpowers).  We will play games, and switch games every few days, to get used to novelty and practice learning.

We're going to run A/B tests on you, and track the results to find out which training activities work best, and begin the tradition of evidence-based rationality training.
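To make the idea of evidence-based training concrete, here is a minimal sketch of how one such A/B comparison might be analyzed. This is purely illustrative: the activity names and post-test scores are invented, and the post does not specify any particular statistical method. The sketch uses a stdlib-only permutation test to ask how likely a difference in group means this large would be under random assignment.

```python
import random
import statistics

def permutation_test(group_a, group_b, n_iter=10_000, seed=0):
    """Two-sample permutation test: how often does a random
    re-shuffling of the pooled scores produce a difference in
    means at least as large as the one actually observed?"""
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_iter  # two-sided p-value

# Hypothetical post-test scores for two training activities:
calibration_drills = [62, 71, 68, 75, 66, 70, 73, 69]
lecture_only       = [58, 64, 60, 67, 59, 61, 65, 63]

p = permutation_test(calibration_drills, lecture_only)
print(f"two-sided p-value: {p:.3f}")
```

A permutation test makes no normality assumptions, which suits the small group sizes a ten-week program would actually produce; with many activities under comparison, one would also need to correct for multiple testing.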

In short, we're going to start constructing the kind of program that universities would run if they actually wanted to teach you how to think.

continue reading »

Diseased thinking: dissolving questions about disease

236 Yvain 30 May 2010 09:16PM

Related to: Disguised Queries, Words as Hidden Inferences, Dissolving the Question, Eight Short Studies on Excuses

Today's therapeutic ethos, which celebrates curing and disparages judging, expresses the liberal disposition to assume that crime and other problematic behaviors reflect social or biological causation. While this absolves the individual of responsibility, it also strips the individual of personhood and moral dignity.

             -- George Will, townhall.com

Sandy is a morbidly obese woman looking for advice.

Her husband has no sympathy for her, and tells her she obviously needs to stop eating like a pig, and would it kill her to go to the gym once in a while?

Her doctor tells her that obesity is primarily genetic, and recommends the diet pill orlistat and a consultation with a surgeon about gastric bypass.

Her sister tells her that obesity is a perfectly valid lifestyle choice, and that fat-ism, equivalent to racism, is society's way of keeping her down.

When she tells each of her friends about the opinions of the others, things really start to heat up.

Her husband accuses her doctor and sister of absolving her of personal responsibility with feel-good platitudes that in the end will only prevent her from getting the willpower she needs to start a real diet.

Her doctor accuses her husband of ignorance of the real causes of obesity and of the most effective treatments, and accuses her sister of legitimizing a dangerous health risk that could end with Sandy in hospital or even dead.

Her sister accuses her husband of being a jerk, and her doctor of trying to medicalize her behavior in order to turn it into a "condition" that will keep her on pills for life and make lots of money for Big Pharma.

Sandy is fictional, but similar conversations happen every day, not only about obesity but about a host of other marginal conditions that some consider character flaws, others diseases, and still others normal variation in the human condition. Attention deficit disorder, internet addiction, social anxiety disorder (as one skeptic said, didn't we used to call this "shyness"?), alcoholism, chronic fatigue, oppositional defiant disorder ("didn't we used to call this being a teenager?"), compulsive gambling, homosexuality, Asperger's syndrome, antisocial personality, even depression have all been placed in two or more of these categories by different people.

Sandy's sister may have a point, but this post will concentrate on the debate between her husband and her doctor, with the understanding that the same techniques will apply to evaluating her sister's opinion. The disagreement between Sandy's husband and doctor centers on the idea of "disease". If obesity, depression, alcoholism, and the like are diseases, most people default to the doctor's point of view; if they are not diseases, they tend to agree with the husband.

The debate over such marginal conditions is in many ways a debate over whether or not they are "real" diseases. The usual surface level arguments trotted out in favor of or against the proposition are generally inconclusive, but this post will apply a host of techniques previously discussed on Less Wrong to illuminate the issue.

continue reading »

Welcome to Less Wrong! (2010-2011)

42 orthonormal 12 August 2010 01:08AM
This post has too many comments to show them all at once! Newcomers, please proceed in an orderly fashion to the newest welcome thread.

[Altruist Support] The Plan

9 Giles 06 May 2011 03:53AM

Here's my plan.

I intend to build a community of aspiring rational leaders. I see three components to this:

A. Becoming more rational

I see this as being about bringing together the spheres of self-identity, rationality and the human. The human is your physical body and brain; it is the human which actually does things, and if the human isn't on board nothing will happen. Instrumental rationality is the art of achieving your goals; it should be a familiar concept to Less Wrong readers. And self-identity, among other things, is about having those goals in the first place.

The spheres will never align entirely. It is important to recognize that we are only aspiring rationalists, and to work around our weaknesses when they can't be easily fixed.

You don't have to lead other people to be a rational leader; you might only be leading yourself. But there's no reason to be afraid of leadership: I see true rationalists as making good leaders.

B. People stuff

In order to achieve your goals, it is likely you will need to interact with other people; to lead them, influence them, cooperate with them. You will also need to influence yourself; learn how to make yourself more effective. You may even need to go beyond basic individual interaction and deal with the issue of why people are the way they are.

This is something I believe may be a problem in the LW community: Not doing the people stuff.

C. Doing good

To give us a common goal, I want to find people who are interested in doing good. It doesn't have to be your only goal, and your definition of doing good doesn't have to be exactly the same as mine or anyone else's. The overlap will still be enough for us to cooperate.

If I could find such people, and if we could train ourselves and each other into being really effective, what would I see this organization doing?

1. Welcome

I would see us reaching out to altruistic individuals and organizations; finding people who are confused, who need help or who are still looking for the right approach. The idea is not to turn them all into rationalists, but rather to use our own rational skills to help and guide them. I would see this as our public face: the Altruist Support Network.

2. Think

Making a real positive difference in the world is hard. Even if you're motivated to do it, the infrastructure just isn't there to enable it. So we're going to need a lot of ideas, and ways to evaluate them. I feel certain that there are levers we can pull; small changes we can make that will have huge impacts, and that we can use rationality to help us find them. But I'll need your help.

3. Research

Some of the thinking has been done for us: there are papers and books already written, there are communities already out there. We need to find them - we need to create a good map of the rational-doing-good landscape. And then we need to push the boundaries, to create new knowledge.

4. Fund

If we have money, we want to spend it as wisely as we can: on organizations who share our goals and who have proven themselves to be among the most effective out there. Maybe we can tempt organizations into making changes with the prospect of a donation. And, very likely, we'll need to make money ourselves: to start a business and run it rationally, making a lot of profit and giving it to the causes we support. Such an endeavour sounds very difficult but worthwhile.

5. Act

Sometimes you just need to get out there and do things. Right now I don't know what; but this organization will not be an ivory tower. It exists to serve a purpose - making the world a better place - and we'll do what we need to in order to make that happen.

I apologize that my previous posts may have seemed a bit directionless. I hope this clears it up a little; I'm planning that my next bunch of posts will be sequence-style, gradually building up the ideas I've been having from foundations that are familiar.

The main things I want to know:

  • Whether people see such an organization working.
  • Whether they see it as fundamentally different from anything that currently exists.
  • Whether they would want to be a part of it.

The RPG Thread

11 Raw_Power 27 June 2011 05:22PM

I thought playing RPGs might help us get to know each other outside of meet-ups. Specifically, I found this particular game of interest to this community: it is in many ways the antithesis of everything we stand for... which is why I think we rationalists, of all people, would appreciate it the most. It might also be a useful tool for elaborating collective thought experiments in a playful way (among many other things, it's one of the few gaming systems where roleplaying an artificial superintelligence trying to break out into the real world would be a perfectly plausible, context-relevant and plot-justifiable scenario), and it could help us expand and explore our idea-space further and deeper. Anyone interested in starting a game somewhere?

Other RPGs and suggestions are of course absolutely welcome.
