Comment author: Adriano_Mannino 10 August 2013 03:27:13PM *  2 points [-]

Not so sure. Dave believes that pains have an "ought-not-to-be-in-the-world-ness" property that pleasures lack. And in the discussions I have seen, he indeed was not prepared to accept that small pains can be outweighed by huge quantities of pleasure. Brian was oscillating between NLU and NU. He recently told me he found the claim convincing that such states as flow, orgasm, meditative tranquility, perfectly subjectively fine muzak, and the absence of consciousness were all equally good.

Comment author: davidpearce 28 July 2014 07:53:42AM 0 points [-]

Can preference utilitarians, classical utilitarians and negative utilitarians hammer out some kind of cosmological policy consensus? Not ideal by anyone's lights, but good enough? So long as we don't create more experience below "hedonic zero" in our forward light-cone, NUs are untroubled by wildly differing outcomes. There is clearly a tension between preference utilitarianism and classical utilitarianism; but most(?) preference utilitarians are relaxed about having hedonic ranges shifted upwards - perhaps even radically upwards - if recalibration is done safely, intelligently and conservatively - a big "if", for sure. Surrounding the sphere of sentient agents in our Local Supercluster(?) with a sea of hedonium propagated by von Neumann probes or whatever is a matter of indifference to most preference utilitarians and NUs but mandated(?) by CU.

Is this too rosy a scenario?

Comment author: [deleted] 19 July 2014 12:38:55PM 1 point [-]

David, is this thing with the names a game?

Comment author: davidpearce 19 July 2014 06:16:25PM 2 points [-]

Eli, sorry, could you elaborate? Thanks!

Comment author: [deleted] 19 July 2014 10:40:48AM 1 point [-]

Hey, I already said that I actually do have some empathy and altruism for chickens. "Warm and fuzzy" isn't an insult: it's just another part of how our minds work that we don't currently understand (like consciousness). My primary point is that we should hold off on assigning huge value to things prior to actually understanding what they are and how they work.

Comment author: davidpearce 19 July 2014 11:00:12AM 2 points [-]

Eli, fair point.

Comment author: [deleted] 13 July 2014 02:49:42PM *  8 points [-]

Usually when we say "consciousness", we mean self-awareness. It's a phenomenon of our cognition that we can't explain yet, we believe it does causal work, and if it's identical with self-awareness, it might be why we're having this conversation.

I personally don't think it has much to do with moral worth, actually. It's very warm-and-fuzzy to say we ought to place moral value on all conscious creatures, but I actually believe that a proper solution to ethics is going to dissolve the concept of "moral worth" into some components like (blatantly making names up here) "decision-theoretic empathy" (agents and instances where it's rational for me to acausally cooperate), "altruism" (using my models of others' values as a direct component of my own values, often derived from actual psychological empathy), and even "love" (outright personal attachment to another agent for my own reasons -- and we'd usually say love should imply altruism).

So we might want to be altruistic towards chickens, but I personally don't think chickens possess some magical valence that stops them from being "made of atoms I can use for something else", other than the general fact that I feel some very low level of altruism and empathy towards chickens. Or, to argue Timelessly, we might say that I ought to operate with some level of altruism for the general class of minds like mine, which includes most Earth-based animals, since the foundations of our cognitive architectures evolved very, very slowly (and often in parallel shapes, under similar selection pressures); certainly I personally generally feel a moral impulse to leave Nature alone, since I cannot treat with most of it as one equal being to another.

Consciousness definitely exists, but I think it's worth not treating it as magic.

Comment author: davidpearce 19 July 2014 08:41:53AM 1 point [-]

Eli, it's too quick to dismiss placing moral value on all conscious creatures as "very warm-and-fuzzy". If we're psychologising, then we might equally say that working towards the well-being of all sentience reflects the cognitive style of a rule-bound hyper-systematiser. No, chickens aren't going to win any Fields medals - though chickens can recognise logical relationships and perform transitive inferences (cf. the "pecking order"). But nonhuman animals can still experience states of extreme distress. Uncontrolled panic, for example, feels awful regardless of your species-identity. Such panic involves a complete absence or breakdown of reflective self-awareness - illustrating how the most intense forms of consciousness don't involve sophisticated meta-cognition.

Either way, if we can ethically justify spending, say, $100,000 salvaging a 23-week-old human micro-preemie, then impartial benevolence dictates caring for beings of greater sentience and sapience as well - or at the very least, not actively harming them.

Comment author: knb 15 November 2013 08:44:35AM 0 points [-]

I also wonder if the extreme "abolitionist" position (with regard to suffering) should be listed along anti-aging and intelligence enhancement. Abolitionism seems like it might be off-putting even to people who might support the first two planks.

Comment author: davidpearce 14 January 2014 12:27:16PM 2 points [-]

"Health is a state of complete [sic] physical, mental and social well-being": the World Health Organization definition of health. Knb, I don't doubt that sometimes you're right. But is phasing out the biology of involuntary suffering really any more "extreme" than radical life-extension or radical intelligence-amplification? When talking to anyone new to transhumanism, I try also to make the most compelling case I can for radical superlongevity and extreme superintelligence - biological, Kurzweilian and MIRI conceptions alike. Yet for a large minority of people - stretching from Buddhists to wholly secular victims of chronic depression and chronic pain disorders - dealing with suffering in one guise or another is the central issue. Recall how for hundreds of millions of people in the world today, time hangs heavy - and the prospect of intelligence-amplification without improved subjective well-being leaves them cold. So your worry cuts both ways.

Anyhow, IMO the makers of the BIOPS video have done a fantastic job. Kudos. I gather future episodes of the series will tackle different conceptions of posthuman superintelligence - not least from the MIRI perspective.

Comment author: peter_hurford 02 August 2013 03:03:44AM 5 points [-]

Another interesting question is to ask current vegetarians how much they would pay to stay vegetarian.

Comment author: davidpearce 03 August 2013 01:14:30PM 2 points [-]

This is a difficult question. By analogy, should rich cannibals or human child abusers be legally permitted to indulge their pleasures if they offset the harm they cause with sufficiently large charitable donations to orphanages or children's charities elsewhere? On (indirect) utilitarian grounds if nothing else, we would all(?) favour an absolute legal prohibition on cannibalism and human child abuse. This analogy breaks down if the neuroscientific evidence suggesting that pigs, for example, are at least as sentient as prelinguistic human toddlers turns out to be mistaken. I'm deeply pessimistic that this is the case.

Comment author: SaidAchmiz 01 August 2013 03:04:52PM -2 points [-]

You are right, the mirror test is evidence of self-concept. I do not take it to be nearly sufficient evidence, but it is evidence.

> Humans generally fail the mirror test below the age of eighteen months.

This supports my view that very young humans are not self-aware (and therefore not morally important) either.

Comment author: davidpearce 01 August 2013 04:57:17PM 4 points [-]

Could you possibly say a bit more about why the mirror test is inadequate as a test of possession of a self-concept? Either way, making self-awareness a precondition of moral status has troubling implications. For example, consider what happens to verbally competent adults when feelings of intense fear turn into uncontrollable panic. In states of "blind" panic, reflective self-awareness and the capacity for any kind of meta-cognition are lost. Panic disorder is extraordinarily unpleasant. Are we to make the claim that such panic-ridden states aren't themselves important - only the memories of such states that a traumatised subject reports when s/he regains a measure of composure and some semblance of reflective self-awareness is restored? A pig, for example, or a prelinguistic human toddler, doesn't have the meta-cognitive capacity to self-reflect on such states. But I don't think we are ethically entitled to induce them - any more than we are ethically entitled to waterboard a normal adult human. I would hope posthuman superintelligence can engineer such states out of existence - in human and nonhuman animals alike.

Comment author: SaidAchmiz 01 August 2013 02:19:51AM 0 points [-]

Ok. Yes, I think that nonhuman animals are not self-aware. (Dolphins might be an exception. This is a particularly interesting recent study.)

Dolphins aside, we have no reason to believe that animals are capable of thinking about themselves; of considering their own conscious awareness; of having any self-concept, much less any concept of themselves as persistent conscious entities with a past and a future; of consciously reasoning about other minds, or having any concept thereof; or of engaging in abstract reasoning or thought of any kind.

I've commented before that one critical difference between "speciesism" and racism or sexism or other such prejudices is that a cow can never argue for its own equal treatment; this, I have said, is not a trivial or irrelevant fact. And it's not just a matter of not having the vocal cords to speak, or of not knowing the language, or any other such trivial obstacles to communication; a cow can't even come close to having the concepts required to understand human behavior, human concepts, and human language.

Now, you might not think any of this is morally relevant. Fine. But I would meet with great skepticism — and, sans compelling evidence, probable outright dismissal — any claim that a cow, or a pig, or, even more laughably, a chicken, is self-aware in anything like the sense I outlined above.

(By the way, I am reluctant to commit to any position on "consciousness", merely because the word is used in such a diverse range of ways.)

Comment author: davidpearce 01 August 2013 08:43:55AM 5 points [-]

Birds lack a neocortex. But members of at least one species, the European magpie, have convincingly passed the "mirror test" [cf. "Mirror-Induced Behavior in the Magpie (Pica pica): Evidence of Self-Recognition", http://www.plosbiology.org/article/fetchObject.action?representation=PDF&uri=info:doi/10.1371/journal.pbio.0060202]. Most ethologists recognise passing the mirror test as evidence of a self-concept. As well as the higher primates (chimpanzees, orangutans, bonobos, gorillas), members of other species who have passed the mirror test include elephants, orcas and bottlenose dolphins. Humans generally fail the mirror test below the age of eighteen months.

Comment author: Lumifer 31 July 2013 03:21:51PM 7 points [-]

> the same can be said of race: I may subjectively prefer white people.

Yes. That's perfectly fine. In fact, if you examine the revealed preferences (e.g. whom people prefer to have as their neighbours or whom they prefer to marry), you will see that most people in reality do prefer others of their own race.

And, of course, the same can be said of sex, too. Unless you are an evenhanded bi, you're most certainly guilty of preferring some specific sex (or maybe gender, it varies).

> You might bite the bullet here and say that yes, in fact, racism, sexism etc. is morally acceptable

"Morally acceptable" is a judgement, it is conditional on which morality you're using as your standard. Different moralities will produce different moral acceptability for the same actions.

Perhaps you wanted to say "socially acceptable"? In particular, "socially acceptable in contemporary US"? That, of course, is a very different thing.

> I think most people would agree that these -isms are wrong, and so speciesism must also be wrong.

Sigh. This is a rationality forum, no? And you're using emotionally charged guilt-by-association arguments? (It's actually guilt-by-association by design, since the word "speciesism" was explicitly coined to resemble "racism", etc.)

Warning: HERE BE MIND-KILLERS!

Comment author: davidpearce 31 July 2013 08:22:28PM 2 points [-]

Lumifer, should the charge of "mind-killers" be levelled at anti-speciesists or meat-eaters? (If you were being ironic, apologies for being so literal-minded.)

Comment author: Larks 31 July 2013 03:34:55PM 1 point [-]

I don't see how this is relevant to my argument. I'm just pointing out that your definition doesn't track the concept you (probably) have in mind; I wasn't saying anything empirical* at all.

*other than about the topology of concept-space.

Comment author: davidpearce 31 July 2013 04:21:42PM 2 points [-]

Larks, by analogy, could a racist acknowledge that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect, but race is one of the things that has to be equal? If you think the "other things being equal" caveat dilutes the definition of speciesism so it's worthless, perhaps drop it - I was just trying to spike some guns.
