All of davidpearce's Comments + Replies

You remark that "A physical object implementing the state-machine-which-is-us and being in a certain state is what we mean by having a unified mental state." You can stipulatively define a unified mental state in this way. But this definition is not what I (or most people) mean by "unified mental state". Science doesn't currently know why we aren't (at most) just 86 billion membrane-bound pixels of experience. 

1green_leaf
There is nothing else to be meant by that - if someone means something else by that, then it doesn't exist.

But (as far as I can tell) such a definition doesn't explain why we aren't micro-experiential zombies. Compare another fabulously complicated information-processing system, the enteric nervous system ("the brain in the gut"). Even if its individual membrane-bound neurons are micro-pixels of experience, there's no phenomenally unified subject. The challenge is to explain why the awake mind-brain is different - to derive the local and global binding of our minds and the world-simulations we run (ultimately) from physics.

1green_leaf
A physical object implementing the state-machine-which-is-us and being in a certain state is what we mean by having a unified mental state. Seemingly, we can ask "but why does that feel like something instead of only individual microqualia feeling like something?" - but that's a question that doesn't appreciate that there is an identity there, much like thinking that it's conceptually possible that there were hand-shape-arranged fingers but no hand. It would be meaningless to talk about a phenomenally unified subject there, since it can't describe its perception to anyone (it can't talk to us) and we can't talk to it either. On top of that, it doesn't implement the right kind of state machine (it's not a coherent entity of the sort that we'd call something-that-has-a-unified-mental-state).

I wish the binding problem could be solved so simply. Information flow alone isn't enough. Compare Eric Schwitzgebel ("If Materialism Is True, the United States Is Probably Conscious"). Even if 330 million skull-bound American minds reciprocally communicate by fast electromagnetic signalling, and implement any computation you can think of, a unified continental subject of experience doesn't somehow switch on - or not, at least, without spooky "strong" emergence.
The mystery is why 86-billion-odd membrane-bound, effectively decohered class...

1green_leaf
The second kind of binding problem (i.e. not the physical one (how the processing of different aspects of our perception comes together) but the philosophical one (how a composite object feels like a single thing)) is solved by defining us to be the state machine implemented by that object, and our mental states to be states of that state machine. I.e. the error of people who believe there is a philosophical binding problem comes from the assumption that only ontologically fundamental objects can have a unified perception. More here: Reductionism.

Forgive me, but how do "information flows" solve the binding problem?

3green_leaf
1. "Information flow" is a real term - no need for quotes. 2. The binding problem asks how it is possible we have a unified perception if different aspects of our perception are processed in different parts of our brain. The answer is because those different parts talk to each other, which integrates the information together.

Just a note about "mind uploading". On pain of "strong" emergence, classical Turing machines can't solve the phenomenal binding problem. Their ignorance of phenomenally-bound consciousness is architecturally hardwired. Classical digital computers are zombies or (if consciousness is fundamental to the world) micro-experiential zombies, not phenomenally-bound subjects of experience with a pleasure-pain axis. Speed of execution or complexity of code make no difference: phenomenal unity isn't going to "switch on". Digital minds are an oxymoron. 

Like the poster, I worry about s-risks. I just don't think this is one of them. 

2andrew sauer
A moot point for these purposes. GAI can find other means of getting you if need be.
5green_leaf
Just very briefly: The binding problem is solved by the information flows between different parts of the classical computer.

Homunculi are real. Consider a lucid dream. When lucid, you can know that your body-image is entirely internal to your sleeping brain. You can know that the virtual head you can feel with your virtual hands is entirely internal to your sleeping brain too. Sure, the reality of this homunculus doesn’t explain how the experience is possible. Yet such an absence of explanatory power doesn’t mean that we should disavow talk of homunculi.

Waking consciousness is more controversial. But (I'd argue) you still experience only a homunculus - now one that (normally) causally co-varies with the behaviour of an extra-cranial body.

1TAG
You seem to be discussing the homunculus as something that is perceived, not something that is doing the perceiving.

It's good to know we agree on genetically phasing out the biology of suffering! 
Now for your thought-experiments.

Quantitatively, given a choice between a tiny amount of suffering X + everyone and everything else being great, or everyone dying, NUs would choose omnicide no matter how small X is?

 To avoid status quo bias, imagine you are offered the chance to create a type-identical duplicate, New Omelas - again a blissful city of vast delights dependent on the torment of a single child. Would you accept or decline? As an NU, I'd say "no" - even t...

2Daniel Kokotajlo
Thanks for the clarification!

It wasn't a rhetorical question; I really wanted (and still want) to know your answer.

Thanks for clarifying. NU certainly sounds a rather bleak ethic. But NUs want us all to have fabulously rich, wonderful, joyful lives - just not at the price of anyone else's suffering. NUs would "walk away from Omelas". Reading JDP's post, one might be forgiven for thinking that the biggest x-risk was from NUs. However, later this century and beyond, if (1) "omnicide" is technically feasible, and if (2) suffering persists, then there are intelligent agents who would brin...

Thanks for answering. FWIW I'm totally in favor of genetically engineering a world without suffering, in case that wasn't clear. Suffering is bad.

But NUs want us all to have fabulously rich, wonderful, joyful lives - just not at the price of anyone else's suffering. NUs would "walk away from Omelas".

Quantitatively, given a choice between a tiny amount of suffering X + everyone and everything else being great, or everyone dying, NUs would choose omnicide no matter how small X is? Or is there an amount of suffering X such that NUs would accept it as the unfo...

Do they also seek to create and sustain a diverse variety of experiences above hedonic zero?     

Would the prospect of being unable to enjoy a rich diversity of joyful experiences sadden you? If so, then (other things being equal) any policy to promote monotonous pleasure is anti-NU.

2Daniel Kokotajlo
It wasn't a rhetorical question; I really wanted (and still want) to know your answer. (My answer to your question is yes, fwiw)

Secular Buddhists, like NUs, seek to minimise and ideally get rid of all experience below hedonic zero. So does any policy option cause you even the faintest hint of disappointment? Well, other things being equal, that policy option isn't NU. May all your dreams come true!

Anyhow, I hadn't intended here to mount a defence of NU ethics - just counter the poster JDP's implication that NU is necessarily more of an x-risk than CU.

2Daniel Kokotajlo
Do they also seek to create and sustain a diverse variety of experiences above hedonic zero?

Many thanks for an excellent overview. But here's a question. Does an ethic of negative utilitarianism or classical utilitarianism pose a bigger long-term risk to civilisation?

Naively, the answer is obvious. If granted the opportunity, NUs would e.g. initiate a vacuum phase transition, program seed AI with a NU utility function, and do anything humanly possible to bring life and suffering to an end. By contrast, classical utilitarians worry about x-risk and advocate Longtermism (cf. https://www.hedweb.com/quora/2015.html#longtermism).

However, I think the a...

5Daniel Kokotajlo
This is news to me; I thought NUs were advocating for the gradients of wellbeing thing only as a compromise; if they didn't have to compromise they'd just delete all life. And if we allow for compromises then CUs won't be killing off everyone in a utilitronium shockwave either. IMO both NUs and CUs are crazy.

Can preference utilitarians, classical utilitarians and negative utilitarians hammer out some kind of cosmological policy consensus? Not ideal by anyone's lights, but good enough? So long as we don't create more experience below "hedonic zero" in our forward light-cone, NUs are untroubled by wildly differing outcomes. There is clearly a tension between preference utilitarianism and classical utilitarianism; but most(?) preference utilitarians are relaxed about having hedonic ranges shifted upwards - perhaps even radically upwards - if recalibrati...

7arundelo
I'm pretty sure eli_sennesh is wondering if there's any special meaning to your responses to him all starting with his name, considering that that's not standard practice on LW (since the software keeps track of which comment a comment is a reply to).
0[anonymous]
(I think he's wondering why you preface even very short comments with an address by first name)
1[anonymous]
David, is this thing with the names a game?

Eli, it's too quick to dismiss placing moral value on all conscious creatures as "very warm-and-fuzzy". If we're psychologising, then we might equally say that working towards the well-being of all sentience reflects the cognitive style of a rule-bound hyper-systematiser. No, chickens aren't going to win any Fields medals - though chickens can recognise logical relationships and perform transitive inferences (cf. the "pecking order"). But nonhuman animals can still experience states of extreme distress. Uncontrolled panic, for example...

1[anonymous]
Hey, I already said that I actually do have some empathy and altruism for chickens. "Warm and fuzzy" isn't an insult: it's just another part of how our minds work that we don't currently understand (like consciousness). My primary point is that we should hold off on assigning huge value to things prior to actually understanding what they are and how they work.

"Health is a state of complete [sic] physical, mental and social well-being": the World Health Organization definition of health. Knb, I don't doubt that sometimes you're right. But is phasing out the biology of involuntary suffering really too "extreme" - any more than radical life-extension or radical intelligence-amplification? When talking to anyone new to transhumanism, I try also to make the most compelling case I can for radical superlongevity and extreme superintelligence - biological, Kurzweilian and MIRI conceptions alike. Ye...

This is a difficult question. By analogy, should rich cannibals or human child abusers be legally permitted to indulge their pleasures if they offset the harm they cause with sufficiently large charitable donations to orphanages or children's charities elsewhere? On (indirect) utilitarian grounds if nothing else, we would all(?) favour an absolute legal prohibition on cannibalism and human child abuse. This analogy breaks down if the neuroscientific evidence suggesting that pigs, for example, are at least as sentient as prelinguistic human toddlers turns out to be mistaken. I'm deeply pessimistic this is the case.

0Peter Wildeford
I wasn't speaking at all about "moral offsets". I was attempting to counter Qiaochu_Yuan's point that a high value put on eating meat by meat eaters indicates that being vegetarian is difficult.

Could you possibly say a bit more about why the mirror test is inadequate as a test of possession of a self-concept? Either way, making self-awareness a precondition of moral status has troubling implications. For example, consider what happens to verbally competent adults when feelings of intense fear turn into uncontrollable panic. In states of "blind" panic, reflective self-awareness and the capacity for any kind of meta-cognition are lost. Panic disorder is extraordinarily unpleasant. Are we to make the claim that such panic-ridden states aren't ...

3Said Achmiz
Surely it is a reach to say that the mirror test, alone, with all of its methodological difficulties, can all by itself raise our probability estimate of a creature's possessing self-awareness to near-certainty? I agree that it's evidence, but calling it a test is pushing it, to say the least. To see just one reason why I might say this, consider that we can, right now, probably program a robot to pass such a test; such a robot would not be self-aware. As for the rest of your post, I'd like to take this opportunity to object to a common mistake/ploy in such discussions: "This general ethical principle/heuristic leads to absurdity if applied with the literal-mindedness of a particularly dumb algorithm, therefore reductio ad absurdum." Your argument here seems to be something like: "Adult humans are sometimes not self-aware, but we still care about them, even during those times. Is self-awareness therefore irrelevant??" No, of course it's not. It's a complex issue. But a chicken is never self-aware, so the point is moot. Also: Please provide a citation for this, and I will respond, as my knowledge of this topic (cognitive capacity during states of extreme panic) is not up to giving a considered answer. Having experienced a panic attack on one or two occasions, I am inclined to agree. However, I did not lose my self-concept at those times. Finally: "Ethically entitled" is not a very useful phrase to use in isolation; utilitarianism[1] can only tell us which of two or more world-states to prefer. I've said that I prefer that dogs not be tortured, all else being equal, so if by that you mean that we ought to prefer not to induce panic states in pigs, then sure, I agree. The question is what happens when all else is not equal — which it pretty much never is. [1] You are speaking from a utilitarian position, yes? If not, then that changes things; "ethically entitled" means something quite different to a deontologist, naturally.

Birds lack a neocortex. But members of at least one species, the European magpie, have convincingly passed the "mirror test" [cf. "Mirror-Induced Behavior in the Magpie (Pica pica): Evidence of Self-Recognition" http://www.plosbiology.org/article/fetchObject.action?representation=PDF&uri=info:doi/10.1371/journal.pbio.0060202] Most ethologists recognise passing the mirror test as evidence of a self-concept. As well as higher primates (chimpanzees, orang utans, bonobos, gorillas) members of other species who have passed the mirror tes...

-2Said Achmiz
You are right, the mirror test is evidence of self-concept. I do not take it to be nearly sufficient evidence, but it is evidence. This supports my view that very young humans are not self-aware (and therefore not morally important) either.

Lumifer, should the charge of "mind-killers" be levelled at anti-speciesists or meat-eaters? (If you were being ironic, apologies for being so literal-minded.)

0wedrifid
It can be levelled at most people who employ either of those terms.
-5Lumifer
7NotInventedHere
I'm fairly sure it's for the examples referencing the politically charged issues of racism and sexism.

Larks, by analogy, could a racist acknowledge that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect, but race is one of the things that has to be equal? If you think the "other things being equal" caveat dilutes the definition of speciesism so it's worthless, perhaps drop it - I was just trying to spike some guns.

0Larks
If we drop the caveat, anti-speciesism is obviously false. For example, moral, successful people deserve more respect than immoral unsuccessful people, even if both are of equal sentience.

Larks, all humans, even anencephalic babies, are more sentient than all Anopheles mosquitoes. So when human interests conflict irreconcilably with the interests of Anopheles mosquitoes, there is no need to conduct a careful case-by-case study of their comparative sentience. Simply identifying species membership alone is enough. By contrast, most pigs are more sentient than some humans. Unlike the antispeciesist, the speciesist claims that the interests of the human take precedence over the interests of the pig simply in virtue of species membership. (cf. h...

2Larks
I don't see how this is relevant to my argument. I'm just pointing out that your definition doesn't track the concept you (probably) have in mind; I wasn't saying anything empirical* at all. *other than about the topology of concept-space.

Vaniver, you say that you wouldn't be averse to a quick end for young human children who are not going to live to see their third birthday. What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass a typical human toddler or mature pig?

2Vaniver
I'm not sure what this would look like, actually. The first thing that comes to mind is Down's Syndrome, but the impression I get is that that's a much smaller reduction in cognitive capacity than the one you're describing. The last time I considered that issue, I favored abortion in the presence of a positive amniocentesis test for Down's, and I suspect that the more extreme the reduction, the easier it would be to come to that direction. I hope you don't mind that this answers a different question than the one you asked- I think there are significant (practical, if not also moral) differences between gamete selection, embryo selection, abortion, infanticide, and execution of adults (sorted from easiest to justify to most difficult to justify). I don't think execution of cognitively impaired adults would be justifiable in the presence of modern American economic constraints on grounds other than danger posed to others.

Vaniver, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves? Young human children with genetic disorders are given love, care and respect - even if the nature of their illness means they will never live to see their third birthday. We don't hold their lack of "potential" against them. Likewise, pigs are never going to acquire generative syntax or do calculus. But their lack of cognitive sophistication doesn't make them any less sentient.

2MixedNuts
Historically, we have dismissed very obviously sapient people as lacking moral worth (people with various mental illnesses and disabilities, and even the freaking Deaf). Since babies are going to have whatever-makes-them-people at some point, it may be more likely that they already have it and we don't notice, rather than they haven't yet. That's why I'm a lot iffier about killing babies and mentally disabled humans than pigs.
1MugaSofer
Speaking as a vegetarian for ethical reasons ... yes. That's not to say they don't deserve some moral consideration based on raw brainpower/sentience and even a degree of sentimentality, of course, but still.
4Vaniver
My intuitions say the former. I would not be averse to a quick end for young human children who are not going to live to see their third birthday. Agreed, mostly. (I think it might be meaningful to refer to syntax or math as 'senses' in the context of subjective experience and I suspect that abstract reasoning and subjective sensation of all emotions, including pain, are negatively correlated. The first weakly points towards valuing their experience less, but the second strongly points towards valuing their experience more.)

jkaufman, the dimmer-switch metaphor of consciousness is intuitively appealing. But consider some of the most intense experiences that humans can undergo, e.g. orgasm, raw agony, or blind panic. Such intense experiences are characterised by a breakdown of any capacity for abstract rational thought or reflective self-awareness. Neuroscanning evidence, too, suggests that much of our higher brain function effectively shuts down during the experience of panic or orgasm. Contrast this intensity of feeling with the subtle and rarefied phenomenology involved in e...

0Kawoomba
"Accompanied" can also mean "reflected upon after the fact". I agree with your last sentence though.

Obamacare for elephants probably doesn't rank highly in the priorities of most lesswrongers. But from an anthropocentric perspective, isn't an analogous scenario for human beings - i.e. to stay free living but not "wild" - the most utopian outcome if the MIRI conception of an Intelligence Explosion comes to pass?

RobbBB, in what sense can phenomenal agony be an "illusion"? If your pain becomes so bad that abstract thought is impossible, does your agony - or the "illusion of agony" - somehow stop? The same genes, same neurotransmitters, same anatomical pathways and same behavioural responses to noxious stimuli are found in humans and the nonhuman animals in our factory-farms. A reasonable (but unproven) inference is that factory-farmed nonhumans endure misery - or the "illusion of misery" as the eliminativist puts it - as do abused human infants and toddlers.

drnickbone, the argument that meat-eating can be ethically justified if conditions of factory-farmed animals are improved so their lives are "barely" worth living is problematic. As it stands, the argument justifies human cannibalism. Breeding human babies for the pot is potentially ethically justified because the infants in question wouldn't otherwise exist - although they are factory-farmed, runs this thought-experiment, their lives are at least "barely" worth living because they don't self-mutilate or show the grosser signs of psychological trauma. No, I'm sure you don't buy this argument - but then we shouldn't buy it for nonhuman animals either.

0Jiro
Most people would object to breeding brainless human babies for the pot, even though by definition brainless human babies are not people, cannot feel or suffer, and can be treated as objects because they are objects. This is not because breeding brainless human babies would be wrong. It's because our species has an instinctive aversion to cannibalism and an instinctive tendency to treat anything with baby-like physical features as people (which also accounts for the many anti-abortion arguments that depend on the physical attributes of the fetus).
4Jiro
For evolutionary reasons, humans have instinctive reactions to both human infants and cannibalism that are unrelated to whether a course of action is really ethical, so claiming that something is bad because it justifies eating infants is often a cheat. And if we actually started eating infants, the existence of those instincts would mean that it would be done mostly by people who lack those instincts because of brain malfunction. This would in practice lead to people with brain malfunctions controlling the project, which would quickly extend it to unethical areas regardless of whether the original version is ethical.
1drnickbone
Hmm, I can't see any obvious utilitarian approach under which a cannibal society would be justified. First, it would have to be a non-human society, or a society where humans had been substantially modified to remove their revulsion at eating other humans. Second, under total utilitarian logic, it looks like there could be more people sustained on a bare subsistence diet (all of them with lives barely worth living) than could be sustained by breeding one bunch of humans to be consumed by other humans. So total utilitarians should reject the cannibal society: ironically, it may not be repugnant enough for the Repugnant Conclusion to hold! Under the same "repugnant" logic, total utilitarians would abolish meat eating and eradicate wild animals, whenever that led to an increase in the human population. Average utilitarians would also reject the cannibal society, since they could improve the welfare of an average human by just not breeding the cannibal victims. It's less clear to me what average utilitarians should do about farm animals and wildlife. This depends on whether these animals are included in the average at equal weight with humans, or whether there are different weighting factors. If equal weighting, then eradicating all non-human animal life would increase the average welfare of what's left. This is another sort of repugnant conclusion of course. However, none of these is the strongest reason for rejecting a cannibal scenario. The strongest reason appears to be the Kantian one: it's wrong to treat human beings as means to an end. Whereas there seems to be no similar Kantian injunction against treating animals as means to an end. It's interesting that there is this asymmetry, which does initially look like outright speciesism. However, the crucial asymmetry is probably between agents who can be expected to be bound by a shared set of moral rules (including the rule of not using each other) and other beings who are not and cannot be bound by the same rul

Indeed so. Factory-farmed nonhuman animals are debeaked, tail-docked, castrated (etc) to prevent them from mutilating themselves and each other. Self-mutilatory behaviour in particular suggests an extraordinarily severe level of chronic distress. Compare how desperate human beings must be before we self-mutilate. A meat-eater can (correctly) respond that the behavioural and neuroscientific evidence that factory-farmed animals suffer a lot is merely suggestive, not conclusive. But we're not trying to defeat philosophical scepticism, just act on the best available evidence. Humans who persuade ourselves that factory-farmed animals are happy are simply kidding ourselves - we're trying to rationalise the ethically indefensible.

1drnickbone
This seems to address one of my points raised here. Self-mutilation is certainly a proxy for very low or negative quality of life, even if directly suicidal behaviour is not available (because the animal can't form a concept of suicide as a way out). If the docking, castrating etc. is to prevent mutilation of other nearby animals, that's a bit different of course. I'm very wary of deeming any life to be of negative quality unless there is very compelling evidence that the life-form itself feels the same way. Also, see my other comment: what happens if a few changes to farming practice can make the quality of life positive, even if just barely so? Does the objection to meat really go away?

SaidAchmiz, to treat exploiting and killing nonhuman animals as ethically no different from "exploiting and killing ore-bearing rocks" does not suggest a cognitively ambitious level of empathetic understanding of other subjects of experience. Isn't there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical conse...

3Said Achmiz
What the heck does this mean? (And why should I be interested in having it?) Wikipedia says: If that's how you're using "sentience", then: 1) It's not clear to me that (most) nonhuman animals have this quality; 2) This quality doesn't seem central to moral worth. So I see no irony. If you use "sentience" to mean something else, then by all means clarify. There are some other problems with your formulation, such as: 1) I don't "belong to" MIRI (which is the organization you refer to, yes?). I have donated to them, which I suppose counts? 2) Your description of their mission, specifically the implied comparison of an FAI with humans, is inaccurate. You use a lot of terms ("cognitively ambitious", "cognitively humble", "empathetic understanding", "Godlike capacity for perspective-taking" (and "the computation equivalent" thereof)) that I'm not sure how to respond to, because it seems like either these phrases are exceedingly odd ways of referring to familiar concepts, or else they are incoherent and have no referents. I'm not sure which interpretation is dictated by the principle of charity here; I don't want to just assume that I know what you're talking about. So, if you please, do clarify what you mean by... any of what you just said.
8Watercressed
No. Someone who cares about human-level beings but not animals will care about the plight of humans in the face of an AI, but there's no reason they must care about the plight of animals in the face of humans, because they didn't care about animals to begin with. It may be that the best construction for a friendly AI is some kind of complex perspective taking that lends itself to caring about animals, but this is a fact about the world; it falls on the is side of the is-ought divide.

SaidAchmiz, one difference between factory farming and the Holocaust is that the Nazis believed in the existence of an international conspiracy of the Jews to destroy the Aryan people. Humanity's only justification of exploiting and killing nonhuman animals is that we enjoy the taste of their flesh. No one believes that factory-farmed nonhuman animals have done "us" any harm. Perhaps the parallel with the (human) Holocaust fails for another reason. Pigs, for example, are at least as intelligent as prelinguistic toddlers; but are they less sentien...

1Said Achmiz
It seems to me like a far more relevant justification for exploiting and killing nonhuman animals is "and why shouldn't we do this...?", which is the same justification we use for exploiting and killing ore-bearing rocks. Which is to say, there's no moral problem with doing this, so it needs no "justification". I make it clear in this post that I don't deny the equivalence, and don't think that very young children have the moral worth of cognitively developed humans. (The optimal legality of Doing Bad Things to them is a slightly more complicated matter.) Well, I certainly do. Eh...? Expand on this, please; I'm quite unsure what you mean here.

Yes, assuming post-Everett quantum mechanics, our continued existence needn't be interpreted as evidence that Mutually Assured Destruction works, but rather as an anthropic selection effect. It's unclear why (at least in our family of branches) Hugh Everett, who certainly took his own thesis seriously, spent much of his later life working for the Pentagon targeting thermonuclear weaponry on cities. For Everett must have realised that in countless world-branches, such weapons would actually be used. Either way, the idea that Mutually Assured Destruction works could prove ethically catastrophic this century if taken seriously.

Ice9, perhaps consider uncontrollable panic. Some of the most intense forms of sentience that humans undergo seem to be associated with a breakdown of meta-cognitive capacity. So let's hope that what it's like to be an asphyxiating fish, for example, doesn't remotely resemble what it feels like to be a waterboarded human. I worry that our intuitive dimmer-switch model of consciousness, i.e. more intelligent = more sentient, may turn out to be mistaken.

Elharo, which is more interesting? Wireheading - or "the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living"? Yes, I agree, the latter certainly sounds more exciting; but "from the inside", quite the reverse. Wireheading is always enthralling, whereas everyday life is often humdrum. Likewise with so-called utilitronium. To humans, utilitronium sounds unimaginably dull and monotonous, but "from the inside" it presumably feels sublime.

However, we don't need to choo... (read more)

Elharo, I take your point, but surely we do want humans to enjoy healthy lives free from hunger and disease and safe from parasites and predators? Utopian technology promises similar blessings to nonhuman sentients too. Human and nonhuman animals alike typically flourish best when free-living but not "wild".

0elharo
I'm not quite sure what you're saying here. Could you elaborate or rephrase?

Eugine, in answer to your question: yes. If we are committed to the well-being of all sentience in our forward light-cone, then we can't simultaneously conserve predators in their existing guise. (cf. http://www.abolitionist.com/reprogramming/index.html) Humans are not obligate carnivores; and the in vitro meat revolution may shortly make this debate redundant; but it's questionable whether posthuman superintelligence committed to the well-being of all sentience could conserve humans in their existing guise either.

SaidAchmiz, you're right. The issue isn't settled: I wish it were so. The Transhumanist Declaration (1998, 2009) of the World Transhumanist Association / Humanity Plus does express a non-anthropocentric commitment to the well-being of all sentience. ["We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise" : http://humanityplus.org/philosophy/transhumanist-declaration/] But I wonder what percentage of lesswrongers would support such a far-reaching statement?

-3Said Achmiz
I certainly wouldn't, and here's why. Mentioning "non-human animals" in the same sentence and context along with humans and AIs, and "other intelligences" (implying that non-human animals may be usefully referred to as "intelligences", i.e. that they are similar to humans along the relevant dimensions here, such as intelligence, reasoning capability, etc.) reads like an attempt to smuggle in a claim by means of that implication. Now, I don't impute ignoble intent to the writers of that declaration; they may well consider the question settled, and so do not consider themselves to be making any unsupported claims. But there's clearly a claim hidden in that statement, and I'd like to see it made quite explicit, at least, even if you think it's not worth arguing for. That is, of course, apart from my belief that animals do not have intrinsic moral value. (To be truthful, I often find myself more annoyed with bad arguments than wrong beliefs or bad deeds.)

SaidAchmiz, I wonder if a more revealing question would be to ask: if / when in vitro meat products of equivalent taste and price hit the market, will you switch? Lesswrong readers tend not to be technophobes, so I assume the majority(?) of lesswrongers who are not already vegetarian will make the transition. However, you say above that you are "not interested in reducing the suffering of animals". Do you mean that you are literally indifferent one way or the other to nonhuman animal suffering - in which case presumably you won't bother changing to the cruelty-free alternative? Or do you mean merely that you don't consider nonhuman animal suffering important?

-1Said Achmiz
In (current) practice those are the same, as you realize, I'm sure. My attitude is closest to something like "no amount of animal suffering adds up to any amount of human suffering", or more generally "no amount of utility to animals [to the extent that the concept of utility to a non-sapient being is coherent] adds up to any amount of utility to humans". However, note that I am skeptical of the concept of consistent aggregation of utility across individuals in general (and thus of utilitarian ethical theories, though I endorse consequentialism), so adjust your appraisal of my views accordingly. In vitro meat products could change that; that is, the existence of in vitro meat would make the two views you listed meaningfully different in practice, as you suggest. If in vitro meat cost no more than regular meat, and tasted no worse, and had no worse health consequences, and in general if there was no downside for me to switch... ... well, in that case, I would switch, with the caveat that "switch" is not exactly the right term; I simply would not care whether the meat I bought were IV or non, making my purchasing decisions based on price, taste, and all those other mundane factors by means of which people typically make their food purchasing decisions. I guess that's a longwinded way of saying that no, I wouldn't switch exclusively to IV meat if doing so cost me anything.

Eliezer, is that the right way to do the maths? If a high-status opinion-former publicly signals that he's quitting meat because it's ethically indefensible, then others are more likely to follow suit - and the chain-reaction continues. For sure, studies purportedly showing longer lifespans, higher IQs etc of vegetarians aren't very impressive because there are too many possible confounding variables. But what such studies surely do illustrate is that any health-benefits of meat-eating vs vegetarianism, if they exist, must be exceedingly subtle. Either way... (read more)

I think David is right. It is important that people who may have a big influence on the values of the future lead the way by publicly declaring and demonstrating that suffering (and pleasure) are important wherever they occur, whether in humans or mice.

Tim, perhaps I'm mistaken; you know lesswrongers better than me. But in any such poll I'd also want to ask respondents who believe the USA is a unitary subject of experience whether they believe such a conjecture is consistent with reductive physicalism?

Wedrifid, yes, if Schwitzgebel's conjecture were true, then farewell to reductive physicalism and the ontological unity of science. The USA is a "zombie". Its functionally interconnected but skull-bound minds are individually conscious; and sometimes the behaviour of the USA as a whole is amenable to functional description; but the USA is not a unitary subject of experience. However, the problem with relying on this intuitive response is that the phenomenology of our own minds seems to entail exactly the sort of strong ontological emergence we're ex... (read more)

Huh, yes, in my view C. elegans is a P-zombie. If we grant reductive physicalism, the primitive nervous system of C. elegans can't support a unitary subject of experience. At most, its individual ganglia (cf. http://www.sfu.ca/biology/faculty/hutter/hutterlab/research/Ce_nervous_system.html) may be endowed with the rudiments of unitary consciousness. But otherwise, C. elegans can effectively be modelled classically. Most of us probably wouldn't agree with philosopher Eric Schwitzgebel. ("If Materialism Is True, the United States Is Probably Conscious"... (read more)

0wedrifid
I think you're right. Mind you I suspect saying that I disagreed per se would be being generous.
0timtyler
Really? A poll seems as though it would be in order. Maybe if it explained exactly what was meant by "conscious" there might even be a consensus on the topic.

Alas so. IMO a solution to the phenomenal binding problem (cf. http://cdn.preterhuman.net/texts/body_and_health/Neurology/Binding.pdf) is critical to understanding the evolutionary success of organic robots over the past 540 million years - and why classical digital computers are (and will remain) insentient zombies, not unitary minds. This conjecture may be false; but it has the virtue of being testable. If / when our experimental apparatus allows probing the CNS at the sub-picosecond timescales above which Max Tegmark ("Why the brain is probably not... (read more)

0huh
Your first link appears to be broken. It seems possible that the OpenWorm project to emulate the brain of the C. elegans nematode on a classical computer may yield results prior to the advent of experimental techniques capable of "probing the CNS at ... sub-picosecond timescales." Would you consider a successful emulation of worm behavior evidence against the need for quantum effects in neuronal function, or would you declare it the worm equivalent of a P-Zombie?

Cruelty-free in vitro meat can potentially replace the flesh of all sentient beings currently used for food. Yes, it's more efficient; it also makes high-tech Jainism less of a pipedream.

I disagree with Peter Singer here. So I'm not best placed to argue his position. But Singer is acutely sensitive to the potential risks of any notion of lives not worth living. Recall Singer lost three of his grandparents in the Holocaust. Let's just say it's not obvious that an incurable victim of, say, infantile Tay–Sachs disease, who is going to die around four years old after a chronic pain-ridden existence, is better off alive. We can't put this question to the victim: the nature of the disorder means s/he is not cognitively competent to understand th... (read more)

0Eugine_Nier
I'm not sure what you mean by "sensitive", it certainly doesn't stop him from being at the cutting edge pushing in that direction. You seem to be confusing expanding the circle of beings we care for and being more efficient in providing that caring.

Nornagest, fair point. See too "The Brain Functional Networks Associated to Human and Animal Suffering Differ among Omnivores, Vegetarians and Vegans" : http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0010847

Eugine, are you doing Peter Singer justice? What motivates Singer's position isn't a range of empathetic concern that's stunted in comparison to people who favour the universal sanctity of human life. Rather it's a different conception of the threshold below which a life is not worth living. We find similar debates over the so-called "Logic of the Larder" for factory-farmed non-human animals: http://www.animal-rights-library.com/texts-c/salt02.htm. Actually, one may agree with Singer - both his utilitarian ethics and bleak diagnosis of some human... (read more)

3Eugine_Nier
By this logic most of the people from the past who Singer and Pinker cite as examples of less empathic individuals aren't less empathic either. But seriously, has Singer made any effort to take into account, or even look at, the preferences of any of the people who he claims have lives that aren't worth living?

On (indirect) utilitarian grounds, we may make a strong case that enshrining the sanctity of life in law will lead to better consequences than legalising infanticide. So I disagree with Singer here. But I'm not sure Singer's willingness to defend infanticide as (sometimes) the lesser evil is a counterexample to the broad sweep of the generalisation of the expanding circle. We're not talking about some Iron Law of Moral Progress.

2Eugine_Nier
If I recall correctly Singer's defense is that it's better to kill infants than have them grow up with disabilities. The logic here relies on excluding infants and to a certain extent people with disabilities from our circle of compassion. You may want to look at gwern's essay on the subject. By the time you finish taking into account all the counterexamples your generalization looks more like a case of cherry-picking examples.

The growth of science has led to a decline in animism. So in one sense, our sphere of concern has narrowed. But within the sphere of sentience, I think Singer and Pinker are broadly correct. Also, utopian technology makes even the weakest forms of benevolence vastly more effective. Consider, say, vaccination. Even if, pessimistically, one doesn't foresee any net growth in empathetic concern, technology increasingly makes the costs of benevolence trivial.

[Once again, I'm not addressing here the prospect of hypothetical paperclippers - just mind-reading humans with a pain-pleasure (dis)value axis.]

2Eugine_Nier
Would this be the same Singer who argues that there's nothing wrong with infanticide?

An expanding circle of empathetic concern needn't reflect a net gain in compassion. Naively, one might imagine that e.g. vegans are more compassionate than vegetarians. But I know of no evidence this is the case. Tellingly, female vegetarians outnumber male vegetarians by around 2:1, but the ratio of male to female vegans is roughly equal. So an expanding circle may reflect our reduced tolerance of inconsistency / cognitive dissonance. Men are more likely to be utilitarian hyper-systematisers.

4Nornagest
Does your source distinguish between motivations for vegetarianism? It's plausible that the male:female vegetarianism rates are instead motivated by (e.g.) culture-linked diet concerns -- women adopt restricted diets of all types significantly more than men -- and that ethically motivated vegetarianism occurs at similar rates, or that self-justifying ethics tend to evolve after the fact.
1Jayson_Virissimo
Right. What I should have said was: