But (as far as I can tell) such a definition doesn't explain why we aren't micro-experiential zombies. Compare another fabulously complicated information-processing system, the enteric nervous system ("the brain in the gut"). Even if its individual membrane-bound neurons are micro-pixels of experience, there's no phenomenally unified subject. The challenge is to explain why the awake mind-brain is different - to derive the local and global binding of our minds and the world-simulations we run (ultimately) from physics.
I wish the binding problem could be solved so simply. Information flow alone isn't enough. Compare Eric Schwitzgebel ("If Materialism Is True, the United States Is Probably Conscious"). Even if 330 million skull-bound American minds reciprocally communicate by fast electromagnetic signalling, and implement any computation you can think of, a unified continental subject of experience doesn't somehow switch on - or at least, not on pain of spooky "strong" emergence.
The mystery is why 86 billion odd membrane-bound, effectively decohered class...
Forgive me, but how do "information flows" solve the binding problem?
Just a note about "mind uploading". On pain of "strong" emergence, classical Turing machines can't solve the phenomenal binding problem. Their ignorance of phenomenally-bound consciousness is architecturally hardwired. Classical digital computers are zombies or (if consciousness is fundamental to the world) micro-experiential zombies, not phenomenally-bound subjects of experience with a pleasure-pain axis. Speed of execution or complexity of code makes no difference: phenomenal unity isn't going to "switch on". Digital minds are an oxymoron.
Like the poster, I worry about s-risks. I just don't think this is one of them.
Homunculi are real. Consider a lucid dream. When lucid, you can know that your body-image is entirely internal to your sleeping brain. You can know that the virtual head you can feel with your virtual hands is entirely internal to your sleeping brain too. Sure, the reality of this homunculus doesn’t explain how the experience is possible. Yet such an absence of explanatory power doesn’t mean that we should disavow talk of homunculi.
Waking consciousness is more controversial. But (I'd argue) you still experience only a homunculus - though now it's a homunculus that (normally) causally co-varies with the behaviour of an extra-cranial body.
It's good to know we agree on genetically phasing out the biology of suffering!
Now for your thought-experiments.
Quantitatively, given a choice between a tiny amount of suffering X plus everyone and everything else being great, or everyone dying, would NUs choose omnicide no matter how small X is?
To avoid status quo bias, imagine you are offered the chance to create a type-identical duplicate, New Omelas - again a blissful city of vast delights dependent on the torment of a single child. Would you accept or decline? As an NU, I'd say "no" - even t...
It wasn't a rhetorical question; I really wanted (and still want) to know your answer.
Thanks for clarifying. NU certainly sounds a rather bleak ethic. But NUs want us all to have fabulously rich, wonderful, joyful lives - just not at the price of anyone else's suffering. NUs would "walk away from Omelas". Reading JDP's post, one might be forgiven for thinking that the biggest x-risk was from NUs. However, later this century and beyond, if (1) “omnicide” is technically feasible, and if (2) suffering persists, then there are intelligent agents who would brin...
Do they also seek to create and sustain a diverse variety of experiences above hedonic zero?
Would the prospect of being unable to enjoy a rich diversity of joyful experiences sadden you? If so, then (other things being equal) any policy to promote monotonous pleasure is anti-NU.
Secular Buddhists, like NUs, seek to minimise and ideally get rid of all experience below hedonic zero. So does any policy option cause you even the faintest hint of disappointment? Well, other things being equal, that policy option isn't NU. May all your dreams come true!
Anyhow, I hadn't intended here to mount a defence of NU ethics - just counter the poster JDP's implication that NU is necessarily more of an x-risk than CU.
Many thanks for an excellent overview. But here's a question. Does an ethic of negative utilitarianism or classical utilitarianism pose a bigger long-term risk to civilisation?
Naively, the answer is obvious. If granted the opportunity, NUs would e.g. initiate a vacuum phase transition, program seed AI with an NU utility function, and do anything humanly possible to bring life and suffering to an end. By contrast, classical utilitarians worry about x-risk and advocate Longtermism (cf. https://www.hedweb.com/quora/2015.html#longtermism).
However, I think the a...
Can preference utilitarians, classical utilitarians and negative utilitarians hammer out some kind of cosmological policy consensus? Not ideal by anyone's lights, but good enough? So long as we don't create more experience below "hedonic zero" in our forward light-cone, NUs are untroubled by wildly differing outcomes. There is clearly a tension between preference utilitarianism and classical utilitarianism; but most(?) preference utilitarians are relaxed about having hedonic ranges shifted upwards - perhaps even radically upwards - if recalibrati...
Eli, sorry, could you elaborate? Thanks!
Eli, fair point.
Eli, it's too quick to dismiss placing moral value on all conscious creatures as "very warm-and-fuzzy". If we're psychologising, then we might equally say that working towards the well-being of all sentience reflects the cognitive style of a rule-bound hyper-systematiser. No, chickens aren't going to win any Fields medals - though chickens can recognise logical relationships and perform transitive inferences (cf. the "pecking order"). But nonhuman animals can still experience states of extreme distress. Uncontrolled panic, for example...
"Health is a state of complete [sic] physical, mental and social well-being": the World Health Organization definition of health. Knb, I don't doubt that sometimes you're right. But is phasing out the biology of involuntary suffering really too "extreme" - any more than radical life-extension or radical intelligence-amplification? When talking to anyone new to transhumanism, I try also to make the most compelling case I can for radical superlongevity and extreme superintelligence - biological, Kurzweilian and MIRI conceptions alike. Ye...
This is a difficult question. By analogy, should rich cannibals or human child abusers be legally permitted to indulge their pleasures if they offset the harm they cause with sufficiently large charitable donations to orphanages or children's charities elsewhere? On (indirect) utilitarian grounds if nothing else, we would all(?) favour an absolute legal prohibition on cannibalism and human child abuse. This analogy breaks down if the neuroscientific evidence suggesting that pigs, for example, are at least as sentient as prelinguistic human toddlers turns out to be mistaken. I'm deeply pessimistic this is the case.
Could you possibly say a bit more about why the mirror test is inadequate as a test of possession of a self-concept? Either way, making self-awareness a precondition of moral status has troubling implications. For example, consider what happens to verbally competent adults when feelings of intense fear turn into uncontrollable panic. In states of "blind" panic, reflective self-awareness and the capacity for any kind of meta-cognition are lost. Panic disorder is extraordinarily unpleasant. Are we to make the claim that such panic-ridden states aren't ...
Birds lack a neocortex. But members of at least one species, the European magpie, have convincingly passed the "mirror test" [cf. "Mirror-Induced Behavior in the Magpie (Pica pica): Evidence of Self-Recognition" http://www.plosbiology.org/article/fetchObject.action?representation=PDF&uri=info:doi/10.1371/journal.pbio.0060202]. Most ethologists recognise passing the mirror test as evidence of a self-concept. As well as higher primates (chimpanzees, orang-utans, bonobos, gorillas), members of other species who have passed the mirror tes...
Lumifer, should the charge of "mind-killers" be levelled at anti-speciesists or meat-eaters? (If you were being ironic, apologies for being so literal-minded.)
You remark that "A physical object implementing the state-machine-which-is-us and being in a certain state is what we mean by having a unified mental state." You can stipulatively define a unified mental state in this way. But this definition is not what I (or most people) mean by "unified mental state". Science doesn't currently know why we aren't (at most) just 86 billion membrane-bound pixels of experience.