JonatasMueller comments on Decision Theory FAQ - Less Wrong
Even among philosophers, "moral realism" is a term wont to confuse. I'd be wary about relying on it to chunk your philosophy. For instance, the simplest and least problematic definition of 'moral realism' is probably the doctrine...
minimal moral realism: cognitivism (moral assertions like 'murder is bad' have truth-conditions, express real beliefs, predicate properties of objects, etc.) + success theory (some moral assertions are true; i.e., rejection of error theory).
This seems to be the definition endorsed on SEP's Moral Realism article. But it can't be what you have in mind, since you accept cognitivism and reject error theory. So perhaps you mean to reject a slightly stronger claim (to coin a term):
factual moral realism: MMR + moral assertions are not true or false purely by stipulation (or 'by definition'); rather, their truth-conditions at least partly involve empirical, worldly contingencies.
But here, again, it's hard to find room to reject moral realism. Perhaps some moral statements, like 'suffering is bad,' are true only by stipulation; but if 'punching people in the face causes suffering' is not also true by stipulation, then the conclusion 'punching people in the face is bad' will not be purely stipulative. Similarly, 'The Earth's equatorial circumference is ~40,075.017 km' is not true just by definition, even though we need somewhat arbitrary definitions and measurement standards to assert it. And rejecting the next doesn't sound right either:
correspondence moral realism: FMR + moral assertions are not true or false purely because of subjects' beliefs about the moral truth. For example, the truth-conditions for 'eating babies is bad' are not 'Eliezer Yudkowsky thinks eating babies is bad', nor even 'everyone thinks eating babies is bad'. Our opinions do play a role in what's right and wrong, but they don't do all the work.
So perhaps one of the following is closer to what you mean to deny:
moral transexperientialism: Moral facts are nontrivially sensitive to differences wholly independent of, and having no possible impact on, conscious experience. The goodness and badness of outcomes is not purely a matter of (i.e., is not fully fixed by) their consequences for sentients. This seems kin to Mark Johnston's criterion of 'response-dependence'. Something in this vicinity seems to be an important aspect of at least straw moral realism, but it's not playing a role here.
moral unconditionalism: There is a nontrivial sense in which a single specific foundation for (e.g., axiomatization of) the moral truths is the right one -- 'objectively', and not just according to itself or any persons or arbitrarily selected authority -- and all or most of the alternatives aren't the right one. (We might compare this to the view that there is only one right set of mathematical truths, and this rightness is not trivial or circular. Opposing views include mathematical conventionalism and 'if-thenism'.)
moral non-naturalism: Moral (or, more broadly, normative) facts are objective and worldly in an even stronger sense, and are special, sui generis, metaphysically distinct from the prosaic world described by physics.
Perhaps we should further divide this view into 'moral platonism', which reduces morality to logic/math but then treats logic/math as a transcendent, eternal Realm of Thingies and Stuff; vs. 'moral supernaturalism', which identifies morality more with souls and ghosts and magic and gods than with logical thingies. If this distinction isn't clear yet, perhaps we could stipulate that platonic thingies are acausal, whereas spooky supernatural moral thingies can play a role in the causal order. I think this moral supernaturalism, in the end, is what you chiefly have in mind when you criticize 'moral realism', since the idea that there are magical, irreducible Moral-in-Themselves Entities that can exert causal influences on us in their own right seems to be a prerequisite for the doctrine that any possible agent would be compelled (presumably by these special, magically moral objects or properties) to instantiate certain moral intuitions. Christianity and karma are good examples of moral supernaturalisms, since they treat certain moral or quasi-moral rules and properties as though they were irreducible physical laws or invisible sorcerers.
At the same time, it's not clear that davidpearce was endorsing anything in the vicinity of moral supernaturalism. (Though I suppose a vestigial form of this assumption might still then be playing a role in the background. It's a good thing it's nearly epistemic spring cleaning time.) His view seems somewhere in the vicinity of unconditionalism -- if he thinks anyone who disregards the interests of cows is being unconditionally epistemically irrational, and not just 'epistemically irrational given that all humans naturally care about suffering in an agent-neutral way'. The onus is then on him and pragmatist to explain on what non-normative basis we could ever be justified in accepting a normative standard.
I'm not sure this taxonomy is helpful from David Pearce's perspective. David Pearce's position is that there are universally motivating facts - facts whose truth, once known, is compelling for every possible sort of mind. This reifies his observation that the desire for happiness feels really, actually compelling to him and that this compellingness seems innate to qualia, so anyone who truly knew the facts about the quale would also know that compelling sense and act accordingly. This may not correspond exactly to what SEP says under moral realism (let me know if there's a standard term), but "realism" seems to describe the Pearcean (or Eliezer circa 1996) feeling about the subject - that happiness is really intrinsically preferable, that this is truth and not opinion.
From my perspective this is a confusion which I claim to fully and exactly understand, which licenses my definite rejection of the hypothesis. (The dawning of this understanding did in fact cause my definite rejection of the hypothesis in 2003.) The inherent-desirableness of happiness is your mind reifying the internal data describing its motivation to do something, so if you try to use your empathy to imagine another mind fully understanding this mysterious opaque data (quale) whose content is actually your internal code for "compelled to do that", you imagine the mind being compelled to do that. You'll be agnostic about whether or not this seems supernatural because you don't actually know where the mysterious compellingness comes from. From my perspective, this is "supernatural" because your story inherently revolves around mental facts you're not allowed to reduce to nonmental facts - any reduction to nonmental facts will let us construct a mind that doesn't care once the qualia aren't mysteriously irreducibly compelling anymore.
But this is a judgment I pass from reductionist knowledge - from a Pearcean perspective, there's just a mysteriously compelling quality about happiness, and to know this quale seems identical with being compelled by it; that's all your story. Well, that plus the fact that anyone who says that some minds might not be compelled by happiness seems to be asserting that happiness is objectively unimportant or that its rightness is a matter of mere opinion, which is obviously intuitively false. (As a moral cognitivist, of course, I agree that happiness is objectively important; I just know that "important" is a judgment about a certain logical truth that other minds do not find compelling.
Since in fact nothing can be intrinsically compelling to all minds, I have decided not to be an error theorist, as I would have to be if I took this impossible quality of intrinsic compellingness to be an unavoidable requirement of things being good, right, valuable, or important in the intuitive emotional sense. My old intuitive confusion about qualia doesn't seem worth respecting so much that I must now be indifferent between a universe of happiness vs. a universe of paperclips. The former is still better, it's just that now I know what "better" means.)
But if the very definitions of the debate are not automatically to judge in my favor, then we should have a term for what Pearce believes that reflects what Pearce thinks to be the case. "Moral realism" seems like a good term for "the existence of facts the knowledge of which is intrinsically and universally compelling, such as happiness and subjective desire". It may not describe what a moral cognitivist thinks is really going on, but "realism" seems to describe the feeling as it would occur to Pearce or Eliezer-1996. If not this term, then what? "Moral non-naturalism" is what a moral cognitivist says to deconstruct your theory - the self-evident intrinsic compellingness of happiness quales doesn't feel like asserting "non-naturalism" to David Pearce, although you could have a non-natural theory about how this mysterious observation was generated.
It's a reasonably good description, though wanting and liking seem to be neurologically separate, such that liking does not necessarily reflect a motivation, nor vice versa (see "Not for the sake of pleasure alone"). Think of the pleasurable but non-motivating effect of opioids such as heroin. Even in cases where wanting and liking occur together, this does not mean the liking aspect reduces to mere wanting.
Liking and disliking - good and bad feelings as qualia, especially in very intense amounts - seem to be intrinsically good or bad to those who are immediately feeling them. Reasoning could extend and generalize this.
Heh. Yes, I remember reading the section on noradrenergic vs. dopaminergic motivation in Pearce's BLTC as a 16-year-old. I used to be a Pearcean, ya know, hence the Superhappies. But that distinction didn't seem very relevant to the metaethical debate at hand.
It's possible (I hope) to believe future life can be based on information-sensitive gradients of (super)intelligent well-being without remotely endorsing any of my idiosyncratic views on consciousness, intelligence or anything else. That's the beauty of hedonic recalibration. In principle at least, hedonic recalibration can enrich your quality of life and yet leave most if not all of your existing values and preference architecture intact - including the belief that there are more important things in life than happiness.
Agreed. The conflict between the Superhappies and the Lord Pilot had nothing to do with different metaethical theories.
Also, we totally agree on wanting future civilization to contain very smart beings who are pretty happy most of the time. We just seem to disagree about whether it's important that they be super duper happy all of the time. The main relevance metaethics has to this is that once I understood there was no built-in axis of the universe to tell me that I as a good person ought to scale my intelligence as fast as possible so that I could be as happy as possible as soon as possible, I decided that I didn't really want to be super happy all the time, the way I'd always sort of accepted as a dutiful obligation while growing up reading David Pearce. Yes, it might be possible to do this in a way that would leave as much as possible of me intact, but why do it at all if that's not what I want?
There's also the important policy-relevant question of whether arbitrarily constructed AIs will make us super happy all the time or turn us into paperclips.
Huh, when I read the story, my impression was that it was Lord Pilot not understanding that it was a case of "Once you go black, you can't go back". Specifically, once you experience being superhappy, your previous metaethics stops making sense and you understand the imperative of relieving everyone of the unimaginable suffering of not being superhappy.
I thought it was relevant to this; if not, then what was meant by "motivation"?
Consciousness is that of which we can be most certain, and I would rather think that we are living in a virtual world within a universe with other, alien physical laws than that consciousness itself is not real. If it is not reducible to nonmental facts, then nonmental facts don't seem to account for everything that is relevant.