Comment author: Stuart_Armstrong 15 March 2013 08:55:01AM 0 points [-]

Bad, negative, and unpleasant all possess partial semantic correspondence, which justifies their being a value.

Then they are no longer purely descriptive, and I can't agree that they are logically or empirically true.

Comment author: JonatasMueller 16 March 2013 12:48:59AM 2 points [-]

Apart from that, what do you think of the other points? If you wish, we could continue the conversation in another online medium.

Comment author: JonatasMueller 14 March 2013 01:22:18PM 0 points [-]

I think this is an important point: the previously argued normative badness of directly accessible bad conscious experiences is not absolute and definitive, nor does it by itself justify actions. It should weigh on the scale with all the other factors involved, even indirect and instrumental ones that could affect intrinsic goodness or badness only in a distant and unclear way.

Comment author: Stuart_Armstrong 13 March 2013 03:46:04PM *  0 points [-]

If it is a bad occurrence, then the definition of ethics, at least as I see it (or as this dictionary has it, although meaning is not authoritative), is defining what is good and bad (values), as normative ethics, and bringing about good and avoiding bad, as applied ethics.

Which is exactly why I critiqued using the word "bad" for the conscious experiences, preferring "negative" or "unpleasant": words which describe the conscious experience in a similar way without sneaking in normative claims.

I have a personal moral system that isn't too far removed from the one you're espousing (a bit more emphasis on preference).

Could you explain this emphasis on preference a bit?

Er, nothing complex - in my ethics, there are cases where preferences trump feelings (eg experience machines) and cases where feelings trump preferences (eg drug users who are very unhappy). That's all I'm saying.

Comment author: JonatasMueller 13 March 2013 11:36:37PM *  1 point [-]

Bad, negative, and unpleasant all possess partial semantic correspondence, which justifies their being a value.

The normative claims need not be definitive and overruling in this case. Perhaps that is where your resistance to accepting them comes from. In moral realism, a justified preference or instrumental/indirect value that weighs more can overpower a direct feeling as well. That justified preference will ultimately be reducible to direct feelings in the present or the future, for oneself or for others, though.

Could you give me examples of any reasonable preferences that are not reducible to good and bad feelings in that sense?

Anyway, there is also the argument from personal identity, which calls for equalizing values across all subjects (valued equally, ceteris paribus) and their reasoning, if contextually equivalent. This could in itself be a partial refutation of the orthogonality thesis: a refutation in theory and for autonomous, free general superintelligent agents, but not necessarily for imprisoned and tampered-with ones.

Comment author: Stuart_Armstrong 13 March 2013 12:53:48PM 0 points [-]

A bad occurrence must be a bad ethical value.

Why? That's an assertion - it won't convince anyone who doesn't already agree with you. And you're using two meanings of the word "bad" - an unpleasant subjective experience, and badness according to a moral system. Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.

I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it's a logical transition.

Could you explain at greater length for me?

I have a personal moral system that isn't too far removed from the one you're espousing (a bit more emphasis on preference). However, I do not assume that this moral system can be deduced from universal or logical principles, for the reasons stated above. Most humans will have moral systems not too far removed from ours (in the sense of Kolmogorov complexity - there are many human cultural universals, and our moral instincts are generally similar), but this isn't a logical argument for the correctness of anything.

Comment author: JonatasMueller 13 March 2013 03:24:48PM *  0 points [-]

A bad occurrence must be a bad ethical value.

Why? That's an assertion - it won't convince anyone who doesn't already agree with you. And you're using two meanings of the word "bad" - an unpleasant subjective experience, and badness according to a moral system.

If it is a bad occurrence, then the definition of ethics, at least as I see it (or as this dictionary has it, although meaning is not authoritative), is defining what is good and bad (values), as normative ethics, and bringing about good and avoiding bad, as applied ethics. It seems to be a matter of including something in a verbal definition, so it seems correct. Moral realism would follow. That is not undesirable but helpful, since anti-realism implies that our values are not really valuable, but just fiction.

Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.

I agree; this would be a special case of incomplete knowledge about conscious animals. It would be possible, for instance, in some artificial intelligences, but they might learn about it indirectly by observing animals and humans and by coming into contact with human culture in various forms. Otherwise, they might become moral anti-realists.

I have a personal moral system that isn't too far removed from the one you're espousing (a bit more emphasis on preference).

Could you explain this emphasis on preference a bit?

Comment author: JonatasMueller 12 March 2013 11:51:34PM 1 point [-]

This is a relevant discussion in another thread, by the way:

http://lesswrong.com/lw/gu1/decision_theory_faq/8lt9?context=3

Comment author: Eliezer_Yudkowsky 12 March 2013 10:28:00PM 2 points [-]

Heh. Yes, I remember reading the section on noradrenergic vs. dopaminergic motivation in Pearce's BLTC as a 16-year-old. I used to be a Pearcean, ya know, hence the Superhappies. But that distinction didn't seem very relevant to the metaethical debate at hand.

Comment author: JonatasMueller 12 March 2013 10:39:02PM 0 points [-]

I thought it was relevant to this; if not, then what was meant by motivation?

The inherent-desirableness of happiness is your mind reifying the internal data describing its motivation to do something

Consciousness is that of which we can be most certain, and I would rather think that we are living in a virtual world within a universe with other, alien physical laws than that consciousness itself is not real. If it is not reducible to nonmental facts, then nonmental facts don't seem to account for everything that is relevant.

From my perspective, this is "supernatural" because your story inherently revolves around mental facts you're not allowed to reduce to nonmental facts - any reduction to nonmental facts will let us construct a mind that doesn't care once the qualia aren't mysteriously irreducibly compelling anymore.

In response to comment by RobbBB on Decision Theory FAQ
Comment author: Eliezer_Yudkowsky 12 March 2013 09:54:32PM 14 points [-]

I'm not sure this taxonomy is helpful from David Pearce's perspective. David Pearce's position is that there are universally motivating facts - facts whose truth, once known, is compelling for every possible sort of mind. This reifies his observation that the desire for happiness feels really, actually compelling to him and that this compellingness seems innate to qualia, so anyone who truly knew the facts about the quale would also know that compelling sense and act accordingly. This may not correspond exactly to what the SEP says under moral realism (let me know if there's a standard term), but realism seems to describe the Pearcean (or Eliezer circa 1996) feeling about the subject - that happiness is really intrinsically preferable, that this is truth and not opinion.

From my perspective this is a confusion which I claim to fully and exactly understand, which licenses my definite rejection of the hypothesis. (The dawning of this understanding did in fact cause my definite rejection of the hypothesis in 2003.) The inherent-desirableness of happiness is your mind reifying the internal data describing its motivation to do something, so if you try to use your empathy to imagine another mind fully understanding this mysterious opaque data (quale) whose content is actually your internal code for "compelled to do that", you imagine the mind being compelled to do that. You'll be agnostic about whether or not this seems supernatural because you don't actually know where the mysterious compellingness comes from.

From my perspective, this is "supernatural" because your story inherently revolves around mental facts you're not allowed to reduce to nonmental facts - any reduction to nonmental facts will let us construct a mind that doesn't care once the qualia aren't mysteriously irreducibly compelling anymore. But this is a judgment I pass from reductionist knowledge - from a Pearcean perspective, there's just a mysteriously compelling quality about happiness, and to know this quale seems identical with being compelled by it; that's all your story. Well, that plus the fact that anyone who says that some minds might not be compelled by happiness seems to be asserting that happiness is objectively unimportant or that its rightness is a matter of mere opinion, which is obviously intuitively false.

(As a moral cognitivist, of course, I agree that happiness is objectively important; I just know that "important" is a judgment about a certain logical truth that other minds do not find compelling. Since in fact nothing can be intrinsically compelling to all minds, I have decided not to be an error theorist as I would have to be if I took this impossible quality of intrinsic compellingness to be an unavoidable requirement of things being good, right, valuable, or important in the intuitive emotional sense. My old intuitive confusion about qualia doesn't seem worth respecting so much that I must now be indifferent between a universe of happiness vs. a universe of paperclips. The former is still better; it's just that now I know what "better" means.)

But if the very definitions of the debate are not automatically to judge in my favor, then we should have a term for what Pearce believes that reflects what Pearce thinks to be the case. "Moral realism" seems like a good term for "the existence of facts the knowledge of which is intrinsically and universally compelling, such as happiness and subjective desire". It may not describe what a moral cognitivist thinks is really going on, but "realism" seems to describe the feeling as it would occur to Pearce or Eliezer-1996. If not this term, then what? "Moral non-naturalism" is what a moral cognitivist says to deconstruct your theory - the self-evident intrinsic compellingness of happiness quales doesn't feel like asserting "non-naturalism" to David Pearce, although you could have a non-natural theory about how this mysterious observation was generated.

Comment author: JonatasMueller 12 March 2013 10:22:53PM *  0 points [-]

It's a reasonably good description, though wanting and liking seem to be neurologically separate, such that liking does not necessarily reflect a motivation, nor vice-versa (see: Not for the sake of pleasure alone). Think of the pleasurable but non-motivating effect of opioids such as heroin. Even in cases in which wanting and liking occur together, this does not necessarily reduce the liking aspect to mere wanting.

Liking and disliking, good and bad feelings as qualia, especially in very intense amounts, seem intrinsically good or bad to those who are immediately feeling them. Reasoning could extend and generalize this.

In response to comment by Larks on Decision Theory FAQ
Comment author: Pablo_Stafforini 12 March 2013 08:19:10PM *  3 points [-]

Rationality may imply moral conclusions in the same sense that it implies some factual conclusions: we think that folks who believe in creationism are irrational, because we think the evidence for evolution is sufficiently strong and also think that evolution is incompatible with creationism. Analogously, if the evidence for some moral truth is sufficiently strong, we may similarly accuse of irrationality those who fail to form their beliefs accordingly. So it is misleading to say that "rationality doesn't itself imply moral conclusions".

Comment author: JonatasMueller 12 March 2013 09:22:35PM 2 points [-]
Comment author: Stuart_Armstrong 12 March 2013 06:35:35PM 1 point [-]

Did you read my article Arguments against the Orthogonality Thesis?

Of course!

Likewise, an experience of extreme success or pleasure is intrinsically felt as good, and this goodness is a real occurrence in the world.

-> Likewise, an experience of extreme success or pleasure is often intrinsically felt as good, and this feeling of goodness is a real occurrence in the world.

And that renders the 4th point moot - your extra axiom (the one that goes from "is" to "ought") is "feelings of goodness are actually goodness". I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it's a logical transition.

Comment author: JonatasMueller 12 March 2013 08:43:59PM *  -1 points [-]

I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it's a logical transition.

Could you explain at greater length for me?

The feeling of badness is something bad (imagine yourself or someone being tortured and tell me it's not bad), and it is a real occurrence, because conscious contents are real occurrences. It is then a bad occurrence. A bad occurrence must be a bad ethical value. All of this is data, since conscious perceptions have a directly accessible nature; they are "is", and the "ought" is part of the definition of ethical value: what is good ought to be promoted, and what is bad ought to be avoided.

This does not mean that we should seek direct good and avoid direct bad only in the immediate present, such as partying without end, but that we should seek it in the present and the future, pursuing indirect values such as working, learning, and promoting peace and equality, so that the future, even in the longest term, will have direct value.

(To the anonymous users who down-voted this: do me the favor of posting a comment saying why you disagree, if you are sure that you are right and I am wrong; otherwise it's just rudeness. The down-vote should be used as a censoring mechanism for inappropriate posts rather than to express disagreement with a reasonable point of view. I'm using my time to freely explain this as a favor to whoever is reading, and it's a bit insulting and bad-mannered to down-vote it.)

Comment author: Stuart_Armstrong 12 March 2013 11:18:03AM 1 point [-]

I don't think that anyone can disagree with it (good conscious feelings are intrinsically good; bad conscious feelings are intrinsically bad), because it would be akin to disagreeing that, for instance, the color green feels greenish. Do you disagree with it?

I do disagree with it! :-) Here is what I agree with:

  • That humans have positive and negative conscious experiences.
  • That humans have an innate sense that morality exists: that good and bad mean something.
  • That humans have preferences.

I'll also agree that preferences often (but not always) track the positive or negative conscious experiences of that human, and that human impressions of good and bad sometimes (but not always) track the positive or negative conscious experiences of humans in general, at least approximately.

But I don't see any grounds for saying "positive conscious experiences are intrinsically (or logically) good". That seems to be putting in far too many extra connotations, and moving far beyond the facts we know.

Comment author: JonatasMueller 12 March 2013 05:39:19PM *  0 points [-]

I agree with what you agree with.

Did you read my article Arguments against the Orthogonality Thesis?

I think that the argument for the intrinsic value (goodness or badness) of conscious feelings goes like this:

  1. Conscious experiences are real, and are the most certain data about the world, because they are directly accessible and don't depend on inference, unlike the external world as we perceive it. It would not be possible to dismiss conscious experiences as unreal by inferring that they are not part of the external world, since they are more certain than the external world is. The external world could be an illusion, and we could be living inside a simulated virtual world, in an underlying universe that is alien and has different physical laws.

  2. Even though conscious experiences are representations (sometimes of external physical states, sometimes of abstract internal states), apart from what they represent they do exist in themselves as real phenomena (likely physical).

  3. Conscious experiences can be felt as intrinsically neutral, good, or bad in value, sometimes intensely so. For example, the bad value of having deep surgery without anesthesia is felt as intrinsically and intensely bad, and this badness is a real occurrence in the world. Likewise, an experience of extreme success or pleasure is intrinsically felt as good, and this goodness is a real occurrence in the world.

  4. Ethical value is, by definition, what is good and what is bad. We have directly accessible data of occurrences of intrinsic goodness and badness. They are ethical value.
