Stuart_Armstrong comments on Questions for Moral Realists - Less Wrong
"Wanting can be misleading in terms of the long term or in terms of the internal emotional state with which it is connected, but it cannot be misleading or wrong in itself, in that it is a clear preference."
Indeed, but what separates wanting from liking is that preferences can be wrong, since they require no empirical basis, while liking in itself cannot be wrong and does have an empirical basis.
When something is rightfully wanted, that wanting has a justification. Liking, understood as good feelings, is one such justification; another is avoiding bad feelings, and these can be causally extended to include instrumental actions that bring them about in indirect ways.
Then how can wants be wrong? They are there, they are conscious preferences (you can introspect and find them, just as you can with liking), and they have as much empirical basis as liking.
And wants can be seen as more fundamental: they are your preferences, and they inform your actions (along with your world model), whereas using liking to guide action involves having a (potentially flawed) mental model of what will increase your good experiences and diminish your bad ones.
The game can be continued endlessly. What you're saying is that your moral system revolves around liking, and that the arguments that this should be so are convincing to you. But you can't convince wanters with the same arguments; their convictions are different, and neither set of arguments is "logical". It becomes a taste-based debate.
Sorry, I thought you already understood why wanting can be wrong.
Example 1: imagine a person named Eliezer walks to an ice cream stand and sees a new flavor, X. Eliezer wants to try flavor X, buys it, and eats it. The taste is awful and Eliezer vomits it up. Eliezer concludes that wanting can be wrong, and that it differs from liking in this sense.
Example 2: imagine Eliezer watched a movie in which some homophobic gangsters go about killing homosexuals. Eliezer gets inspired and wants to kill homosexuals too, so he picks up a knife, finds a nice-looking young man, and prepares to torture and kill him. But Eliezer looks at the muscular body of the young man, starts to feel homosexual urges and desires, and instead makes love to him. Eliezer concludes that he had wanted something wrong, and that he had been a bigot and homosexual all along: liking men, but not wanting to kill them.
I understand why those examples are wrong, because I have certain beliefs (broadly, but not universally, shared). But I don't see how any of those beliefs can be logically deduced.
Quite a lot follows from "positive conscious experiences are intrinsically valuable", but that axiom won't be accepted unless you already partially agree with it anyway.
I don't think that someone can disagree with it (good conscious feelings are intrinsically good; bad conscious feelings are intrinsically bad), because it would be akin to disagreeing that, for instance, the color green feels greenish. Do you disagree with it?
Can you elaborate? I don't understand... Many valid wants or beliefs can ultimately be reduced to good and bad feelings, in the present or future, for oneself or for others, via instrumental values such as peace, learning, curiosity, love, security, longevity, health, science...
I do disagree with it! :-) Here is what I agree with:
I'll also agree that preferences often (but not always) track the positive or negative conscious experiences of that human, and that human impressions of good and bad sometimes (but not always) track the positive or negative conscious experiences of humans in general, at least approximately.
But I don't see any grounds for saying "positive conscious experiences are intrinsically (or logically) good". That seems to be adding far too many extra connotations, and moving far beyond the facts we know.
I agree with what you agree with.
Did you read my article Arguments against the Orthogonality Thesis?
I think that the argument for the intrinsic value (goodness or badness) of conscious feelings goes like this:
Conscious experiences are real, and are the most certain data about the world, because they are directly accessible and don't depend on inference, unlike the external world as we perceive it. It would not be possible to dismiss conscious experiences as unreal by inferring that they are not part of the external world, since they are more certain than the external world is. The external world could be an illusion: we could be living inside a simulated virtual world, in an underlying universe that is alien and has different physical laws.
Even though conscious experiences are representations (sometimes of external physical states, sometimes of abstract internal states), apart from what they represent they exist in themselves as real phenomena (likely physical).
Conscious experiences can be felt as intrinsically neutral, good, or bad in value, sometimes intensely so. For example, the bad value of having deep surgery without anesthesia is felt as intrinsically and intensely bad, and this badness is a real occurrence in the world. Likewise, an experience of extreme success or pleasure is intrinsically felt as good, and this goodness is a real occurrence in the world.
Ethical value is, by definition, what is good and what is bad. We have directly accessible data of occurrences of intrinsic goodness and badness. They are ethical value.
Of course!
-> Likewise, an experience of extreme success or pleasure is often intrinsically felt as good, and this feeling of goodness is a real occurrence in the world.
And that renders the 4th point moot - your extra axiom (the one that goes from "is" to "ought") is "feelings of goodness are actually goodness". I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it's a logical transition.
This is a relevant discussion in another thread, by the way:
http://lesswrong.com/lw/gu1/decision_theory_faq/8lt9?context=3
Could you explain more at length for me?
The feeling of badness is something bad (imagine yourself or someone else being tortured and tell me it's not bad), and it is a real occurrence, because conscious contents are real occurrences. It is therefore a bad occurrence, and a bad occurrence must be a bad ethical value. All of this is data, since conscious perceptions are directly accessible by nature: they are "is", and the "ought" is part of the definition of ethical value, namely that what is good ought to be promoted and what is bad ought to be avoided.
This does not mean that we should seek direct good and avoid direct bad only in the immediate present, such as partying without end; it means that we should seek it in both the present and the future, pursuing indirect values such as working, learning, and promoting peace and equality, so that the future, even in the longest term, will have direct value.
(To the anonymous users who down-voted this: do me the favor of posting a comment saying why you disagree, if you are sure that you are right and I am wrong; otherwise it's just rudeness. The down-vote should be used as a censoring mechanism for inappropriate posts rather than to express disagreement with a reasonable point of view. I'm using my time to freely explain this as a favor to whoever is reading, and it's a bit insulting and bad-mannered to down-vote it.)