Ozy Frantz wrote a thoughtful response to the idea of weirdness points. It doesn't necessarily disagree, but it points out serious limitations in the idea. Peter Hurford, I think you'll appreciate their insights whether you agree or not.

https://thingofthings.wordpress.com/2015/04/14/on-weird-points/

You, the human, might say we really should pursue beauty and laughter and love (which is clearly very important), and that we p-should sort pebbles (but that doesn't really matter). And that our way of life is really better than the Pebblesorters', although their way of life has the utterly irrelevant property of being p-better.

But the Pebblesorters would say we h-should pursue beauty and laughter and love (boring!), and that we really should sort pebbles (which is the self-evident meaning of life). Further, they would say their way of life is really better than ours, even though ours has some stupid old h-betterness.

I side with you, the human, of course, but perhaps it would be better (h-better and p-better) to say we are only h-right, not right without qualification. Of course, from the inside of our minds it feels like we are simply right. But the Pebblesorters feel the same way, and if they're as metaethically astute as we are, then it seems they are no more likely to be wrong than we are.

For what it's worth, my ethic is "You should act on that set of motives which leads to the most overall value." (Very similar to Alonzo Fyfe's desirism, although I define value a bit differently.) On this view, we should pursue beauty and laughter and love, while the Pebblesorters should sort pebbles, on the same definition of "should."

EDIT: Upon reading "No License To Be Human," I am embarrassed to realize that my attempted coining of the term "h-should" in response to this is woefully unoriginal. Every time I think I have an original thought, someone else turns out to have thought of it years earlier!

It seems to me that moral non-realism has come a long way since Nietzsche. I am not sure whether Nietzsche has much substance to add to today's metaethics discourse, though admittedly I've only read one of his books. It was The Genealogy of Morals, which I found entertaining and thought-provoking, but unhelpful to the moral realism vs. non-realism debate.

I agree with the concerns of AndyWood and others who have made similar comments, and I'll be paying attention to see whether the later installments of the metaethics sequence have answered them. Before I read them, here is my own summarized set of concerns. (I apologize if responding to a given part of a sequence before reading the later parts is bad form; please let me know if this is the case.)

Eliezer seems to assume that any two neurologically normal humans would agree on the right function if they were fully rational and sufficiently informed, citing the psychological unity of humankind as support. But even with the present degree of psychological unity, it seems to me fully possible that people's values could truly diverge in quite a few not-fully-reconcilable ways--although perhaps the divergence would be surprisingly small; I just don't know. This is, I think we mostly agree, an open question for further research to explore.

Eliezer's way of viewing morality seems like it would run into trouble if it turns out that two different people really do use two different right functions (such that even their CEVs would diverge from one another). Suppose Bob's right function basically boils down to "does it maximize preference fulfillment?" (or some other utilitarian function) and Sally's right function basically boils down to "does it follow a maxim which can be universally willed by a rational agent?" (or some other deontological function). Suppose Bob and Sally are committed to these functions even though each person is fully rational and sufficiently informed--which does not seem implausible.

In this case, the fact that each of them is using a one-place function is of no help, because they are using different one-place functions. Eliezer would then have no immediately obvious legitimate way to claim that his right function is the truer or better one.

To use a more extreme example: What if the Nazis were completely right, according to their own right function? The moral realist in me very much wants to say that surely either (a) the Nazis' right function is the same as mine, and their normative ethics were mistaken by that very standard (which is Eliezer's view, I think), or (b) the Nazis' normative ethics matched their own right function, but their right function is not merely different from our right function, but outright inferior to it.

If (a) is false AND we are still committed to saying the Nazis were really wrong (there is also option (c), that the Nazis were not wrong, but I'd like to exhaust the alternatives before seriously considering that possibility), then we need some means of distinguishing between better right functions and crummier right functions. I have some extremely vague ideas about how to do this, but I'm very curious to see what other thinkers, including Eliezer, have come up with. If the Nazis' right function is inferior by some external standard (a standard that is really right), then what is this standard?

(Admittedly, as I understand it, the Nazis had many false beliefs about the Jews, so it may be debatable what their morality would have been if they had been fully rational and sufficiently informed.)

In summary, if we all indeed use the same right function deep down, this would be very convenient--but I worry that it is more convenient than reality really is.

It is sometimes argued that happiness is good and suffering is bad. (This is tentatively my own view, but explaining the meaning of "good" and "bad," defending its truth, and expanding the view to account for the additional categories of "right" and "wrong" is beyond the scope of this comment.)

If this is true, then depending on what kind of truth it is, it may also be true in all possible worlds--and a fortiori, on all possible planets in this universe. Furthermore, even if it is true on all possible planets that happiness is good and suffering is bad, this does not preclude the possibility that on some planets, murder and theft might be the best way toward everyone's happiness, while compassion and friendship might lead to everyone's misery. In such a case, to whatever degree this scenario is theoretically possible, compassion and friendship would be bad, while murder and theft would be good.

Hence we can see that it might be the case that normative ethical truths differ from one planet to another, but metaethical truths are the same everywhere. On one level, this is a kind of moral relativism, but it is also based on an absolute principle. I personally think it is a plausible view, while I admit that this comment provides little exposition and no defense of it.

A great post. It captured a lot of intriguing questions I currently have about ethics. One question I have, which I am curious to see addressed in further posts in this sequence, is: Once we dissolve the question of "fairness" (or "morality" or any other such term) and taboo the term, is there a common referent that all parties are really discussing, or do the parties have fundamentally different and irreconcilable ideas of what fairness (or morality, etc.) is? Is Xannon's "fairness" merely a homonym for Yancy's "fairness" rather than something they could figure out and agree on?

If the different views of "fairness" are irreconcilable, then I am inclined to wonder whether moral notions really do generally function (often without this intention) as a means for each party to bamboozle the other into giving the speaker what she wants, by appealing to a multifaceted "moral" concept that creates the mere illusion of common ground (similar to how "sound" functions in the question of the tree falling). Perhaps Xannon wants agreement, Yancy wants equal division, and there is no common ground between them except for a shared delusion that there is common ground. (I certainly hope this isn't true.)

More generally, what about different ethical systems? Although we can easily rule out non-naturalist systems, if two different moral reductionist systems clash (yet neither contradicts known facts), which one is "best"? How can we answer this question without defining the word "best," and what if the two systems disagree on the definition? It would seem to result in an infinite recursion of criteria disagreements--even between two systems that agree on all the facts. (As I understand it, Luke's discussion of pluralistic moral reductionism is relevant here, but I have not yet read it and am very distressed that he is apparently never going to finish it.)

I tentatively stand by my own theory of moral reductionism (similar to Fyfe's desirism, with traces of hedonistic utilitarianism and Carrier's goal theory), but it concerns me that different people might be using moral concepts in irreconcilably different ways, and that some of those that contradict mine might be just as "legitimate" as mine. After reading the Human's Guide to Words sequence, I am more hesitant to use any kind of appeal to common usage, which is what I'd previously done. My views and arguments may continue to change as I read further, and I try always to be grateful for reading things that change them.

Anyhow, I expect to enjoy reading the rest of the metaethics sequence. (I'll read Luke's perpetually unfinished metaethics sequence afterwards.)

"Instead of creating utility, which is hard, we should all train ourselves to find utility in what we already have."

Why not both?