thomblake comments on Attention Less Wrong: We need an FAQ - Less Wrong
Wait, doesn't Eliezer claim there's an objective morality?
I would describe Eliezer's position as:
- standard relativism,
- minus the popular confusion that relativism means that you would or could choose to find no moral arguments compelling,
- plus the belief that nearly all humans would, with sufficient reflection, find nearly the same moral arguments compelling because of our shared genetic heritage.
Eliezer objects to being called a relativist, but I think that this is just semantics.
The third bullet goes so far beyond relativism that it seems quite justified to deny the word. If just about everyone everywhere is observed to have a substantial commonality in what they think right or wrong (whether or not genetic heritage has anything to do with it), then that's enough to call it objective, even if we do not know why it is so, how it came to be, or how it works. Knowledge may be imperfect, and people may disagree about it, but that does not mean that there is nothing that it is knowledge about.
We can imagine Paperclippers, Pebblesorters, Baby Eaters, and Superhappies, but I don't take these imagined beings seriously except as interesting thought experiments, to be trumped if and when we actually encounter intelligent aliens.
(BTW, regarding accessibility to newcomers: I just made four references that will be immediately obvious to any long-time reader, but completely opaque to any newcomer. A glossary page would be a good idea.)
Paperclippers
Pebblesorters
Baby Eaters and Superhappies
He's a subjectivist as well.
I think that's what Tyrrell means by standard relativism
Well they're different.
And Eliezer is both.
This partially depends on where you place 'ethics'. If ethics is worried about "what's right" in Eliezer's terms, then it's not relativist at all - the pebble-sorters are doing something entirely different from ethics when they argue.
However, if you think the pebble-sorters are trying to answer the question "what should I do" and properly come up with answers that are prime, and you think that answering that question is what ethics is about, then Eliezer is some sort of relativist.
And the answers to these questions will inform the question about subjectivism. In the first case, clearly what's right doesn't depend upon what anybody thinks about what's right; it's a non-relativist objectivism.
In the second case, there is still room to ask whether the correct answer to the pebblesorters asking "what should I do" depends upon their thoughts on the matter, or if it's something non-mental that determines they should do what's prime; thus, it could be an objective or subjective relativism.
I don't know of any relativists who aren't subjectivists. That article points out that non-subjectivist relativism is a logical possibility, but the article doesn't give any actual examples of someone defending such a position. I wonder if any exist.
Hobbes might be a candidate if you're okay with distinguishing laws and dictates from the mental states of rulers.
The article does give an example: cultural relativism. It's objective in that it doesn't depend on the mind of the individual, but it's still relative to something: the culture you are in.
That is not how I read it. There's a big parenthetical aside breaking up the flow, but excising that leaves
(Bolding added.) So, either individualistic or cultural relativisms can be subjectivist. That leaves the possibility, in principle, that either could be non-subjectivist, but the article gives no example of someone actually staking out such a position.
You continue:
I think that cultural relativism is mind-dependent in the sense that the article uses the term.
OK, location relativism then. It doesn't depend on what's going on inside your head, but it's still relative.
But is anyone a location-relativist for reasons that don't derive from being a cultural-relativist or a "sovereign-command" relativist (according to which the moral is whatever someone with lawful authority over you says it is)?
Now that I think of it, though, certain kinds of non-subjectivist relativism are probably very common, if rarely defended by philosophers. I'm thinking of the claim that morality is whatever maximizes your genetic fitness, or that morality is whatever maximizes your financial earnings (even if you have no desire for genetic fitness or financial earnings).
These are relativisms because something might increase your genetic fitness (say) while it decreases mine. But they are not subjectivist because they measure morality according to something independent of anyone's state of mind.
I'm confused by the terminology, but I think I would be a relativist objectivist.
I certainly think that morality is relative -- what is moral is agent-dependent -- but whether or not the agent is behaving morally is an objective fact about that agent's behavior, because the behavior either does or doesn't conform with that agent's morality.
But I don't think the distinction between a relativist objectivist and a relativist subjectivist is terribly exciting: it just depends on whether you consider an agent 'moral' if it conforms to its morality (relativist objectivist) or yours (relativist subjectivist).
But maybe I've got it wrong, because this view seems so reasonable, whereas you've indicated that it's rare.
The key phrase for subjectivism is "mind dependent" so if you think other people's morality comes from their minds then you are a relativist subjectivist.
I just realized I don't think people should conform to their own morality; I think people should conform to my morality, which I guess would make me a subjective non-relativist.
So you believe that the word morality is a two-place word and means what an agent would want to do under certain circumstances? What word do you use to mean what actually ought to be done? The particular thing that you, and to a large degree all humans, would want to do under specified circumstances? Or do you believe there isn't anything that should be done other than whatever the agents that exist want? Please note that that position is also a statement about what the universe ought to look like.
Yes, morality is a two-place word -- the evaluation function of whether an action is moral has two inputs: agent, action. "Agent" can be replaced by anything that conceivably has agency, so morality can be considered system-dependent, where systems include social groups and all humanity, etc.
I wouldn't say morality is what the agent wants to do, but rather what the agent ought to do, given its preferences. So I think I am still using it in the usual sense.
I can talk about what I ought to do, but it seems to me I can't talk about what another agent ought to do outside their system of preferences. If I had their preferences, I ought to do what they ought to do. If they had my preferences, they ought to do what I ought to do. But to consider what they ought to do, with some mixture of preferences, is incoherent.
I can have a preference for what another agent does, of course, but this is different than asserting a morality. For example, if they don't do what I think is moral, I'm not morally culpable. I don't have their agency.
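The "two-place" usage described above can be sketched as a toy function (all names and preference data here are hypothetical illustrations, assuming an agent's preferences can be summarized as a set of endorsed actions):

```python
# A "two-place" morality: the evaluation function takes both an agent
# (represented by its preferences) and an action.
def moral(agent_preferences, action):
    """Return True iff the action conforms to the agent's own morality."""
    return action in agent_preferences["endorsed_actions"]

# Fixing the agent yields the familiar one-place predicate:
# each agent gets its own should_<agent> function.
def make_should(agent_preferences):
    return lambda action: moral(agent_preferences, action)

# Purely illustrative agents.
byrnema = {"endorsed_actions": {"keep promises", "tell the truth"}}
pebblesorter = {"endorsed_actions": {"make prime heaps"}}

should_byrnema = make_should(byrnema)
should_pebblesorter = make_should(pebblesorter)

print(should_byrnema("tell the truth"))         # True
print(should_byrnema("make prime heaps"))       # False
print(should_pebblesorter("make prime heaps"))  # True
```

On this sketch, whether an agent acts morally is an objective fact (set membership), even though the standard applied differs from agent to agent.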
As far as I can tell, we don't disagree on any matter of fact. I agree that we can only optimize our own actions. I agree that other agents won't necessarily find our moral arguments persuasive. I just don't agree that the words moral and ought should be used the way you do.
To the greater LW community: Is there some way we can come up with standard terminology for this sort of thing? I myself have moved toward using the terminology used by Eliezer, but not everyone has. Are there severe objections to his terminology and if so, are there any other terminologies you think we should adopt as standard?
You're thinking of the wrong sense of objective. An objective morality, according to this article, is a morality that doesn't depend on the subject's mind. It depends on something else. I.e., if we were trying to determine what should_byrnema is, we wouldn't look at your preferences; instead we would look somewhere else. So for example:
A nonrelativist objectivist would say that we would look at the one true universally compelling morality that's written into the fabric of reality (or something like that). So should_byrnema is just should, period.
A relativist objectivist might say (this is just one example - cultural relativism), that we would look for should_byrnema in the culture that you are currently embedded in. So should_byrnema is should_culture.
I'm not sure that subjective nonrelativism is a possibility though.
I think "subjective" means based on opinion (a mind's assessment).
If Megan is moral if she thinks she's moral, then the morality of Megan is subjective and depends on her mind. If Megan is moral if I think she's moral, then it's subjective and depends on my mind.
I think that whether an agent is moral or not is a fact, and doesn't depend upon the opinion/assessment of any mind. But we would still look at the agent's preferences to determine the fact. I thought this was already described by the word 'relative'.
"Subjective" has many meanings. The article uses "subjective" to mean dependent on the mind in any way. Not just a mind's assessment.
Given this definition of subjective, the article would classify your last paragraph as an example of subjective relativism.
Surely it's a logical possibility. Stipulate: "What's right is either X or Y, where we ask each person in the universe to think of a random integer, sum them, and pull off the last bit, 0 meaning X is right and 1 meaning Y is right."
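The stipulated procedure can be written out directly (a toy illustration; the integers stand in for each person's arbitrary choice):

```python
# Toy version of the stipulation: sum everyone's chosen integer and
# pull off the last bit; an even sum (bit 0) selects X as right,
# an odd sum (bit 1) selects Y.
def stipulated_right(chosen_integers):
    return "X" if sum(chosen_integers) % 2 == 0 else "Y"

print(stipulated_right([3, 8, 14]))  # sum 25, odd -> "Y"
print(stipulated_right([2, 4]))      # sum 6, even -> "X"
```

The output is the same for every agent (nonrelativist), yet it depends entirely on what's in everyone's heads (subjective).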
ETA: CEV, perhaps?
Wouldn't "Everyone should do what my moral code says they should" be subjective nonrelativism? Surely there are lots of people who believe that.
Is CEV even an ethical theory? I thought it was more of an algorithm for extracting human preferences to put them in an AI.
well then, I'm just not imaginative enough!
I'm fairly certain you could find people implicitly arguing for some varieties of non-subjective relativism. For example, cultural relativism advances the view that one's culture determines the facts about ethics for oneself, but it's not necessarily mental acts on the part of persons in the culture that determine the facts about ethics. Similarly, Divine Command Theory will give you different answers for different gods, but it's not the mental acts of the persons involved that determine the facts about ethics.
It's an interesting question. The SEP link in Jack's comment actually gives Divine Command Theory as an example of non-relativistic subjectivism. It's subjectivist because what is moral depends on a mental fact about that god — namely, whether that god approves.
It's less clear whether cultural relativism is subjectivist. I'm inclined to think of culture as depending to a large extent on the minds of the people in that culture. (Different peoples whose mental content differed in the right way would have different cultures, even if their material conditions were otherwise identical.) This would make cultural relativism subjectivist as well.
Indeed, I was glossing over that distinction; if you think cultures or God have mental states, then that's a different story. There's also a question of how much "subjectivism" really depends on the relevant minds, and in what way.
I could construct further examples, but we already understand it's logically possible, so that would not be of any help if nobody is advocating them. I think the well has run dry on my end w.r.t. examples of relativism in the wild.
Ah, I see. I had always understood relativism to mean what the article calls subjective relativism.
If we're talking about the meanings of terms, how is semantics not a relevant question?
You asked what Eliezer claims, not for the words that he uses to claim it.
Objective in the sense that you can point to it, but can't make it up according to your whims. But not objective in the sense of "being written into the fabric of the universe" or that every single agent, with enough reflection, would realize that it's the "correct" morality.
I still haven't gotten through the metaethics sequence yet, so I can't answer that exactly, but if he believed in an "objective" morality (i.e. some definition of "should" that is meaningful from the perspective of fundamental reality, not based on any facts about minds, or an internally-consistent set of universally compelling moral arguments), then he would probably expect a superintelligence to be smart enough (many times over) to discover it and follow it, and that is quite the opposite of his current position. If I recall correctly, that was his pre-2002 position, and he now considers it a huge mistake.
"Fundamental reality" doesn't have a perspective, so it seems weird to draw the lines there. Rather, there's a fact about what's prime, and the pebblesorters care about that, and there's a fact about what's right, and humans care about that. We can be mistaken about what's right, and we can have disagreements about what's right, and we can change our minds. And given time and progress, we will hopefully get closer to understanding what's right. And if the pebblesorters claim that they care about what's right rather than what's prime, they're factually incorrect.
Of course — I was just doing my best to imagine the mindset of a non-religious person who believes in an objectively objective morality (i.e. that even in the absence of a deity, the universe still somehow imposes moral laws). Admittedly, I don't encounter too many of those (people who think they've devised universally compelling moral arguments are more common; even big-O Objectivists seem to just be an overconfident version of that), but I do still meet them from time to time, e.g. people who manage to believe in things like "natural law" or "natural rights" (as facts about the universe rather than facts about human minds) without theistic belief.
All I was saying was that things like that are what the phrase "objective morality" make me think of, and that Eliezer's conclusions are different enough that I'm not sure they quite fit in the same category. His may be an "objective morality" by our best definitions of "objective" and "morality", but it could make people (especially new people) imagine all the wrong things.
For example, here. Read the whole thing, not just this illustrative quote:
That's a part of the metaethics sequence, to which this posting might be a suitable entry point, which says where he's going, and tells you what to read before going there.
"Objective morality" usually implies some outside force imposing morality, and the debate over metaethics (at least in the wider world of philosophy, if not on LW) is usually presented as a choice between that and relativism. If I'm understanding Eliezer's current position correctly, it's that morality is an objective fact about subjective minds. This quote sums it up quite well:
Unfortunately, when people talk about "objective morality", they're usually talking about the commandments the Lord hath given unto us, or they're talking about coming up with a magical definition of "should" that automatically is the correct one for every being in the universe and doesn't depend on any facts about human minds, or they're talking about their great new fake utility function that correctly compresses all human values (at least all good human values, recursion be damned). I don't know how Eliezer feels about the terminology, but if it were up to me, I'd agree with advising against "claim[ing] an objective morality", if only so that people have to think about what parts of their arguments are more about words than reality.