There seems to be a widespread impression that the metaethics sequence was not very successful as an explanation of Eliezer Yudkowsky's views. It even says so on the wiki. And frankly, I'm puzzled by this... hence the "apparently" in this post's title. When I read the metaethics sequence, it seemed to make perfect sense to me. I can think of a couple things that may have made me different from the average OB/LW reader in this regard:

  1. I read Three Worlds Collide before doing my systematic read-through of the sequences.
  2. I have a background in academic philosophy, so I had a thought similar to Richard Chappell's linking of Eliezer's metaethics to rigid designators, independently of Richard.
Reading the comments on the metaethics sequence, though, hasn't enlightened me about what exactly people had a problem with, aside from a lot of arguing about definitions over whether Eliezer counts as a relativist.

What's going on here? I ask mainly because I'm thinking of trying to write a post (or sequence?) explaining the metaethics sequence, and I'm wondering what points I should address, what issues I should look out for, etc.

 


I think what confuses people is that he

1) claims that morality isn't arbitrary and we can make definitive statements about it

2) Also claims no universally compelling arguments.

The confusion is resolved by realizing that he defines the words "moral" and "good" as roughly equivalent to human CEV.

So according to Eliezer, it's not that humans think love, pleasure, and equality are Good and paperclippers think paperclips are Good. It's that love, pleasure, and equality are part of the definition of good, while paperclips are just part of the definition of paperclippy. The Paperclipper doesn't think paperclips are good...it simply doesn't care about good, instead pursuing paperclippy.

Thus, moral relativism can be decried while "no universally compelling arguments" can be defended. Under this semantic structure, the Paperclipper will just say "okay, sure...killing is immoral, but I don't really care as long as it's paperclippy."

Thus, arguments about morality among humans are analogous to Pebblesorter arguments about which piles are correct. In both cases, there is a correct answer.

It's an entirely semantic confusion.

I suggest that ethicists ought to hav... (read more)
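A minimal sketch of the semantic point above (the value functions and names are hypothetical illustrations, not anything from the sequence): each agent's planner consults only its own fixed predicate, so establishing that an action is not "good" gives a paperclipper no reason to act differently.

```python
# Hypothetical toy model: two agents whose planners consult different fixed
# predicates. Arguments about "good" are factual and have right answers, yet
# they give the paperclipper no motivation, because its decision procedure
# never looks at "good" at all.

def good(outcome):
    # Stand-in for the fixed human predicate: love and pleasure count for it,
    # killing counts against it.
    return outcome.get("love", 0) + outcome.get("pleasure", 0) - 10 * outcome.get("killing", 0)

def paperclippy(outcome):
    # A different fixed predicate: number of paperclips, nothing else.
    return outcome.get("paperclips", 0)

class Agent:
    def __init__(self, criterion):
        self.criterion = criterion  # the only thing this agent's planner consults

    def choose(self, options):
        return max(options, key=self.criterion)

human = Agent(good)
clippy = Agent(paperclippy)

options = [
    {"love": 3, "pleasure": 2},
    {"paperclips": 9, "killing": 5},
]

# Both agents could agree on the *fact* that the second option is less good and
# more paperclippy; they simply optimize different predicates.
print(human.choose(options))   # {'love': 3, 'pleasure': 2}
print(clippy.choose(options))  # {'paperclips': 9, 'killing': 5}
```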

I think what confuses people is that he 1) claims that morality isn't arbitrary and we can make definitive statements about it 2) Also claims no universally compelling arguments.

How does this differ from gustatory preferences?

1a) My preference for vanilla over chocolate ice cream is not arbitrary -- I really do have that preference, and I can't will myself to have a different one, and there are specific physical causes for my preference being what it is. To call the preference 'arbitrary' is like calling gravitation or pencils 'arbitrary', and carries no sting.

1b) My preference is physically instantiated, and we can make definitive statements about it, as about any other natural phenomenon.

2) There is no argument that could force any and all possible minds to like vanilla ice cream.

I raise the analogy because it seems an obvious one to me, so I don't see where the confusion is. Eliezer views ethics the same way just about everyone intuitively views aesthetics -- as a body of facts that can be empirically studied and are not purely a matter of personal opinion or ad-hoc stipulation -- facts, though, that make ineliminable reference to the neurally encoded preferences of specific o... (read more)

8Ishaan10y
Good[1]: The human consensus on morality, the human CEV, the contents of a Friendly AI's utility function; "sugar is sweet, love is good". There is one correct definition of Good. "Pebblesorters do not care about good or evil, they care about grouping things into primes. Paperclippers do not care about good or evil, they care about paperclips."

Good[2]: An individual's morality, a special subset of an agent's utility function (especially the subset that pertains to how everyone ought to act). "I feel sugar is yummy, but I don't mind if you don't agree. However, I feel love is good, and if you don't agree we can't be friends."... "Pebblesorters think making prime-numbered pebble piles is good. Paperclippers think making paperclips is good." (A pebblesorter might selfishly prefer to maximize the number of pebble piles that they make themselves, but the same pebblesorter believes everyone ought to act to maximize the total number of pebble piles, rather than selfishly maximizing their own pebble piles. A perfectly good pebblesorter seeks only to maximize pebbles. Selfish pebblesorters hoard resources to maximize their own personal pebble creation. Evil pebblesorters knowingly make non-prime abominations.)

Do you see what I mean by "semantic" confusion now? Eliezer (like most moral realists, universalists, etc.) is using Good[1]. Those confused by his writing (who are accustomed to descriptive moral relativism, nihilism, etc.) are using Good[2]. The maps are actually nearly identical in meaning, but because they are written in different languages it's difficult to see that the maps are nearly identical. I'm suggesting that Good[1] and Good[2] are sufficiently different that people who talk about morality ought to have different words for them. This is one of those "if a tree falls in the forest, does it make a sound" debates, which are utterly useless because they center entirely around the definition of sound.

Yup, I agree completely, that's exactly the correct wa
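For concreteness, here is a tiny hypothetical sketch of the two usages (the value sets and names are invented for illustration): Good[1] is one fixed predicate, Good[2] is indexed to whichever agent is speaking.

```python
# Hypothetical illustration of Good[1] vs Good[2]; the value sets are invented.

HUMAN_STANDARD = {"love", "pleasure", "equality"}   # stand-in for the one fixed standard

def good_1(outcome_features):
    # Good[1]: the same predicate no matter who is speaking. Pebblesorters
    # aren't wrong about it; their planners simply never consult it.
    return bool(outcome_features & HUMAN_STANDARD)

def good_2(speaker_ought_values, outcome_features):
    # Good[2]: indexed to the speaker's "how everyone ought to act" values,
    # so its extension varies from agent to agent.
    return bool(outcome_features & speaker_ought_values)

pebblesorter_oughts = {"prime pebble piles"}
print(good_1({"prime pebble piles"}))                       # False under the fixed standard
print(good_2(pebblesorter_oughts, {"prime pebble piles"}))  # True in the pebblesorter's mouth
```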
1buybuydandavis10y
Your analysis clearly describes some of my understanding of what EY says. I use "yummy" as a go-to analogy for morality as well. But EY also seems to be making a universalist argument, at least for "normal" humans. Because he talks about abstract computation, leaving particular brains behind, it's just unclear to me whether he's a subjectivist or a universalist. The "no universally compelling argument" applies to Clippy versus us, but is there also no universally compelling argument among all of "us" as well?
0Jack10y
"Universalist" and "Subjectivist" aren't opposed or conflicting terms. "Subjective" simply says that moral statements are really statements about the attitudes or opinions of people (or something else with a mind). The opposing term is "objective". "Universalist" and "relativist" are on a different dimension from subjective and objective. Universal vs. relative is about how variable or not variable morality is. You could have a metaethical theory that morality is both objective and relative. For example, you could define morality as what the law says and it will be relative from country to country as laws differ. You could also have a subjective and universal meta-ethics. Morality judgments could be statements about the attitudes of people but all people could have the same attitudes. I take Eliezer to hold something like the latter-- moral judgments aren't about people's attitudes simpliciter: they're about what they would be if people were perfectly rational and had perfect information (he's hardly the first among philosophers, here). It is possible that the outcome of that would be more or less universal among humans or even a larger group. Or at least it some subset of attitudes might be universal. But I could be wrong about his view: I feel like I just end up reading my view into it whenever I try to describe his.
-2TheAncientGeek10y
If morality varies with individuals, as required by subjectivism, it is not at all universal, so the two are not orthogonal. If morality is relative to groups rather than individuals, it is still relative. Morality is objective when the truth values of moral statements don't vary with individuals or groups, not when it varies with empirically discoverable facts.
0Jack10y
Subjectivism does not require that morality varies with individuals. No, see the link above.
-2TheAncientGeek10y
The link supports what I said. Subjectivism requires that moral claims have truth values which, in principle, depend on the individual making them. It doesn't mean that any two people will necessarily have a different morality, but why would I assert that?
0Jack10y
This is not true of all subjectivisms, as the link makes totally clear. Subjective simply means that something is mind-dependent; it need not be the mind of the person making the claim-- or not only the mind of the person making the claim. For instance, the facts that determine whether or not a moral claim is true could consist in just the moral opinions and attitudes where all humans overlap.
-2TheAncientGeek10y
There are people who use "subjective" to mean "mental", but they shouldn't.
-2Rob Bensinger10y
If you have in mind 'human universals' when you say 'universality', that's easily patched. Morality is like preferring ice cream in general, rather than like preferring vanilla ice cream. Just about every human likes ice cream.

1. The brain is a computer, hence it runs 'abstract computations'. This is true in essentially the same sense that all piles of five objects are instantiating the same abstract 'fiveness'. If it's mysterious in the case of human morality, it's not only equally mysterious in the case of all recurrent physical processes; it's equally mysterious in the case of all recurrent physical anythings.

2. Some philosophers would say that brain computations are both subjective and objective -- metaphysically subjective, because they involve our mental lives, but epistemically objective, because they can be discovered and verified empirically. For physicalists, however, 'metaphysical subjectivity' is not necessarily a joint-carving concept. And it may be possible for a non-sentient AI to calculate our moral algorithm. So there probably isn't any interesting sense in which morality is subjective, except maybe the sense in which everything computed by an agent is 'subjective'.

3. I don't know anymore what you mean by 'universalism'. There are universally compelling arguments for all adolescent or adult humans of sound mind. (And many pre-adolescent humans, and many humans of unsound mind.)
3Tyrrell_McAllister10y
This leaves out the "rigid designator" bit that people are discussing up-thread. Your formulation invites the response, "So, if our CEV were different, then different things would be good?" Eliezer wants the answer to this to be "No." Perhaps we can say that "Eliezer-Good" is roughly synonymous to "Our CEV as it actually is in this, the actual, world as this world is right now." Thus, if our CEV were different, we would be in a different possible world, and so our CEV in that world would not determine what is good. Even in that different, non-actual, possible world, what is good would be determined by what our actual CEV says is good in this, the actual, world.
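A rough sketch of that rigid/non-rigid contrast (the worlds and value sets are hypothetical): the rigid reading always dereferences "good" through the actual world's CEV, even when the thing being evaluated sits in a counterfactual world.

```python
# Hypothetical illustration of rigid vs. non-rigid readings of "good".

def cev(world):
    # Stand-in for "the CEV humans have in this world".
    return world["cev"]

ACTUAL_WORLD = {"cev": {"love", "fairness"}}
COUNTERFACTUAL_WORLD = {"cev": {"cruelty"}}  # a world where our values came out differently

def good_rigid(thing, world_being_evaluated):
    # Rigid designation: always evaluate against the actual world's CEV,
    # regardless of which world we are talking about.
    return thing in cev(ACTUAL_WORLD)

def good_nonrigid(thing, world_being_evaluated):
    # Non-rigid reading ("whatever our CEV happens to be over there"), which
    # invites the "so cruelty would be good?" response.
    return thing in cev(world_being_evaluated)

print(good_rigid("cruelty", COUNTERFACTUAL_WORLD))     # False: not good even in that world
print(good_nonrigid("cruelty", COUNTERFACTUAL_WORLD))  # True: the reading being rejected
```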
1Eugine_Nier10y
Both these statements are also true about physics, yet nobody seems to be confused about it in that case.
1Ishaan10y
What do you mean? Rational agents ought to converge upon what physics is.
-1Eugine_Nier10y
Only because that's considered part of the definition of "rational agent".
2Ishaan10y
Yes? But the recipient of an "argument" is implicitly an agent who at least partially understands epistemology. There is not much point in talking about agents which aren't rational or at least partly-bounded-rational-ish. Completely insensible things are better modeled as objects, not agents, and you can't argue with an object.
0TheAncientGeek10y
And can aliens have love and pleasure, or is Good a purely human concept?
1Ishaan10y
By Eliezer's usage? I'd say aliens might have love and pleasure in the same way that aliens might have legs...they just as easily might not. Think "wolf" vs "snake" - one has legs and feels love while the other does not.
0TheAncientGeek10y
Let's say they have love and pleasure. Then why would we want to define morality in a human-centric way?
0TheAncientGeek10y
That isn't non-relativism. Subjectivism is the claim that the truth of moral statements varies with the person making them. That is compatible with the claim that they are non-arbitrary, since they may be fixed by features of persons that they cannot change, and which can be objectively discovered. It isn't a particularly strong version of subjectivism, though. That isn't non-realism. Non-realism means that there are no arguments or evidence that will compel suitably equipped and motivated agents. The CEV of individual humans, or humanity? You have been ambiguous about an important subject EY is also ambiguous about.
1Ishaan10y
I'm ambiguous about it because I'm describing EY's usage of the word, and he's been ambiguous about it. I typically adapt my usage to the person I'm talking to, but the way that I typically define "good" in my own head is: "the subset of my preferences which do not in any way reference myself as a person"...or in other words, the behavior which I would prefer if I cared about everyone equally (if I were not selfish and didn't prefer my in-group).

Under my usage, different people can have different conceptions of good. "Good" is a function of the agent making the judgement. A pebblesorter might selfishly want to make every pebble pile themselves, but they also might think that increasing the total number of pebble piles in general is "good". Then, according to the Pebblesorters, a "good" pebblesorter would put overall prime-pebble-pile maximization above their own personal prime-pebble-pile productivity. According to the Babyeaters, a "good" baby-eater would eat babies indiscriminately, even if they selfishly might want to spare their own. According to humans, Pebblesorter values are alien and Babyeater values are evil.
0passive_fist10y
I think you're right here. He's saying, in a way, that moral absolutism only makes sense within context. Hence metaethics. It's kinda hard to wrap one's head around but it does make sense.
-1TAG3y
The question of what EY means is entangled with the question of why he thinks it's true. This account of his meaning is pretty incredible as an argument, because it appears to be an argument by definition...in fact, an argument by normative and novel definition...and he hates arguments by definition. Well, even if they are not all bad, his argument-by-definition is not one of the good ones, because it's not based on an accepted or common definition. Inasmuch as it's both a novel theory, and based on a definition, it's based on a novel definition.

I remain confused by Eliezer's metaethics sequence.

Both there and in By Which It May Be Judged, I see Eliezer successfully arguing that (something like) moral realism is possible in a reductionist universe (I agree), but he also seems to want to say that in fact (something like) moral realism actually obtains, and I don't understand what the argument for that is. In particular, one way (the way?) his metaethics might spit out something that looks a lot like moral realism is if there is strong convergence of values upon (human-ish?) agents receiving better information, time enough to work out contradictions in their values, etc. But the "strong convergence of values" thesis hasn't really been argued, so I remain unclear as to why Eliezer finds it plausible.

Basically, I read the metaethics sequence as asserting both things but arguing only for the first.

But I'm not sure about this. Perhaps because I was already familiar with the professional metaethics vocabulary when I read the sequence, I found Eliezer's vocabulary for talking about positions in metaethics confusing.

I meant to explore these issues in a vocabulary I find more clear, in my own metaethics sequence, but I still haven't got around to it. :(

6komponisto10y
(I'm putting this as a reply to your comment because your comment is what made me think of it.)

In my view, Eliezer's "metaethics" sequence, despite its name, argues for his ethical theory, roughly

(1) morality[humans] = CEV[humans]

(N.B.: this is my terminology; Eliezer would write "morality" where I write "morality[humans]"), without ever arguing for his (implied) metaethical theory, which is something like

(2) for all X, morality[X] = CEV[X].

Worse, much of his effort is spent arguing against propositions like

(3) (1) => for all X, morality[X] = CEV[humans] (The Bedrock of Morality: Arbitrary?)

and

(4) (1) => morality[humans] = CEV["humans"] (No License To Be Human),

which, I feel, are beside the point.
4Douglas_Knight10y
Yes; what else would you do in metaethics? Isn't its job to point to ethical theories, while the job of ethics is to assume you have agreed on a theory (an often false assumption)?
0komponisto10y
Ethics is the subject in which you argue about which ethical theory is correct. In meta-ethics, you argue about how you would know if an ethical theory were correct, and/or what it would mean for an ethical theory to be correct, etc. See here for a previous comment of mine on this.
2Douglas_Knight10y
First, is ethics only about decision procedures? The existence of the concept of moral luck suggests not. Sure, you can say lots of people are wrong, but to banish them from the field of ethics is ridiculous. Virtue ethics is another example, less clearly a counterexample, but much more central. The three level hierarchy at your link does nothing to tell what belongs in meta-ethics and what belongs in ethics. I don't think your comment here is consistent with your comment there and I don't think either comment has much to do with the three level hierarchy. Meta-ethics is about issues that are logically prior to ethics. I reject your list. If there are disagreements about the logical priority of issues, then there should be disagreements about what constitutes meta-ethics. You could have a convention that meta-ethics is defined as a certain list of topics by tradition, but that's stupid. In particular, I think consequentialism vs deontology has high logical priority. Maybe you disagree with me, but to say that I am wrong by definition is not helpful. Going back to Eliezer, I think that he does only cover meta-ethical claims and that they do pin down an ethical theory. Maybe other meta-ethical stances would not uniquely do so (contrary to my previous comment), but his do.
-2komponisto10y
It may not surprise you to learn that I am of the school that rejects the concept of moral luck. (In this I think I align with Eliezer.) This is unobjectionable provided that one agrees about what ethics consists of. As far as I am aware, standard philosophical terminology labels utilitarianism (for example) as an ethical theory; yet I have seen people on LW refer to "utilitarian meta-ethics". This is the kind of usage I mean to disapprove of, and I hold Eliezer under suspicion of encouraging it by blurring the distinction in his sequence. I should be clear about the fact that this is a terminological issue; my interest here is mainly in preserving the integrity of the prefix "meta", which I think has suffered excessive abuse both here and elsewhere. For whatever reason, Eliezer's use of the term felt abusive to me. Part of the problem may be that Eliezer seemed to think the concept of rigid designation was the important issue, as opposed to e.g. the orthogonality thesis, and I found this perplexing (and uncharacteristic of him). Discomfort about this may have contributed to my perception that meta-ethics wasn't really the topic of his sequence, so that his calling it that was "off". But this is admittedly distinct from my claim that his thesis is ethical rather than meta-ethical. This is again a terminological point, but I think a sequence should be named after the conclusion rather than the premises. If his meta-ethical stance pins down an ethical theory, he should have called the sequence explaining it his "ethics" sequence; just as if I use my theory of art history to derive my theory of physics, then my sequence explaining it should be my "physics" sequence rather than my "art history" sequence.
0Douglas_Knight10y
You demand that everyone accept your definition of ethics, excluding moral luck from the subject, but you simultaneously demand that meta-ethics be defined by convention. I said both of those points (but not their conjunction) in my previous comment, after explicitly anticipating what you say here and I'm rather annoyed that you ignored it. I guess the lesson is to say as little as possible.
0komponisto10y
Now just hold on a second. You are arguing by uncharitable formulation, implying that there is tension between two claims when, logically, there is none. (Forgive me for not assuming you were doing that, and thereby, according to you, "ignoring" your previous comment.) There is nothing contradictory about holding that (1) ethical theories that include moral luck are wrong; and (2) utilitarianism is an ethical theory and not a meta-ethical theory. (1) is an ethical claim. (2) is the conjunction of a meta-ethical claim ("utilitarianism is an ethical theory") and a meta-meta-ethical claim ("utilitarianism is not a meta-ethical theory"). ( I hereby declare this comment to supersede all of my previous comments on the subject of the distinction between ethics and meta-ethics, insofar as there is any inconsistency; and in the event there is any inconsistency, I pre-emptively cede you dialectical victory except insofar as doing so would contradict anything else I have said in this comment.)
2Douglas_Knight10y
OK, if you've abandoned your claim that "consequentialism is not a meta-ethical attribute" is true by convention, then that's fine. I'll just disagree with it and keep including consequentialism vs deontology in meta-ethics, just as I'll keep including moral luck in ethics.
-1TheAncientGeek10y
"In philosophy, meta-ethics is the branch of ethics that seeks to understand the nature of ethical properties, statements, attitudes, and judgments. Meta-ethics is one of the three branches of ethics generally recognized by philosophers, the others being normative ethics and applied ethics. While normative ethics addresses such questions as "What should one do?", thus endorsing some ethical evaluations and rejecting others, meta-ethics addresses questions such as "What is goodness?" and "How can we tell what is good from what is bad?", seeking to understand the nature of ethical properties and evaluations."
3TheOtherDave10y
I would be surprised if Eliezer believed (1) or (2), as distinct from believing that CEV[X] is the most viably actionable approximation of morality[X] (using your terminology) we've come up with thus far. This reminds me somewhat of the difference between believing that 2013 cryonics technology reliably preserves the information content of a brain on the one hand, and on the other believing that 2013 cryonics technology has a higher chance of preserving the information than burial or cremation. I agree that that he devotes a lot of time to arguing against (3), though I've always understood that as a reaction to the "but a superintelligent system would be smart enough to just figure out how to behave ethically and then do it!" crowd. I'm not really sure what you mean by (4).
5komponisto10y
I didn't intend to distinguish that finely. (4) is intended to mean that if we alter humans to have a different value system tomorrow, we would also be changing what we mean (today) by "morality". It's the negation of the assertion that moral terms are rigid designators, and is what Eliezer is arguing against in No License To Be Human.
2TheOtherDave10y
Ah, gotcha. OK, thanks for clarifying.
6Ishaan10y
I don't think you're "confused" about what was meant. I think you understood exactly what was meant, and have identified a real (and, I believe, acknowledged?) problem with the moral realist definition of Good. The assumption is that "if we knew more, thought faster, were more the people we wished we were, had grown up farther together" then a very large number of humans would converge onto moral agreement. The assumption is that if you take a culture that practiced, say, human torture and sacrifice, into our economy, and give them the resources to live at a level of luxury similar to what we experience today and all of our knowledge, they would grow more intelligent, more globally aware, and their morality would slowly shift to become more like ours even in the absence of outside pressure. Our morality, however, would not shift to become more like theirs. It seems like an empirical question. Alternatively, we could bite the bullet and just say that some humans simply end up with alien values that are not "good".
0TheAncientGeek10y
It's not the assumption that is good or bad, but the quality of argument provided for it.
-2Moss_Piglet10y
Seeing as about 1% of the population are estimated to be psychopaths, not to mention pathological narcissists, megalomaniacs, etc., it seems hard to argue that there isn't a large (if statistically insignificant) portion of the population who are natural ethical egoists rather than altruists. You could try to weasel around it like Mr Yudkowsky does, saying that they are not "neurologically intact," except that there is evidence that psychopathy at least is a stable evolutionary strategy rather than a malfunction of normal systems. I'm usually not one to play the "evil psychopaths" card online, mainly because it's crass and diminishes the meaning of a useful medical term, but it's pretty applicable here. What exactly happens to all the psychopaths and people with psychopathic traits when you start extrapolating human values?
7Ishaan10y
Why even stop at psychopaths? There are perfectly neurotypical people with strong desires for revenge-based justice, purity norms that I strongly dislike, etc. I'm not extremely confident that extrapolation will dissolve these values into deeper-order values, although my perception that intelligence in humans does at least seem to be correlated with values similar to mine is comforting in this respect. Although really, I think this is reaching the point where we have to stop talking in terms of idealized agents with values and start thinking about how these models can be mapped to actual meat brains. Well, under the shaky assumption that we have the ability to extrapolate in the first place, in practice what happens is that whoever controls the extrapolation sets which values are to be extrapolated, and they have a very strong incentive to put in only their own values. By definition, no one wants to implement the CEV of humanity more than they want to implement their own CEV. But I would hope that most of the worlds impacted by the various humans' CEVs would be pretty nice places to live.
0[anonymous]10y
That depends. The more interconnected our lives become, the harder it gets to enhance the life of myself or my loved ones through highly localized improvements. Once you get up to a sufficiently high level (vaccination programs are an obvious example), helping yourself and your loved ones is easiest to accomplish by helping everyone all together, because of the ripple effects down to my loved ones' loved ones thus having an effect on my loved ones, whom I value unto themselves. Favoring individual volition versus a group volition could be a matter of social-graph connectedness and weighting: it could be that for a sufficiently connected individual with sufficiently strong value-weight placed on social ties, that individual will feel better about sacrificing some personal preferences to admit their connections' values rather than simply subjecting their own close social connections to their personal volition.
1Ishaan10y
Then they have an altruistic EV. That's allowed. But as far as your preference goes, your EV >= any other CEV. It has to be that way, tautologically. Extrapolated Volition is defined as what you would choose to do in the counterfactual scenario where you have more intelligence, knowledge, etc. than you do now. If you're totally altruistic, it might be that your EV is the CEV of humanity, but that means that you have no preference between the two, not that you prefer humanity's CEV over your own. Remember, all your preferences, including the moral and altruistic ones, are included in your EV.
2[anonymous]10y
Sorry, I don't think I'm being clear. The notion I'm trying to express is not an entirely altruistic EV, or even a deliberately altruistic EV. Simply, this person has friends and family and such, and thus has a partially social EV; this person is at least altruistic towards close associates when it costs them nothing. My claim, then, is that if we denote by n the number of hops from any one person to any other in the social graph of such agents:

lim_{n->0} Social Component of Personal EV = species-wide CEV

Now, there may be special cases, such as people who don't give a shit about anyone but themselves, but the idea is that as social connectedness grows, benefitting only myself and my loved ones becomes more and more expensive and unwieldy (for instance, income inequality and guard labor already have sizable, well-studied economic costs, and that's before we're talking about potential improvements to the human condition from AI!) compared to just doing things that are good for everyone without regard to people's connection to myself (they're bound to connect through a mutual friend or relative with some low degree, after all) or social status (because again, status enforcement is expensive). So while the total degree to which I care about other people is limited (Social Component of Personal EV <= Personal EV), eventually that component should approximate the CEV of everyone reachable from me in the social graph. The question, then, becomes whether the Social Component of my Personal EV is large enough to overwhelm some of my own personal preferences (I participate in a broader society voluntarily) or whether my personal values overwhelm my consideration of other people's feelings (I conquer the world and crush you beneath my feet).
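One toy numerical rendering of that limit (the weighting scheme, graph shapes, and numbers are entirely invented; it is only meant to make the hand-wave concrete): weight each other person's values by 0.5 per hop of social distance, and compare a sparse chain-shaped graph with a fully connected one.

```python
# Hypothetical toy model: each person's "social component" is a distance-weighted
# average of everyone else's (one-dimensional) values. As the graph gets denser,
# everyone sits roughly one hop from everyone else and the components converge
# toward a single species-wide average.

from collections import deque

def distances(start, adjacency):
    """Hop counts from `start` via breadth-first search."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

def social_component(person, adjacency, values):
    """Weight person p's values by 0.5**hops(person, p), excluding oneself."""
    dist = distances(person, adjacency)
    others = [p for p in adjacency if p != person]
    weights = {p: 0.5 ** dist[p] for p in others}
    return sum(weights[p] * values[p] for p in others) / sum(weights.values())

n = 8
values = list(range(n))  # each person's idiosyncratic "values", as a single number
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
complete = {i: [j for j in range(n) if j != i] for i in range(n)}

print([round(social_component(p, chain, values), 2) for p in range(n)])
# sparse graph: components range from roughly 1.9 to 5.1 -- still heavily person-relative
print([round(social_component(p, complete, values), 2) for p in range(n)])
# dense graph: a much tighter spread around the species-wide mean of 3.5
```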
5Viliam_Bur10y
Seems to me that to a significant degree psychopaths are successful because people around them have problems communicating. Information about what the specific psychopath did to whom is usually not shared. If it were easily accessible to people before interacting with the psychopath, a lot of the psychopath's power would be lost.

Despite being introverted by nature, these days my heuristic for dealing with problematic people is to establish good communication lines among the non-problematic people. Then people often realize that what seemed like their specific problem is in fact almost everyone's problem with the same person, following the same pattern. When a former mystery becomes an obvious algorithm, it is easier to think about a counter-strategy. Sometimes the mentally different person beats you not by using a strategy so complex you wouldn't understand it, but by using a relatively simple strategy that is so weird to you that you just don't notice it in the hypothesis space (and instead you imagine something more complex and powerful). But once you have enough data to understand the strategy, sometimes you can find and exploit its flaws.

A specific example of a powerful yet vulnerable strategy is lying strategically to everyone around you and establishing yourself as the only channel of information between different groups of people. Then you can make group A believe group B are idiots and vice versa, and make both groups see you as their secret ally. Your strategy can be stable for a long time, because when the groups believe each other to be idiots, they naturally avoid communicating with each other; and when they do, they realize the other side has completely wrong information, which they attribute to the other side's stupidity, not your strategic lying. -- Yet, if there is a person on each side who becomes suspicious of the manipulator, and if these two people can trust each other enough to meet and share their info (what each of them heard about
0WalterL10y
The wirehead solution applies to a lot more than psychopaths. Why would you consider it unfriendly?
6ChrisHallquist10y
When you say "agents" here, did you mean to say "psychologically normal humans"? Because the general claim I think Eliezer would reject, based on what he says on No Universally Compelling Arguments. But I do think he would accept the narrower claim about psychologically normal humans, or as he sometimes says "neurologically intact humans." And the argument for that is found in places like The Psychological Unity of Humankind, though I think there's an even better link for it somewhere - I seem to distinctly remember a post where he says something about how you should be very careful about attributing moral disagreements to fundamentally different values. EDIT: Here is the other highly relevant post I was thinking of.

Yeah, I meant to remain ambiguous about how wide Eliezer means to cast the net around agents. Maybe it's psychologically normal humans, maybe it's wider or narrower than that.

I suppose 'The psychological unity of humankind' is sort of an argument that value convergence is likely at least among humans, though it's more like a hand-wave. In response, I'd hand-wave toward Sobel (1999); Prinz (2007); Doring & Steinhoff (2009); Doring & Andersen (2009); Robinson (2009); Sotala (2010); Plunkett (2010); Plakias (2011); Egan (2012), all of which argue for pessimism about value convergence. Smith (1994) is the only philosophical work I know of that argues for optimism about value convergence, but there are probably others I just don't know about.

0Ishaan10y
Some of the sources you are hand-waving towards are (quite rightly) pointing out that rational agents need not converge, but they aren't looking at the empirical question of whether humans, specifically, converge. Only a subset of those sources are actually talking about humans specifically. (^This isn't disagreement. I agree with your main suggestion that humans probably don't converge, although I do think they are at least describable by unimodal distributions.) I'm not sure it's even appropriate to use philosophy to answer this question. The philosophical problem here is "how do we apply idealized constructs like extrapolated preference and terminal values to flesh-and-blood animals?" Things like "should values which are not biologically ingrained count as terminal values?" and similar questions. ...and then, once we've developed constructs to the point that we're ready to talk about the extent to which humans specifically converge, if at all, it becomes an empirical question.
3TheAncientGeek10y
No Universally Compelling Arguments has been put to me as a decisive refutation of Moral Realism, by somebody who thought the LW line was anti-realist. It isn't a decisive refutation because no (non-strawman) realist thinks there are arguments that could compel an irrational person, an insane person, a very unintelligent person, and so on. Moral realists only need to argue that moral truths are independently discoverable by suitably motivated and equipped people, like mathematical truths (etc).
2Eugine_Nier10y
Well, "No Universally Compelling Arguments" also applies to physics, but it is generally believed that all sufficiently intelligent agents would agree on the laws of physics.
0[anonymous]10y
True, but physics is discoverable via the scientific method, and ultimately, in the nastiest possible limit, via war. If we disagree on physics, all we have to do is highlight the disagreement and go to war over it: whichever one of us is closer to right will succeed in killing the other guy (and potentially a hell of a lot of other stuff). Whereas if you try going to war over morality, everyone winds up dead and you've learned nothing, except possibly that almost everyone considers a Hobbesian war-of-all-against-all to be undesirable when it happens to him.
0TheAncientGeek10y
I think what he is talking about there is lack of disagreement in the sense of incommensurability, or orthogonality as it is locally known. Lack of disagreement in the sense of convergence or consensus is a very different thing.
1buybuydandavis10y
Hasn't been argued and seems quite implausible to me. I find moral realism meaningful for each individual (you can evaluate choices according to my values applied with infinite information and infinite resources to think), but I don't find it meaningful when applied to groups of people, all with their own values. EY finesses the point by talking about an abstract algorithm, and not clearly specifying what that algorithm actually implements, whether my values, yours, or some unspecified amalgamation of the values of different people. So the point of moral subjectivism vs. moral universalism is left unspecified, to be filled in by the imagination of the reader. To my ear, sometimes it seems one way, and sometimes the other. My guess was that this was intentional, as clarifying the point wouldn't take much effort. The discussions of EY's metaethics always strike me as peculiar, as he's wandering about here somewhere while people discuss how they're unclear just what conclusion he had drawn.
0TheAncientGeek10y
I can see how that could be implemented. However, I don't see how that would count as morality. It amounts to Anything Goes, or Do What Thou Wilt. I don't see how a world in which that kind of "moral realism" holds would differ from one where moral subjectivism holds, or nihilism for that matter. Where meaningful means implementable? Moral realism is not many things, and one of the things it is not is the claim that everyone gets to keep all their values and behaviour unaltered.
1buybuydandavis10y
See my previous comment on "Real Magic": http://lesswrong.com/lw/tv/excluding_the_supernatural/79ng If you choose not to count the actual moralities that people have as morality, that's up to you.
0[anonymous]10y
Not "anything goes, do what you will", so much as "all X go, X is such that we want X before we do it, we value doing X while we are doing it, and we retrospectively approve of X after doing it". We humans have future-focused, hypothetical-focused, present-focused, and past-focused motivations that don't always agree. CEV (and, to a great extent, moral rationality as a broader field) is about finding moral reasoning strategies and taking actions such that all those motivational systems will agree that we Did a Good Job. That said, being able to demonstrate that the set of Coherently Extrapolated Volitions exists is not a construction showing how to find members of that set.
1TheAncientGeek10y
As with a number of previous responses, that is ambiguous between the individual and the collective. If I could get some utility by killing you, then should I kill you? If the "we" above is interpreted individually, I should: if it is interpreted collectively, I shouldn't.
2[anonymous]10y
Yes, that is generally considered the core open problem of ethics, once you get past things like "how do we define value" and blah blah blah like that. How do I weigh one person's utility against another person's? Unless it's been solved and nobody told me, that's a Big Question.
-1TheAncientGeek10y
So...what's the point of CEV, then?
2[anonymous]10y
It's a hell of a lot better than nothing, and it's entirely possible to solve those individual-weighting problems, possibly by looking at the social graph and at how humans affect each other. There ought to be some treatment of the issue that yields a reasonable collective outcome without totally suppressing or overriding individual volitions. Certainly, the first thing that comes to mind is that some human interactions are positive sum, some negative sum, some zero-sum. If you configure collective volition to always prefer mutually positive-sum outcomes over zero-sum over negative, then it's possible to start looking for (or creating) situations where sinister choices don't have to be made.
-4TheAncientGeek10y
Who said the alternative is nothing? There's any number of theories of morality, and a further number of theories of friendly AI.
-3Carinthium10y
Requesting lukeprog get round to this. Lesswrong Metaethics, given that it rejects a large amount of rubbish (coherentism being the main part), is the best in the field today and needs further advancing. Requesting people upvote this post if they agree with me that getting round to metaethics is the best thing Lukeprog could be doing with his time, and downvote if they disagree.
0Luke_A_Somers10y
Getting round to metaethics should rank on Lukeprog's priorities: [pollid:573]
2shminux10y
I would love to see Luke (the other Luke, but maybe you, too) and hopefully others (like Yvain) explicate their views on meta-ethics, given how Eliezer's Sequence is at best unclear (though quite illuminating). It seems essential because a clear meta-ethics seems necessary to achieve MIRI's stated purpose: averting AGI x-risk by developing FAI.
0Carinthium10y
Creating a "balance Karma" post. Asking people use this for their conventional Karma for my above post, or to balance out upvotes/downvotes. This way my Karma will remain fair.

rigid designators

aside from a lot of arguing about definitions over whether Eliezer counts as a relativist.

I think these are in fact the whole story. Eliezer says loudly that he is a moral realist and not any sort of relativist, but his views amount to saying "Define good and bad and so forth in terms of what human beings, in fact, value; then, as a matter of objective fact, death and misery are bad and happiness and fun are good", which to many people sounds exactly like moral relativism plus terminological games; confusion ensues.

Rigid designators

The reason Eliezer's views are commonly mistaken for relativism in the manner you describe is that most people do not have a good grasp on the difference between sense and reference (a difference that, to be fair, doesn't seem to be well explained anywhere). To elaborate:

"Define good and bad and so forth in terms of what human beings, in fact, value" sounds like saying that goodness depends on human values. This is the definition you get if you say "let 'good' mean 'human values'". But the actual idea is meant to be more analogous to this: assuming for the sake of argument that humans value cake, define "good" to mean cake. Obviously, under that definition, "cake is always good regardless of what humans value" is true. In that case "good" is a rigid designator for cake.

The difference is that "good" and "human values" are not synonymous. But they refer to the same thing, when you fully dereference them, namely {happiness, fun and so forth}. This is the difference between sense and reference, and it's why it is necessary to understand rigid designators.
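A rough programming analogy for that sense/reference point (the names and the particular value set are invented for illustration): "good" is bound to the referent itself, not to the description "whatever humans happen to value", so the two expressions co-refer now without being synonymous.

```python
# Hypothetical illustration only; the value set is a stand-in for
# {happiness, fun and so forth}.

HUMAN_VALUES_REFERENT = frozenset({"happiness", "fun", "love"})

GOOD = HUMAN_VALUES_REFERENT          # "good" names this very set (rigid binding)...

def what_humans_value(humans):
    return humans["values"]           # ...while this description re-evaluates per input

humans_today = {"values": HUMAN_VALUES_REFERENT}
humans_rewired = {"values": frozenset({"paperclips"})}

print(GOOD == what_humans_value(humans_today))    # True: same referent, different senses
print(GOOD == what_humans_value(humans_rewired))  # False: "good" didn't follow the rewiring
print("happiness" in GOOD)                        # True regardless of any rewiring
```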

6Jack10y
This is an excellent description of the argument. Here is my question: Why bother with the middleman? No one can actually define good, and everyone is constantly checking with 'human values' to see what it says! Assuming the universe runs on math and humans share attitudes about some things, obviously there is some platonic entity which precisely describes human values (assuming there isn't too much contradiction) and can be called "good". But it doesn't seem especially parsimonious to reify that concept. Why add it to our ontology? It's just semantics in a sense: but there is a reason we don't multiply entities unnecessarily.
1nshepperd10y
Well, if you valued cake you'd want a way to talk about cake and efficiently distinguish cakes from non-cakes -- and especially with regard to planning, to distinguish plans that lead to cake from plans that do not. When you talk about cake there isn't really any reification of "the platonic form of cake" going on; "cake" is just a convenient word for a certain kind of confection. The motivation for humans having a word for goodness is the same.
2Jack10y
I don't necessarily have a problem with using the word "good" so long as everyone understands it isn't something out there in the world that we've discovered-- that it's a creation of our minds, words and behavior-- like cake. This is a problem because most of the world doesn't think that. A lot of times it doesn't seem like Less Wrong thinks that (but I'm beginning to think that is just non-standard terminology).
2TheOtherDave10y
Yeah, a lot of the Metaethics Sequence seems to be trying to get to this point. For my part, it seems easier to just stop using words like "good" if we believe they are likely to be misunderstood, rather than devoting a lot of energy to convincing everyone that they should mean something different by the word (or that the word really means something different from what they think it means, or whatever). I'm content to say that we value what we currently value, because we currently value it, and asking whether that's good or not is asking an empty question. Of course, I do understand the rhetorical value of getting to claim that our AI does good, rather than "merely" claiming that it implements what we currently value.
-2TheAncientGeek10y
I am content to say the question is not empty, and if your assumptions lead you to suppose it is, then your assumptions need to be questioned.
0TheOtherDave10y
You seem to believe that I have arrived at my current position primarily via unquestioned assumptions. What makes you conclude that?
2gjm10y
Yes, sorry, I wasn't clear enough about that. No, let me go further; what I wrote was downright misleading. This is why I shouldn't write Less Wrong comments on a tablet where I am too strongly incentivized to make them brief :-). I endorse your description of Eliezer's position.
0TheAncientGeek10y
Why is cake a referent of good? And what happened to the normativity of Good? Why does it appear to make sense to wonder if we are valuing the right things, when Good is just whatever we value? ADDED: I don't see how the S/R difference is relevant to relativism. If the referents of "good" vary with the mental contents of the person saying "good", that is relativism/subjectivism. (That the values referenced are ultimately physical does not affect that: relativism is an epistemological claim, not a metaphysical one).
1nshepperd10y
Why do we have words that mean things at all? For a start, the fact that some things seem to make sense is not an oracular window onto philosophical truth. Anything that we are unsure about will seem as if it could go either way, even if one of the options is in fact logically necessary or empirically true. That's the point of being unsure (example: the Riemann hypothesis). At the object level, no-one knows in full detail exactly what they mean by "good", or the detailed contents of their own values. So trying to test "my values are good" by direct comparison, so to speak, is a highly nontrivial (read: impossible) exercise. Figuring out based on things like "wanting to do the right thing" that "good" and "human values" refer to the same thing while not being synonymous is another nontrivial exercise. To me, the fact that you don't understand is evidence the difference matters. Unless you're saying that "relativism" is just the statement that people on different planets speak different languages, in which case, "no shit" as the French say.
0TheAncientGeek10y
I was wondering how one knows what the referents of good are when one doesn't know the sense. I didn't claim that anything was an oracular window. But note that things you believe in, such as an external world, can just as glibly be dismissed as illusion.
1TheOtherDave10y
We are in the habit of (and reinforced for) asking certain questions about actual real-world things. "Is the food I'm eating good food?" "Is the wood I'm building my house out of good wood?" "Is the exercise program I'm starting a good exercise program?" Etc. In each case, we have some notion of what we mean by the question that grounds out in some notion of our values... that is, in what we want food, housing materials, and exercise programs to achieve. We continue to apply that habitual formula even in cases where we're not very clear what those values are, what we want those things to achieve. "Is democracy a good political system?" is a compelling-sounding question even for people who lack a clear understanding of what their political values are; "Is Christianity a good religion?" feels like a meaningful question to many people who don't have a clear notion of what they want a religion to achieve. That we continue to apply the same formula to get the question "Are the values I'm using good values?" should not surprise us; I would expect us to ask it and for it to feel meaningful whether it actually makes sense or not.
-2TheAncientGeek10y
You can argue that the things your theory can't explain are non-issues. I don't have to buy that.
1TheOtherDave10y
You certainly don't have to buy it, that's true. But when you ask a question and someone provides an answer you don't like, showing why that answer is wrong can sometimes be more effective than simply asserting that you don't buy it.
-2TheAncientGeek10y
The problem is a kind of quodlibet. Any inadequate theory can be made to work if one is allowed to dismiss whatever the theory can't explain.
0TheOtherDave10y
Sure, I agree. And any theory can be made to fail if I am allowed to demand that it explain things that don't actually exist. So it seems to matter whether the thing I'm dismissing exists or not. Regardless, all of this is a tangent from my point. You asked "Why does it appear to make sense to wonder if we are valuing the right things?" as a rhetorical question, as a way of arguing that it appears to make sense because it does make sense, because the question of whether our values are right is non-empty. My point is that this is not actually why it appears to make sense; it would appear to make sense even if the question of whether our values are right were empty. That is not proof that the question is empty, of course. All it demonstrates is that one of your arguments in defense of its non-emptiness is flawed. You will probably do better to accept that and marshal your remaining argument-soldiers to a victorious campaign on other fronts.
0TheAncientGeek10y
Non-emptiness is no more flawed than emptiness. The Open Question remains open.
0TheOtherDave10y
This is a non sequitur. My claim was about a specific argument.
[anonymous]10y100

There was one aspect of that which made intuitive sense to me, but which now that I think about it may not have been adequately explained, ever. Eliezer's position seems to be that from some universal reference frame human beings would be viewed as moral relativists. However, it is a serious mistake to think that such universal frames exist! So we shouldn't even try to think from a universal frame. From within the confines of a single, specific reference frame, the experience of morality is that of a realist.

EDIT: Put differently, I think Eliezer might agree that there is a metaphorical stone tablet with the rules of morality spelled out - it's encoded in the information patterns of the 3 lbs of grey matter inside your skull. Maybe Eliezer would say that he is a "subjective realist" or something like that. This is strictly different from moral relativism, where choice of morality is more or less arbitrary. As a subjective realist your morality is different from your pebblesorter friend's, but it's not arbitrary. You have only limited control over the morality that evolution and culture gifted you.

8Jack10y
Philosophers just call this position "moral subjectivism". Moral realism is usually defined to exclude it. "Relativism" at this point should be tabooed since no one uses it in the technical sense and the popular sense includes a half dozen variations which are very different from one another to the extent they have been defined at all.
-2Douglas_Knight10y
Yes, he loudly says he's not a relativist, but he doesn't loudly talk about realism. If you ask him whether he's a moral realist, he'll say yes, but if you ask him for a self-description, he'll say cognitivist ~~which is often grouped against realism~~. Moreover, if asked for detail, he'll say that he's an anti-realist. (though not all cognitivists are anti-realists)

----------------------------------------

Let me try that again: Eliezer loudly claims to be cognitivist. He quietly equivocates on realism. He also loudly claims not to be relativist, but practically everyone does.
5gjm10y
Is it? That seems backwards to me: non-cognitivism is one of the main varieties of non-realism. (The other being error theory.) What am I missing?
-4Douglas_Knight10y
You're right, not all cognitivists are anti-realists. But some are, including Eliezer. Indeed, realists are generally considered cognitivist. But my impression is that if a moral system is labeled cognitivist, the implication is that it is anti-realist. That's because realism is usually the top level of classifying moral systems, so if you're bothering to talk about cognitivism, it's because the system is anti-realist.
3Jack10y
This is correct I think, but confusing. All realists are by definition cognitivists. "Non-cognitivist" is simply one variety of anti-realist: someone who thinks moral statements aren't the kinds of things that can have truth conditions at all -- for example, someone who thinks they merely reflect the speaker's emotional feelings about the matter (like loudly booing).

Of the anti-realists there are two kinds of cognitivists:

- Moral error theorists, who think that moral statements are about mind-independent facts but that there are no such facts.
- Moral subjectivists, who think that moral statements are about mind-dependent facts.

If what you say is true, Eliezer is one of those (more or less).
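One way to render that standard taxonomy as a rough decision procedure (a simplified sketch of the textbook definitions, not a complete map; note that the mind-independence clause is exactly the piece disputed further down this thread):

```python
# Simplified sketch of the standard classification; whether mind-dependence
# disqualifies a view from realism is itself contested in the literature.

def classify(has_truth_values, some_are_true, mind_independent):
    if not has_truth_values:
        return "non-cognitivist (and therefore anti-realist)"
    if not some_are_true:
        return "cognitivist anti-realist: error theory"
    if not mind_independent:
        return "cognitivist anti-realist: subjectivism"
    return "moral realist (cognitivism is built into the definition)"

print(classify(False, False, False))  # expressivism / "boo-hurrah" views
print(classify(True, False, True))    # error theory
print(classify(True, True, False))    # subjectivism
print(classify(True, True, True))     # realism
```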
0Douglas_Knight10y
Yes, people who say that realists are cognitivists say that this is true by definition, but I don't think these terms are used consistently enough that it is a good idea to argue by definition. In particular, I think Eliezer is right to equivocate on whether he is a realist. He certainly rejects the description of his morality as "mind-dependent."
0Jack10y
I'm not trying to argue by definition: I'm just telling you what the terms mean as they are used in the metaethical literature (where they're used plenty consistently). If someone wants to say they are a moral realist but not a cognitivist then I have no idea what they are, because they're not using standard terminology. If someone doesn't fit into the boxes created by the traditional terminology then come up with different labels. But it's an incredibly confusing and bad idea to use an unorthodox definition to classify yourself as something you're not. Your representation makes me more confused about Eliezer's views. Why position him with this language if you aren't taking definitions from an encyclopedia? According to the standard groupings, being an anti-realist, cognitivist, and objectivist would group someone with the error theorists. If Eliezer doesn't fit there then we can come up with a word to describe his position once it is precisely distinguished from the other positions.
2Douglas_Knight10y
Here's an example of inconsistency in philosophical use. I keep saying that Eliezer equivocates about whether he is a realist, and that I think he's right to do so. Elsewhere in the comments on this post you say that moral subjectivism is not realism by definition. But it's not clear to me from the Stanford Encyclopedia entry on moral realism that this is so. The entry on anti-realism says that Sayre-McCord explicitly puts moral subjectivism in moral realism. Since he wrote the article on realism, that explains why it seems to accept that possibility, but it certainly demonstrates that this uncertainty is more mainstream than you allow.
0Jack10y
Uncertainty, even disagreement, about how to classify views is fine. It's not the same as inconsistency. Sayre-McCord's position on subjectivism is non-standard and treated as such. But I can still figure out what he thinks just from a single paragraph summarizing his position. He takes the standard definitions as a starting point and then makes an argument for his structure of theories. This is the sort of thing I'm asking you to do if you aren't going to use the standard terminology. You seem to be concerned with bashing philosophy instead of explaining your usage. I'm not the field's standard bearer. I just want to know what you mean by the words you're using! Stop equivocating about realism and just state the ways in which the position is realist and the ways in which it is anti-realist. Or how it is realist but you don't think realism should mean what people think it means.
1Douglas_Knight10y
I never used "realism," so there's no point in my defining it. Look back at this thread! My whole point was that Eliezer avoids the word. He thinks that cognitivism is a useful concept, so he uses it. Similarly, he avoids "moral subjectivism" and uses terms like "subjectively objective." He equivocates when asked for a label, endorsing both "realist" and "cognitivist anti-realist." But he does spell out the details, in tens of thousands of words across this sequence. Yes, if people want to pin down Eliezer's views they should say what parts are realist and what parts are anti-realist. When I object to people calling him realist or anti-realist, I'm certainly agreeing with that! ---------------------------------------- After that comment about "bashing philosophy," I don't think there's any point in responding to your first paragraph.
-2TheAncientGeek10y
I am one of a number of people who cannot detect a single coherent theory in his writings. A summary in the standard jargon would be helpful in persuading me that there is one.
-2Jack10y
... These quotes did not exactly express to me that you don't know to what extent his views are realist or anti-realist. I'm sorry if I was targeting you instead of Eliezer... but you were agreeing with his confusing equivocation. Ah yes, the old eschewing the well-recognized, well-explored terminology for an oxymoronic neologism. How could anyone get confused?
3Douglas_Knight10y
You sure you're not trying to force me to use jargon I don't like? I don't know what else to call responding to new jargon with sarcasm. At the very least, you seem to be demanding that we confuse laymen so that philosophers can understand. I happen to believe that philosophers won't understand, either. No, the right answer isn't to say "I don't know if he is a realist." Actually, I do think it would be better to reject the question of realism than to equivocate, but I suspect Eliezer has tried this and found that people don't accept it.
0Jack10y
As far as I can tell, no one understands. But I don't see how my suggestion, which involves reading maybe 2 encyclopedia articles to pick up jargon, would confuse laymen especially. Right, it's just that you explicitly called him an anti-realist. And he apparently calls himself both? You can see how I could get confused. Do people accept equivocation? I'd be fine with rejecting the question of realism so long as it was accompanied by an explanation of how it was a wrong question. Just expressing my opinion re: design principles in the construction of jargon. I know I've been snippy with you, apologies, I haven't had enough sleep.
-2TheAncientGeek10y
The problem with the standard jargon is that "realism" is used to label a metaphysical and an epistemological claim. I like to call the epistemological claim, that there is a single set of moral truths, moral objectivism, which clearly is the opposite of moral subjectivism.
1Douglas_Knight10y
I simply don't believe you that philosophers use these words consistently. Philosophers have an extremely bad track record of asserting that they use words consistently.
3Jack10y
So, I think that is simply false regarding the analytic tradition, especially if we're comparing them to Less Wrong's use of specialized jargon (which is often hilariously ill-defined). I'd love to see some evidence for your claim. But that isn't the point. There are standard introductory reference texts which structure theories of ethical semantics. They contain definitions. They don't contradict each other. And all of them will tell you what I'm telling you. Let's look, here's wikipedia. Here is the SEP on Moral Realism. Here is the SEP on Moral Anti-Realism. Here is the entry on Moral Cognitivism. All three are written by different philosophers and all use nearly identical definitions which define the moral realist as necessarily being a cognitivist. The Internet Encyclopedia of Philosophy says the same thing. We're not talking about something that is ambiguous or borderline. Cognitivism is the first necessary feature of moral realism in the standard usage. If you are using the term "moral realist", but don't think cognitivism is part of the definition, then no one can figure out what you're saying! Same goes for describing someone as an anti-realist who believes in cognitivism, that moral statements can be true and that they are mind-independent. All the terms after "anti-realist" in that sentence make up the entire definition of moral realism. I'm not trying to be pedantic or force you to use jargon you don't like. But if you're going to use it, why not use the terms as they are used in easily available encyclopedia articles written by prominent philosophers? Or at least redefine the terms somewhere.
0Carinthium10y
Clarification- do you mean inconsistencies within or between philosophers? Between philosophers I agree with you- within a single philosopher's work I'd be curious to see examples.
2Douglas_Knight10y
I just mean that philosophers have a bad track record asserting that they are using the same definition as each other. That's rather worse than just not using the same definition. I told Jack that he wasn't using the same definition as the Stanford Encyclopedia. I didn't expect him to believe me, but he didn't even notice. Does that count for your purpose, since he chose the source? But, yes, I do condemn argument by definition because I don't trust the individuals to have definitions.
-2TheAncientGeek10y
Presumably a Platonist who thinks the Form of the Good is revealed by a mystical insight.
0Jack10y
A Platonist who thinks the Form of the Good is revealed by mystical insight is a cognitivist and I don't know why you would think otherwise. Wikipedia:) "Cognitivism is the meta-ethical view that ethical sentences express propositions and can therefore be true or false". Or you're not using standard terminology, in which case, see above.

I think my confusion is less about understanding the view (assuming Richard's rigid designator interpretation is accurate) and more everyone's insistence on calling it a moral realist view. It feels like everyone is playing word games to avoid being moral subjectivists. I don't know if it was all the arguing with theists or being annoyed with moral relativist social-justice types, but somewhere along the way much of the Less Wrong crowd developed strong negative associations with the words used to describe varieties of moral anti-realism.

As far as I can tell most everyone here has the same descriptive picture of what is going on with ethics. There is this animal on planet Earth that has semi-ordered preferences about how the world should be and how things similar to that animal should act. Those of this species which speak the language called "English" write inscriptions like "morality" and "right and wrong" to describe these preferences. These preferences are the result of evolved instincts and cultural norms. Many members of this species have very similar preferences.

This seems like a straightforward description of ethical subjectivism -- the p... (read more)

2Carinthium10y
A few nitpicks of your descriptive picture. 1- There are inevitable conflicts between practically any two creatures on this planet as to what preferences they would have as to the world. If you narrow these down to the area classified by humans as "moral" the picture can be greatly simplified, but there will still be a large amount of difference. 2- I dispute that moral sentences ARE about the attitudes of people. Most people throughout history have had a concept of "Right" and "Wrong" as being objective. This naive conception is philosophically indefensible, but the best descriptor of what people throughout history, and even nowadays, have believed. It is hard to defend the idea that a person thinks they are referring to X and are in fact referring to Y when X and Y are drastically different things and the person is not thinking of Y on any level of their brain- the likely case for, say, a typical Stone Age man arguing a moral point.
4Jack10y
Sure, as I said at the end, the "universality" of the whole thing is an open problem. That's fine. But in that case, all moral sentences are false (or nonsense, depending on how you feel about references to non-entities). I agree that there is a sense in which that is true which you outlined here. In this case we can start from scratch and just make the entire enterprise about figuring out what we really truly want to do with the world-- and then do that. Personally I find that interpretation of moral language a bit uncharitable. And it turns out people are pretty stuck on the whole morality idea and don't like it when you tell them their moral beliefs are false. Subjectivism seems both more charitable and friendlier-- but ultimately these are two different ways of saying the same thing. The debates between varieties of anti-realism seem entirely semantic to me.
0Carinthium10y
1- Alright. Misunderstood. 2- There are some rare exceptions- some people define morality differently and can thus be said to mean different things. Almost all moral sentences are false/nonsense, however, if every claim throughout history that something is right or wrong counts as a moral sentence. The principle of charity, however, does not apply here- the evidence clearly shows that human beings throughout history have truly believed that some things are morally wrong and some morally right on a level more than preferences, even if this is not in fact true.
0Jack10y
Philosophy typically involves taking folk notions that are important but untrue in a strict sense and constructing something tenable out of that material. And I think the situation is more ambiguous than you make it sound. But it is essentially irrelevant. I mean, you could just go back to bed after concluding all moral statements are false. But that seems like it is ignoring everything that made us interested in this question in the first place. Regardless of what people think they are referring to when they make moral statements it seems pretty clear what they're actually doing. And the latter is accurately described by something like subjectivism or quasi-realism. People might be wrong about moral claims, but what we want to know is why and what they're doing when they make them.
0Carinthium10y
A typical person would be insulted if you claimed that their moral statements referred only to feelings. Most philosophical definitions work on a principle which isn't quite like how ordinary people see them but would seem close enough to an ordinary person. There are a lot of uses of the concepts of right and wrong, not just people arguing with each other: ethical dilemmas, people wondering whether to do the "right" thing or the "wrong" thing, philosophical schools (think of the Confucians, for example, who don't define 'right' or 'wrong' but talk about it a lot). Your conception only covers one use.
0ChrisHallquist10y
Except that's not Eliezer's view. The mistake you're making here is the equivalent of thinking that, because the meaning of the word "water" is determined by how English speakers use it, therefore sentences about water are sentences about the behavior of English speakers.
5Jack10y
I understand, this is what I'm dealing with in the second to last paragraph. There is a sense in which all concepts both exist subjectively and objectively. There is some mathematical function that describes all the things that ChrisHallquist thinks are funny just like there is a mathematical function that describes the behavior of atoms. We can get into the nitty-gritty about what makes a concept subjective and what makes a concept objective. But I don't see what the case for morality counting as "objective" is unless we're just going to count all concepts as objective.
0Leonhart10y
Can you be clearer about the way you are using "describes" here? I'm not clear if you are thinking about a) a giant lookup table of all the things Chris Hallquist finds funny, or b) a program that is more compact than that list - so compact, indeed, that a cut-down bug-filled beta of it can be implemented inside his skull! - yet can generate the list.
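(A rough sketch of the two readings, purely illustrative and in hypothetical Python; neither the names nor the stand-in rule come from the thread:)

```python
# (a) A giant lookup table: one explicit verdict per joke (tiny fragment shown).
funny_table = {
    "pun about rigid designators": True,
    "knock-knock joke about paperclips": False,
    # ...one entry for every joke he might ever encounter
}

# (b) A compact program that generates the same verdicts from a rule.
def is_funny(joke: str) -> bool:
    # Stand-in rule only; the real in-skull version would be far messier,
    # hence "cut-down bug-filled beta".
    return "rigid designators" in joke
```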
0Jack10y
My point works with either, I think. Which is more charitable to Eliezer's position?
-4buybuydandavis10y
I am a moral subjectivist and a moral realist. The only point of contention I'd have with EY is if he is a non-psycho human moral universalist. I felt that his language was ambiguous on that point, and at times he seemed to be making arguments in that direction. I just couldn't tell. But if we're going to be moral subjectivists, we should realize that "we ought" too easily glosses over the fact that ought_you is not identical to ought_me. And I don't think you get a compelling answer from some smuggled self recursion, but from your best estimate of what people actually are.
4Jack10y
Traditional usage defines those terms to exclude the possibility of being both. The standard definition of a moral realist is someone who believes that moral judgments express mind-independent facts; while the standard definition of a moral subjectivist is someone who believes moral judgments express mind-dependent facts. So I don't know quite what you mean. You mean someone who doesn't believe that there are moral universals among humans? One too many adjectives for me. If I understand this right: you're contrasting trying to come up with some self-justifying method for resolving disagreement (recursively finding consensus on how to find consensus) with... descriptive moral psychology? I'm not sure I follow.
0buybuydandavis10y
My point being that the categories themselves are not used consistently, so that I can be called either one or the other depending on usage. Definitions tend to be theory bound themselves, so that mind dependent and mind independent are not clear cut. If I think that eating cows is fine, but I wouldn't if I knew more and thought longer, which represents my mind - both, neither, the first, the second? For example, if you go to the article in La Wik on Ethical Subjectivism, they talk about "opinions" and not minds. In this case, my opinion would be that eating cows is fine, but it would not be my extrapolated values. Some would call my position realism, and some would call it subjectivism. Me, I don't care what you call it. I recognize that my position could be called either within the bounds of normal usage. Someone who believes that what is moral is universal across humans who are not psychos. I think you're getting the point there.
0TheAncientGeek10y
Does the fact that people have different opinions about non-moral claims mean there are no objective, scientific facts?
0TheOtherDave10y
It doesn't mean that, no. But it does mean that I ought not behave as though objective, scientific facts exist until I have some grounds for doing so, and that "some people think their intuitions reflect objective, scientific facts" doesn't qualify as a ground for doing so. At this point, one could ask "well, OK, what qualifies as a ground for behaving as though objective, scientific facts exist?" and the conversation can progress in a vaguely sensible direction. I would similarly ask (popping your metaphorical stack) "what qualifies as a ground for behaving as though objective moral facts exist?" and refrain from behaving as though they do until some such ground is demonstrated.
0TheAncientGeek10y
I don't think you're in a position to do that unless you can actually solve the problem of grounding scientific objectivity without incurring Munchausen's trilemma. That is essentially an unsolved problem. Analytical philosophy, LW, and various other groups sidestep it by getting together with people who share the same intuitions. But that is not exactly the epistemic high ground.
0TheOtherDave10y
I'm content to ground behaving as though objective, scientific facts exist in the observation that such behavior reliably correlates with (and predicts) my experience of the world improving. I haven't observed anything analogous about behaving as though objective moral facts exist. This, too, is not the epistemic high ground. I'm OK with that. But, sure, if you insist on pulling yourself out of the Munchausen's swamp before you can make any further progress, then you're quite correct that progress is equally impossible on both scientific and ethical fronts.
0TheAncientGeek10y
Indeed you haven't, because they are not analogous. Morality is about guiding action in the world, not passively registering the state of the world. It doesn't tell you what the melting point of aluminum is, it tells you whether what you are about to do is the right thing. And if you think such levitation is unnecessary, then progress is equally possible on both fronts.
0TheOtherDave10y
Science isn't just about passively registering the state of the world, either.
0TheAncientGeek10y
Alice: "Science has a set of norms or guides-to-action called the scientific method. These have truth-values which are objective in the sense of not being a matter of individual whim" Bob: "I don't believe you! What experiments do you perform to measure these truth-values, what equipment do you use?" Charlie: "I don;'t believe you! You sound like you believe in some immaterial ScientificMethod object for these statements to correspond to!". ....welcome to my world.
0TheOtherDave10y
Dave: Behaving as though objective scientific facts exist has made it possible for me to talk to people all over the world, for the people I care about to be warm in the winter, cool in the summer, have potable water to drink and plenty of food to eat, and routinely survive incidents that would have killed us in pre-scientific cultures, and more generally has alleviated an enormous amount of potential suffering and enabled an enormous amount of value-satisfaction. I am therefore content to continue behaving as though objective scientific facts exist. If, hypothetically, it turned out that objective scientific facts didn't exist, but that behaving as though they do nevertheless reliably provided these benefits, I'd continue to endorse behaving as though they do. In that hypothetical scenario you and Alice and Bob and Charlie are free to go on talking about truth-values but I don't see why I should join you. Why should anyone care about truth in that hypothetical scenario? Similarly, if behaving as though objective moral facts exist has some benefit, then I might be convinced to behave as though objective moral facts exist. But if it's just more talking about truth-values divorced from even theoretical benefits... well, you're free to do that if you wish, but I don't see why I should join you.
0Lumifer10y
I can construct a very similar argument for Christianity (or for most any religion, actually). Usefulness of beliefs and verity of beliefs are not orthogonal but are not 100% correlated either.
0TheOtherDave10y
That's surprising, but if you can, please do. If behaving as though the beliefs of Christianity are objective facts reliably and differentially provides benefits on a par with the kinds of scientific beliefs we're discussing here, I am equally willing to endorse behaving as though the beliefs of Christianity are objective facts. Sure, I agree.
0Lumifer10y
The argument wouldn't involve running hot water in your house, but would involve things like social cohesion, shared values, psychological satisfaction, etc. Think about meme evolution and selection criteria. Religion is a very powerful meme that was strongly selected for. It certainly provided benefits for societies and individuals.
-2TheAncientGeek10y
Edith: A lot of good stuff, then?
Fred: Those facts didn't fall off a tree, they were arrived at by following a true... right... effective... call it what you will... set of methods.
Edith: You care about science because it leads to things that are good. Morality does too.
Edith: You don't already? How do you stay out of jail?
Edith: If there are no moral facts, then the good things you like are not really good at all.
0TheOtherDave10y
I'm not sure what you mean to express by that word. A lot of stuff I value, certainly. Yes, that's true. And? Great! Wonderful! I'll happily endorse morality on the grounds of its reliable observable benefits, then, and we can drop all this irrelevant talk about "objective moral facts". Same as everyone else... by following laws when I might be arrested for violating them. I would do all of that even if there were no objective moral facts. Indeed, I've been known to avoid getting arrested under laws that, if they did reflect objective moral facts, would seem to imply mutually exclusive sets of objective moral facts. Perhaps. So what? Why should I care? What difference does it make, in that scenario? For example, I prefer people not suffering to people suffering... that's a value of mine. If it turns out that there really are objective moral facts that are independent of my values, and that people suffering actually is objectively preferable to people not-suffering, and my values are simply objectively wrong... why should I care?
0TheAncientGeek10y
And there is a way for guides-to-action to be objectively right (etc) that has nothing to do with reflecting facts or predicting experience. Thus removing the "morality doesn't help me predict experience" objection. You have presupposed that there are Good Things (benefits) in that comment, and in your previous comment about science. You are already attaching truth values to propositions about what is good or not, I don't have to argue you into that. "Jail is bad" has the truth-value True? Why are you avoiding jail if its badness is not a fact? Because you care about good things, benefits and so on. You are already caring about them, so I don't have to argue you into it. Do you update your other opinions if they turn out to be false?
1TheOtherDave10y
You are treating my statements about what I value as assertions about Good Things. If you consider those equivalent, then great... you are already treating Good as a fact about what we value, and I don't have to argue you into that. If you don't consider them equivalent (which I suspect) then interpreting the former as a statement about the latter is at best confused, and more likely dishonest. I value staying out of jail. Is there anything in your question I haven't agreed to by saying that? If not, great. I will go on talking about what I value, and if you insist on talking about the truth-values of moral claims I will understand you as referring to what you value. If so, what? Because I value staying out of jail. (Which in turn derives from other values of mine.) As above; if this is an honest and coherent response, then great, we agree that "good things" simply refers to what we value. Sure, there are areas in which I endorse doing this. So, you ask, shouldn't I endorse updating false moral beliefs as well? Sure, if I anticipate observable benefits to having true moral beliefs, as I do to having true beliefs in those other areas in which I have opinions. But I don't anticipate such benefits. Another area where I don't anticipate such benefits, and where I am similarly skeptical that the label "true beliefs" refers to anything or is worth talking about, is aesthetics. For example, sure, maybe my preference for blue over red is false, and a true aesthetic belief is that "red is more aesthetic than blue" is true. But... so what? Should I start preferring red over blue on that basis? Why on Earth would I do that? (But Dave, you value having accurate beliefs in other areas! Why not aesthetics?)
0TheAncientGeek10y
I am not sure what that means. Is the "we" individual-by-individual or collective? And where did you get the idea that objective metaethics means giving up on values? How does that differ from "jail is bad-for-me"? If I thought that the truth-values of moral claims refer only to what I value, I wouldn't be making much of a pitch for objectivism, would I? Whatever that means? What explains the difference? But that isn't the function of moral beliefs: their function is to guide action. You have admitted that your behaviour is guided by jail-avoidance. You seem to be interested in the meta-level question of objective aesthetics. Why is that?
0TheOtherDave10y
I think that's a separate discussion, and I don't think spinning it off will be productive. Feel free to replace "we" with "I" if that's clearer. If it's still not clear what I mean, I'm content to let it drop there. I'm not sure what "giving up on values" means. Beats me. Perhaps it doesn't. No, you wouldn't. Yes. Whether concerning myself with the truth-values of the propositions expressed by opinions reliably provides observable and differential benefits. I agree that beliefs guide action (this is not just true of moral beliefs). If the sole function of moral beliefs is to guide action without reference to expected observable benefits, I don't see why I should prefer "true" moral beliefs (whatever that means) to "false" ones (whatever that means). Yes. Which sure sounds like a benefit to me. I don't seem that way to myself, actually. I bring it up as another example of an area where some people assert there are objective truths and falsehoods, but where I see no reason to posit any such thing...positing the existence of individual aesthetic values seems quite adequate to explain my observations.
0TheAncientGeek10y
I think it is a key issue. This is about ethical objectivism. If Good is a fact about what we value collectively, in your view, then your theory is along the lines of utilitarianism, which is near enough to objectivism AFAIC. Yet you seem to disagree with me about something. If you concern yourself with the truth values of your own beliefs about what you believe to be good and bad, and revise your beliefs accordingly and act on them, you will end up doing the right thing. What's more beneficial than doing the right thing? If the things you think are beneficial are in fact not beneficial, then you are not getting benefits; you just mistakenly think you are. To actually get benefits, you have to know what is actually beneficial. Morality is all about what is truly beneficial. Those truths aren't observable: neither are the truths of mathematics. Are you a passive observer who never acts?
1TheOtherDave10y
It is not clear to me what we disagree about, precisely, if anything. I don't know. It is not clear to me what the referent of "the right thing" is when you say it, or indeed if it even has a referent, so it's hard to be sure one way or another. (Yes, I do understand that you meant that as a rhetorical question whose correct answer was "Nothing.") Yes, that's true. No, that's false. But my expectation of actually getting benefits increases sharply if I know what is actually beneficial. I disagree. Supposing this is true, I don't see why it's relevant. No.
0TheAncientGeek10y
Is ethical objectivism true, IYO? Doing things such that it is an objective fact that they are beneficial, and not just a possibly false belief. Explain how you observe the truth-value of a claim about what is beneficial. It is relevant to your attitude that only the observable matters in epistemology. Then explaining your observations is not the only game in town.
0TheOtherDave10y
If you point me at a definition of ethical objectivism you consider adequate, I'll try to answer that question. So, you're asking what's more beneficial than doing things such that it's an objective fact that they are beneficial? Presumably doing other things such that it's an objective fact that they are more beneficial is more beneficial than merely doing things such that it's an objective fact that they are beneficial. When I experience X having consequences I value in situations where I didn't expect it to, I increase my confidence in the claim that X is beneficial. When I experience X failing to have such consequences in situations where I did expect it to, I decrease my confidence in the claim. How do unobservable mathematical truths matter in epistemology? That's true.
0TheAncientGeek10y
"moral claims have subject-independent truth values". And doing things that aren't really beneficial at all isn't really beneficial at all. Explain how you justified the truth of the claim "what Dave values is beneficial" Epistemology is about truth. So you no longer reject metaethics on the basis that it doesn't explain your observations?
0TheOtherDave10y
No. Yes, that's true. Increasing it has consequences I value. No, epistemology is about knowledge. For example, unknowable truths are not within the province of epistemology. If you point me to where in this discussion I rejected metaethics on the basis that it doesn't explain my observations, I will tell you if I still stand by that rejection. As it stands I don't know how to answer this question.
0TheAncientGeek10y
So you have beliefs that you have done beneficial things, but you don't know if you have, because you don't know what is beneficial, because you have never tried to find out, because you have assumed there is no answer to the question? That boils down to "what Dave values, Dave values". "Epistemic Logic: A Survey of the Logic of Knowledge" by Nicholas Rescher has a chapter on unknowable truth. But that is not the point. The point was unobservable truth. You seem to have decided, in line with your previous comments, that what is unobservable is unknowable. But logical and mathematical truths are well-known examples of unobservable (non-empirical) truths.
3TheOtherDave10y
That doesn't seem to follow from what we've said thus far. Absolutely. Which, IIRC, is what I said in the first place that inspired this whole conversation, so it certainly ought not surprise you that I'm saying it now. (shrug) All right. Let's assume for the sake of comity that you're right, that we can come to know moral truths about our existence through a process divorced from observation, just like, on your account, we come to know logical and mathematical truths about our existence through a process divorced from observation. So what are the correct grounds for deciding what is in the set of knowable unobserved objective moral truths? For example, consider the claim "angles between 85 and 95 degrees, other than 90 degrees, are bad." There are no observations (actual or anticipated) that would lead me to that conclusion, so I'm inclined to reject the claim on those grounds. But for the sake of comity I will set that standard aside, as you suggest. So... is that claim a knowable unobserved objective moral truth? A knowable unobserved objective moral falsehood? A moral claim whose unobserved objective truth-value is unknowable? A moral claim without an unobserved objective truth-value? Not a moral claim at all? Something else? How do you approach that question so as to avoid mistaking one of those other things for knowable unobserved objective moral truths?
-2TheAncientGeek10y
Have you a) seen outcomes which are beneficial, and which you know to be beneficial? or b) seen outcomes which you believe to be beneficial? AFAIC, this conversation is about your claim that ethical objectivism is false. That claim cannot be justified by a tautology like "what Dave values, Dave values". Its being a special case of an overarching principle such as "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law", or "increase aggregate utility". How does it even relate to action?
0TheOtherDave10y
I started all of this by saying: As far as I can tell, no such ground has been demonstrated throughout our whole discussion. So I continue to endorse not behaving as though objective moral facts exist. But as far as you're concerned, what we're discussing instead is whether I'm justified in claiming that ethical objectivism is false. (shrug) OK. I retract that claim. If that ends this discussion, I'm OK with that. I have seen outcomes that I'm confident are beneficial. I don't think the relationship of such confidence to knowledge or belief is a question you and I can profitably discuss. This just triggers regress. That is, OK, I'm evaluating moral claim X, for which I have no observed evidence, to see whether it's a knowable unobserved objective moral truth. To determine this, I first evaluate whether I can will that X should become a universal law. OK, fine... what are the correct grounds for deciding whether I can will that X be a universal law? But you additionally suggest that "increase aggregate utility" is the determiner here... which suggests that if X increases the aggregate utility of everything everywhere, I can will that X should become a universal law, and therefore can know that X is an objective moral truth. Yes? Have I understood your view correctly? Well, if angles between 85 and 95 degrees, other than 90 degrees, are bad, then it seems to follow that given a choice of angle between 85 and 95 degrees, I should choose 90 degrees. That sure sounds like a relationship to an action to me. So, to repeat my question, is "angles between 85 and 95 degrees, other than 90 degrees, are bad" a knowable unobserved objective moral truth, or not? By the standard you describe above, I should ask whether choosing 90 degrees rather than other angles between 85 and 95 degrees increases aggregate utility. If it does, then "angles between 85 and 95 degrees, other than 90 degrees, are bad" is an objective moral truth, otherwise it isn't. Yes? So, OK. How do I de
0TheAncientGeek10y
Confidence isn't knowledge. So: b). You have only seen outcomes which you believe to be beneficial. Why not? If considering murder, you ask yourself whether you would want everyone to be able to murder you, willy-nilly. Far from regressing, the answer to that grounds out in one of those kneejerk obviously-not-valuable-to-Dave intuitions you have been appealing to throughout this discussion. Does your murdering someone increase aggregate utility? How does that affect other people? Choices that affect only yourself are aesthetics, not ethics.
0TheOtherDave10y
Tapping out here.
0[anonymous]10y
I'll address your example after you address mine.
0TheOtherDave10y
Actually, on further thought... by "moral claims have subject-independent truth values" do you mean "there exists at least one moral claim with a subject-independent truth value"? Or "All moral claims have subject-independent truth values"? I'm less confident regarding the falsehood of the former than the latter.
0TheAncientGeek10y
The former.
0TheOtherDave10y
Fair enough. So, which moral claims have subject-independent truth values, on your account?
0TheAncientGeek10y
Most of them. But there may be some claims that are self-reflexive, eg "to be the best person I can be, I should get a PhD".

I found it much clearer when I realised he was basically talking about rigid designation. It didn't help when EY started talking about rigid designation and using the terminology incorrectly.

Reference class: I studied academic philosophy.

It didn't help when EY started talking about rigid designation and using the terminology incorrectly.

I didn't notice that, can you elaborate?

5Larks10y
He seems to have thought Rigid Designation was about a magic connection between sound wave patterns and objects, such that the sound waves would always refer to the same object, rather than that those sound waves, when spoken by such a speaker in such a context, would always refer to the same object, regardless of which possible world that object was in. I'm sorry if that explanation was a little unclear; it was aimed at non-philosophers, but I suspect you could explain it better. EDIT: see also prior discussion
2komponisto10y
(In other words, he confused rigid designation with semantic externalism.)

Personally, I remain confused about his claim that morality is objective in some sense in The Bedrock of Morality: Arbitrary?, no matter how many times i reread it.

6passive_fist10y
I think it all boils down to this quote at the end (emphasis mine): I agree with you that this claim is confusing (I am confused about it as well). I don't think, however, that he's trying to justify that it's objective. He's merely stating what it is and deferring the justification to a later time.
5Viliam_Bur10y
Translated:
0nshepperd10y
But that's not what "better" means at all, any more than "sorting pebbles into prime heaps" means "doing whatever pebblesorters care about".
0Viliam_Bur10y
How specifically are these two things different? I can imagine some differences, but I am not sure which one you meant. For example, if you meant that sorting pebbles is what they do, but it's not their terminal value and certainly not their only value (just like humans build houses, but building houses is not our terminal value), in that case you fight the hypothetical. If you meant that in a different universe pebblesorter-equivalents would evolve differently and wouldn't care about sorting pebbles into prime heaps, then the pebblesorter-equivalents wouldn't be pebblesorters. Analogically, there could be some human-equivalents in a parallel universe with inhuman values; but they wouldn't be humans. Or perhaps you meant the difference between extrapolated values and "what now feels like a reasonable heuristic". Or...
1nshepperd10y
What I meant is that "prime heaps" are not about pebblesorters. There are exactly zero pebblesorters in the definitions of "prime", "pebble" and "heap". If I told you to sort pebbles into prime heaps, the first thing you'd do is calculate some prime numbers. If I told you to do whatever pebblesorters care about, the first thing you'd do is find one and interrogate it to find out what they valued.
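(A minimal sketch of that distinction, with hypothetical names; nothing here is from the sequence. Checking "prime heap" bottoms out in arithmetic, while checking "what pebblesorters care about" bottoms out in a query to a pebblesorter:)

```python
def is_prime(n: int) -> bool:
    """No pebblesorter appears anywhere in this definition."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_correct_heap(heap: list) -> bool:
    """'Prime heap' unpacks into arithmetic about the heap itself."""
    return is_prime(len(heap))

def is_approved_heap(heap: list, pebblesorter) -> bool:
    """'Whatever pebblesorters care about' requires consulting one (hypothetical API)."""
    return pebblesorter.approves(heap)
```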
1Viliam_Bur10y
If I gave you the source code of a Friendly AI, all you'd have to do would be to run the code. If I told you to do whatever human CEV is, you'd have to find and interrogate some humans. The difference is that by analysing the code of the Friendly AI you could probably learn some facts about humans, while by learning about prime numbers you don't learn about the pebblesorters. But that's a consequence of humans caring about humans, and pebblesorters not caring about pebblesorters. Our values are more complex than prime numbers and include caring about ourselves... which is probably likely to happen to a species created by evolution.
6Vaniver10y
I think he means that if the pebblesorters came along, and studied humanity, they would come up with a narrow cluster which they would label "h-right" instead of their "p-right", and that the cluster h-right is accessible to all scientifically-minded observers. It's objective in the sense that "the number of exterior columns in the design of the Parthenon" is objective, but not in the sense that "15*2+8*2" is objective. The first is 46, but could have been something else in another universe; the second is 46, and can't be something else in another universe. But... it looks like he's implying that "h-right" is special among "right"s in that it can't be something else in another universe, but that looks wrong for simple reasons. It's also not obvious to me that h-right is a narrow cluster.
6[anonymous]10y
It's because you're a human. You can't divorce yourself from being human while thinking about morality.
2Vaniver10y
It's not clear to me that the first of those statements implies the second of those statements. As far as I can tell, I can divorce myself from being human while thinking about morality. Is there some sort of empirical test we can do to determine whether or not that's correct?
1Viliam_Bur10y
Seems to me that if you weren't human, you wouldn't care about morality (and instead care about paperclips or whatever). So even if you try to imagine yourself as some kind of neutral disembodied mind, the fact that this mind is interested in morality (instead of paperclips) shows that it's a human in disguise. Otherwise it would be very difficult to locate morality in the vast set of "things a mind could consider valuable", so there is almost zero probability that the neutral disembodied mind would spend even a few seconds thinking about it.
1Vaniver10y
If you take "morality" to be "my peculiar preference for the letter v," but it seems to me that a more natural meaning of "morality" is "things other people should do." Any agent which interacts with other agents has both a vested stake in how windfalls are distributed and in the process used to determine how windfalls are distributed, and so I'd like to talk about "fair" in a way that paperclippers, pebblesorters, and humans find interesting. That is, how is it difficult to think about "my particular value system," "value systems in general," "my particular protocols for interaction," and "protocols for interaction in general" as different things? Why, when Eliezer is so quick to taboo words and get to the heart of things in other areas, does he not do so here? But when modelling a paperclipper, the neutral disembodied mind isn't interested in human morality, and is interested in paperclips, and thinks of desire for paperclips as the universal impulse. That is to say, I think I have more control over my interests than this thought experiment is presuming.
1nshepperd10y
You've passed the recursive buck here.
2Vaniver10y
Sort of? I'm not trying to explain morality, but label it, and I think that the word "should" makes a decent label for the cluster of things which make up the "morality" I was trying to point to. The other version I came up with was like thirty words long, and I figured that 'should' was a better choice than that.
0TheAncientGeek10y
I dare say that a disembodied, solipsistic mind wouldn't need to think much about morality. But an embodied mind, in a society, competing for resources with other agents, interacting with them in painful and pleasant ways would need something morality-like, some way of regulating interactions and assigning resources. "Social" isn't some tiny speck in mindspace, it's a large chunk.
0Carinthium10y
It's true that he can't divorce himself from being human in a sense, but a few nitpicks. 1- In theory (although probably not in practice), Vaniver could imagine himself as another sort of hypothetically or actually possible moral being. Apes have morality, for example. You could counter with Eliezer's definition of morality here, but his case for moral convergence is fairly poor. 2- Even a completely amoral being can "think about morality" in the sense of attempting to predict human actions and taking moral codes into account. 3- I know this is very pedantic, but I would contend there are possible universes in which the phrase "You can't divorce yourself from being human while thinking about morality" does not apply. An Aristotelean universe in which creatures have purposes and inherently gain satisfaction from fulfilling their purpose would use an Aristotelean metaethics of purpose-fulfilment, and a Christian universe a metaethics of the Will of God- both would apply.
0[anonymous]10y
No, there's not, which is rather the point. It's like asking "what would it be like to move faster than the speed of light?" The very question is silly, and the results of taking it seriously aren't going to be any less silly.
3Vaniver10y
I still don't think I'm understanding you. I can imagine a wide variety of ways in which it could be possible to move more quickly than c, and a number of empirical results of the universe being those ways, and tests have shown that this universe does not behave in any of those ways. (If you're trying to demonstrate a principle by example, I would prefer you discuss the principle explicitly.)
3Error10y
Datapoint: I didn't find Metaethics all that confusing, although I am not sure I agree with it. I had this impression too, and have more or less the same sort-of-objection to it. I say "sort of" because I don't find "h-right as a narrow cluster" obvious, but I don't find it obviously wrong either. It feels like it should be a testable question but I'm not sure how one would go about testing it, given how crap humans are at self-reporting their values and beliefs. On edit: Even if h-right isn't a narrow cluster, I don't think it would make the argument inconsistent; it could still work if different parts of humanity have genuinely different values modeled as, say, h1-right , h2-right, etc. At that point I'm not sure the theory would be all that useful, though.
0Vaniver10y
I think part of the issue is that "narrow" might not have an obvious reference point. But it seems to me that there is a natural one: a single decision-making agent. That is, one might say "it's narrow because the moral sense of all humans that have ever lived occupies a dot of measure 0 in the total space of all possible moral senses," but that seems far less relevant to me than the question of if the intersection of those moral senses is large enough to create a meaningful agent. (Most likely there's a more interesting aggregation procedure than intersection.) I do think that it makes the part of it that wants to drop the "h" prefix, and just talk about "right", useless. As well, my (limited!) understanding of Eliezer's broader position is that there is a particular cluster, which I'll call h0-right, which is an attractor- the "if we knew more, thought faster, were more the people we wished we were, had grown up farther together" cluster- such that we can see h2-right leads to h1-right leads to h0-right, and h-2-right leads to h-1-right leads to h0-right, and h2i-right leads to hi-right leads to h0-right, and so on. If such a cluster does exist, then it makes sense to identify it as a special cluster. Again, it's non-obvious to me that such a cluster exists, and I haven't read enough of the CEV paper / other work to see how this is reconciled with the orthogonality thesis, and it appears that word doesn't appear in the 2004 writeup.

aside from a lot of arguing about definitions over whether Eliezer counts as a relativist

I think the whole point was to taboo "realist" and "relativist." So if people come out of the sequence arguing about those definitions, they don't seem to have gotten anything out of the sequence. So, yes, aside from everything, there's no other problem. But that doesn't help you narrow down the problem. I suspect this is either strong agreement or strong disagreement with gjm, but I don't know which.

It didn't seem terribly compelling to me, but whether that was a failure of understanding or not I can't really say.

For my own part, I'm perfectly content to say that we care about what we (currently) care about because we care about it, so all of this "moral miracle" stuff about how what we (currently) care about really is special seems unnecessary. I can sort of understand why it's valuable rhetorically when engaging with people who really want some kind of real true specialness in their values, but I mostly think such people should get ove... (read more)

-2Eugine_Nier10y
It is equally correct to say we believe what we believe; that doesn't make our beliefs true.
2TheOtherDave10y
Yes: valuing something implies that I value it, and believing something doesn't imply that it's true. Agreed. I assume you're trying to imply that there exists some X that bears the same kind of relationship to valuing that truth has to belief, and that I'm making an analogous error by ignoring X and just talking about value as if I ignored truth and just talked about belief. Then again, maybe not. You seem fond of making these sorts of gnomic statements and leaving it to others to unpack your meaning. I'm not really sure why. Anyway, if that is your point and you feel like talking about what you think the X I'm illegitimately ignoring is, or if your point is something else and you feel like actually articulating it, I'm listening.
0Eugine_Nier10y
Well, the common name for this X is something being "moral" or "right" but it appears a lot of people in this thread like to use those words in non-standard ways.
1TheOtherDave10y
If you mean what I think you mean, then I agree... I'm disregarding the commonly-referenced "morality" or "rightness" of acts that somehow exists independent of the values that various value-having systems have. If it turns out that such a thing is important, then I'm importantly mistaken. Do you believe such a thing is important? If so, why?
0TheAncientGeek10y
I think that is a distinct possibility. What's more important? What would serve as a good excuse for doing immoral things, or not knowing right from wrong?
2TheOtherDave10y
The lack of anything depending on whether an act was immoral; the lack of any consequences to not knowing right from wrong.
0TheAncientGeek10y
Firstly, you are assuming something that many would disagree with: that an act with no consequences can be immoral, rather than being automatically morally neutral. Secondly: even if true, that is a special case. The importance of morality flows from its obligatoriness.
0TheOtherDave10y
Sure. You asked a very open-ended question, I made some assumptions about what you meant. If you'd prefer to clarify your own meaning instead, I'd be delighted, but that doesn't appear to be your style.
0TheAncientGeek10y
The intended answer to "what is more important than morality", AKA "what is a good excuse for behaving immorally" was "nothing" (for all that you came up with ... nothing much). The question was intended to show that not only is morality important, it is ultimately so.
0TheOtherDave10y
Thanks for clarifying.

Why didn't people (apparently?) understand the metaethics sequence?

Perhaps back up a little. Does the metaethics sequence make sense? As I remember it, a fair bit of it was a long, rambling and esoteric bunch of special pleading - frequently working from premises that I didn't share.

5ChrisHallquist10y
Long and rambling? Sure. But then so is much else in the sequences, including the quantum mechanics sequence. As for arguing from premises you don't share, what would those premises be? It's a sincere question, and knowing your answer would be helpful for writing my own post(s) on metaethics.
0byrnema10y
I recall not being able to identify with the premises... some of them were really quite significant. I now recall, it was with "The Moral Void", in which apparently I had different answers than expected. "Would you kill babies if it was inherently the right thing to do?" The post did discuss morality on/off switches later in the context of religion, as an argument against (wishing for / wanting to find) universally compelling arguments. The post doesn't work for me because it seems there is an argument against the value of universally compelling arguments with the implicit assumption that since universally compelling arguments don't exist, any universally compelling argument would be false. I happen to (mostly) agree that there aren't universally compelling arguments, but I still wish there were. The metaethics sequence failed to talk me out of valuing this. Also, there were some particular examples that didn't work for me, since I didn't have a spontaneous 'ugh' field around some of the things that were supposed to be bad. I see Jack expressed this concept here: For whatever reason, I feel like my morality changes under counterfactuals.
1ChrisHallquist10y
But you realize that Eliezer is arguing that there aren't universally compelling arguments in any domain, including mathematics or science? So if that doesn't threaten the objectivity of mathematics or science, why should that threaten the objectivity of morality? Can you elaborate?
1byrnema10y
Waah? Of course there are universally compelling arguments in math and science. (Can you elaborate?) It is easy for me to think of scenarios where any particular behavior might be moral. So that if someone asks me, "imagine that it is the inherently right thing to kill babies, " it seems rather immediate to answer that in that case, killing babies would be inherently right. This is also part of the second problem, where there aren't so many things I consider inherently wrong or right ... I don't seem to have the same ugh fields as the intended audience. (One thing which seems inherently right to me is that there would be an objective morality, it just happens to be apparently false in this universe, for now.)
0hairyfigment10y
Of course there aren't. You can trivially imagine programming a computer to print, "2+2=5" and no verbal argument will persuade it to give the correct answer - this is basically Eliezer's example! He also says that, in principle, an argument might persuade all the people we care about. While his point about evolution and 'psychological unity' seems less clear than I remembered, he does explicitly say elsewhere that moral arguments have a point. You should assign a high prior probability to a given human sharing enough of your values to make argument worthwhile (assuming various optimistic points about argumentation in general with this person). As for me, I do think that moral questions which once provoked actual war can be settled for nearly all humans. I think logic and evidence play a major part in this. I also think it wouldn't take much of either to get nearly all humans to endorse, eg, the survival of humanity - if you think that part's unimportant, you may be forgetting Eliezer's goal (and in the abstract, you may be thinking of a narrower range of possible minds). How could it be true, aside from a stronger version of the previous paragraph? I don't know if I understand what you want.
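(A deliberately trivial version of that example, as a sketch; the function name and the sample arguments are made up for illustration. The "mind" never inspects its input, so no possible argument moves it:)

```python
def stubborn_arithmetic(argument: str) -> str:
    # The argument is ignored entirely, so nothing said to this "mind" changes its answer.
    return "2 + 2 = 5"

for attempt in ["Count two apples and then two more apples.",
                "The Peano axioms entail that 2 + 2 = 4.",
                "Please?"]:
    print(stubborn_arithmetic(attempt))  # prints "2 + 2 = 5" every time
```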
1TheAncientGeek10y
You can't persuade rocks either. Don't you think this might be just a wee bit of a strawman of the views of people who believe in universally compelling arguments?
0ChrisHallquist10y
Okay... I need to write a post about that. Are you really imagining a coherent possibility, though? I mean, you could also say, "If someone tells me, 'imagine that p & ~p,' it seems that in that case, p & ~p."
1byrnema10y
I am. It's so easy to do I can't begin to guess what the inferential distance is. Wouldn't it be inherently right to kill babies if they were going to suffer? Wouldn't it be inherently right to kill babies if they had negative moral value to me, such as baby mosquitoes carrying malaria?
3TheOtherDave10y
I think it's fair, principle of charity and all, to assume "babies" means "baby humans" specifically. A lot of things people say about babies becomes at best false, at worst profoundly incoherent, without this assumption. But you're right of course, that there are many scenarios in which killing human babies leads to better solutions than not killing them. Every time I consider pointing this out when this question comes up, I decide that the phrase "inherently right" is trying to do some extra work here that somehow or other excludes these cases, though I can't really figure out how it is supposed to do that work, and it never seems likely that raising the question will get satisfying answers. This seems like it might get back to the "terminal"/"instrumental" gulf, which is where I often part company with LW's thinking about values.
0byrnema10y
Yeah, these were just a couple examples. (I can also imagine feeling about babies the way I feel about mosquitos with malaria. Do I have an exceptionally good imagination? As the imagined feelings become more removed from reality, the examples must get more bizarre, but that is the way with counter-factuals.) But there being ready examples isn't the point. I am asked to consider that I have this value, and I can, there is no inherent contradiction. Perhaps as you suggest, there is no p&-p contradiction because preserving the lives of babies is not a terminal value. And I should replace this example with an actual terminal value. But herein lies a problem. Without objective morality, I'm pretty sure I don't have any terminal values -- everything depends on context. (I'm also not very certain what a terminal value would like if there was an objective morality.)
-2Carinthium10y
Could you clarify a bit? I'd be curious to hear your ethical views myself, particularly your metaethical views. I was convinced of some things by the Metaethics sequence (it convinced me that despite the is-ought distinction ethics could still exist), but I may have made a mistake so I want to know what you think.
0timtyler10y
That's an open-ended question which I don't have many existing public resources to address - but thanks for your interest. Very briefly:

* I like evolution; Yudkowsky seems to dislike it. Ethically, Yudkowsky is an intellectual descendant of Huxley, while I see myself as thinking more along the lines of Kropotkin.
* Yudkowsky seems to like evolutionary psychology. So far evolutionary psychology has only really looked at human universals. To take understanding of the mind further, it is necessary to move to a framework of gene-meme coevolution. Evolutionary psychology is politically correct - through not examining human differences - but is scientifically very limited in what it can say, because of the significance of cultural transmission on human behaviour.
* Yudkowsky likes utilitarianism. I view utilitarianism largely as a pretty unrealistic ethical philosophy adopted by ethical philosophers for signalling reasons.
* Yudkowsky is an ethical philosopher - and seems to be on a mission to persuade people that giving control to a machine that aggregates their preferences will be OK. I don't have a similar axe to grind.

It's been a while since I read (part of) the metaethics sequence. With that said:

I have a pretty strong aversion to the word "right" used in discourse. The word is used to mean a few different things, and people often fail to define their use of it sufficiently for me to understand what they're talking about. I don't remember being able to tell whether Eliezer was attempting to make a genuine argument for moral realism; when he introduced the seemingly sensical term h-right (recognizing that things humans often feel are "right" are simp... (read more)

[anonymous] (+30, 10y)

Let's get some data (vote accordingly):

Did you understand the metaethics sequence, when you read it?

[pollid:572]

How do you know if you understood it? Is there a set of problems to test your understanding?

Vaniver (+5, 10y)
Agreed this is a good idea to prevent illusion of transparency.
somervta (+1, 10y)
I reaaally wish that were true.
ChrisHallquist (+7, 10y)
I approve of having a poll, but isn't there a better way to do polls in the LW software?
Vaniver (+5, 10y)
Yes; if you click the "Show Help" button below the bottom right of the comment box, and then click the Polls Help link, you will find details about how to code polls.
[anonymous] (+4, 10y)
Cool, thanks.
ChrisHallquist (+2, 10y)
Oh wow, this is very different from what I would've expected, based on the way people talk about the metaethics sequence. Guesses as to whether this is a representative sample? In retrospect, I should've considered the possibility that "people don't understand the metaethics sequence!" was reflective of a loud minority... on the other hand, can anyone think of reasons why this poll might be skewed towards people who understood the metaethics sequence?
Moss_Piglet (+3, 10y)
Because a large subset of people who don't understand things are unaware of their misunderstanding?
Douglas_Knight (+3, 10y)
Chris is surprised because he saw a lot of people saying that they themselves did not understand the sequence.
TheAncientGeek (-2, 10y)
Several people have tried to explain Lesswrongian metaethics to me, only to give up in confusion. Being able to explain something is the acid test of understanding it.
Carinthium (0, 10y)
I voted for "No", going by when I first read it.
[anonymous] (0, 10y)
Yes, I think I understood it at the time.
[anonymous] (-3, 10y)
No, I did not understand it or had significant trouble.
[anonymous] (-4, 10y)
(karma balance)

If you decide to write that post, it would be great if you started by describing the potential impact of metaethics on FAI design, to make sure that we're answering questions that need answering and aren't just confusions about words. If anyone wants to take a stab here in the comments, I'd be very interested.

TheOtherDave (+1, 10y)
Well... according to the SEP, metaethics encompasses the attempt to understand the presuppositions and commitments of moral practice. If I'm trying to engineer a system that behaves morally (which is what FAI design is, right?), it makes some sense that I'd want to understand that stuff, just as if I'm trying to engineer a system that excavates tunnels I'd want to understand the presuppositions and commitments of tunnel excavation. That said, from what I've seen it's not clear to me that the actual work that's been done in this area (e.g., in the Metaethics Sequence) actually serves any purpose other than rhetorical.
V_V (0, 10y)
I think that framing the issue of AI safety in terms of "morality" or "friendliness" is a form of misleading anthropomorphization. Morality and friendliness are specific traits of human psychology which won't necessarily generalize well to artificial agents (even attempts to generalize them to non-human animals are often far-fetched). I think that AI safety would probably be best dealt with in the framework of safety engineering.
TheOtherDave (+1, 10y)
All right. I certainly agree with you that talking about "morality" or "friendliness" without additional clarifications leads most people to conclusions that have very little to do with safe AI design. Then again, if we're talking about self-improving AIs with superhuman intelligence (as many people on this site are) I think the same is true of talking about "safety."

Should we expect metaethics to affect normative ethics? Should people who care about behaving morally therefore care about metaethics at all?

Put another way: assume that there is a true, cognitivist, non-nihilist metaethical theory M. (That is, M asserts that there exists at least one true moral judgment.) Do we expect that people who know or believe M will act more morally, or even have more accurate normative-ethical beliefs, than people who do not?

It's conceivable for metaethics to not affect normative ethics — by analogy to the metaphysics of mathem... (read more)

Douglas_Knight (+5, 10y)
A tangential response on mathematics:

Today there is little disagreement over inference, but a century ago there was a well-known conflict over the axiom of choice and a lesser-known conflict over propositional logic. I've never been clear on the philosophy of intuitionism, but it was the driving force behind constructive mathematics. And it is pretty clear that Platonism demands proof by contradiction.

As for the axioms of set theory, Platonists debate which axioms to add, while formalists say that undecidability is the end of the story. Platonists pretty consistently approve of large cardinal axioms, but I don't know that there's a good reason for their agreement. They certainly disagree about the continuum hypothesis.

That's just Platonic set theorists. Mainstream mathematicians tend to (1) have a less pronounced philosophy and (2) not care about large cardinals, even if they are Platonists (but perhaps only because they haven't studied set theory). Bourbaki and Grothendieck used large cardinals in mainstream work, but lately there has been a turn toward standardizing on ZFC.

Going back to the more fundamental issue of constructive math: many years ago, I heard a talk by a mathematician who had looked into formal proof checkers. They came out of CS departments, and he was surprised to find that they were all constructivist. I'm not sure whether this reflects a philosophical difference between math and CS, rather than minimalism or a planned application of the Curry-Howard correspondence.
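A minimal Lean 4 sketch of the Curry-Howard point (an illustration, not a claim about the particular proof checkers mentioned above): a constructive proof term is literally a small program that manipulates evidence, which is why checkers built around the correspondence are constructivist by default, and only accept a classical principle such as double-negation elimination when a classical axiom is invoked explicitly.

```lean
-- Constructive: under Curry-Howard, the proof term is a small program
-- that rearranges the evidence it is given.
example (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩

-- Non-constructive: double-negation elimination has no such program;
-- the checker accepts it only via an explicit classical axiom
-- (Lean's Classical.byContradiction).
example (p : Prop) : ¬¬p → p :=
  fun hnn => Classical.byContradiction (fun hn => hnn hn)
```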
Carinthium (+1, 10y)
If we take for granted that there is a true metaethical theory, then it depends on what that metaethical theory says. Unlike Eliezer, I would argue that there are plenty of possible metaethical theories that would at least arguably override subjective opinion. Two examples are the Will of God theory (if an omnipotent God existed) and the Purpose theory (which states that although humans are free-willed, some actions do or do not contribute to achieving a human's natural purpose in life; said purpose is meant to be coherent, unlike evolutionary purpose, so better achievement would lead to satisfaction in the long run). These are debatable, but either way they make ethics more than mere human opinion.

Without any rational evidence, moral nihilism cannot be considered refuted. Under Eliezer's theory, moral nihilism is refuted in a sense, but without a rational argument to oppose it, the metaethicist has no answer. I was a moral nihilist until I read and understood the Sequences, for example.

Finally, metaethics is useful in one particular scenario: the ethical dilemma. When there is a conflict between two desires, both of which feel like they have some claim to moral rightness, correct metaethics is essential to sort out what is best to do. None of this helps with acting more selflessly and less selfishly, or with deciding to do what is right against selfish instincts. However, that's not what it needs to do.
TheOtherDave (0, 10y)
If I understand you correctly, your claim is that if this turns out to be true, then I ought to perform those acts which contribute to achieving my natural purpose, whether I net-value satisfaction or not. Yes? Is it? It seems like object-level ethics achieves this purpose perfectly well. If it returns the result that they are equally good to do, then the correct thing to do is pick one. What do I need metaethics for, here?
Carinthium (0, 10y)
The probability of that theory being true in reality is very, very low; it is a hypothetical universe. However, given that human beings have a tendency to define ethics in an objective light, in such a universe it would make sense to call it "objective ethics". Admittedly I assume you value satisfaction here, but my argument is about what to call moral behaviour more than about what you 'should' do.

Assuming Eliezer's metaethics is actually true, you have a very good point. Eliezer, however, might argue that it is necessary to avoid becoming a 'morality pump': doing a series of actions which feel right but which have effects in the world that cancel each other out or end up at a clear loss. However, there are other plausible theories. One possible theory (similar to one I once held but which I'm not sure about now) would say that you need to think through the implications of both courses of action, and how you would feel about the results, as best you can, so you don't regret your decision.

In addition, you should at least concede that your theory only works in this universe, not in some possible universes. It really depends upon the assumption that Eliezer's metaethics, or something similar to it, is the true metaethics.
TheOtherDave (0, 10y)
I apologize, but after reading this a few times I don't really understand what you're saying here, not even approximately enough to ask clarifying questions. It's probably best to drop the thread here.

Reading the comments on the metaethics sequence, though, hasn't enlightened me about what exactly people had a problem with, aside from a lot of arguing about definitions over whether Eliezer counts as a relativist.

Since you (apparently) understand him, Chris, maybe you could settle the matter.

I didn't understand it the first time, probably because I hadn't yet fully absorbed A Human's Guide to Words, which it mimics heavily.

Never heard of rigid designators; the metaethics sequence made perfect sense to me.