Eliezer_Yudkowsky comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong

33 Post author: lukeprog 29 January 2011 07:58PM


Comment author: Eliezer_Yudkowsky 30 January 2011 02:49:14AM 20 points [-]

The closest point I've found to my metaethics in standard philosophy was called "moral functionalism" or "analytical descriptivism".

Cognitivism: Yes, moral propositions have truth-value, but not all people are talking about the same facts when they use words like "should", thus creating the illusion of disagreement.

Motivation: You're constructed so that you find some particular set of logical facts and physical facts impel you to action, and these facts are what you are talking about when you are talking about morality: for example, faced with the problem of dividing a pie among 3 people who all worked equally to obtain it and are all equally hungry, you find the mathematical fact that 1/3, 1/3, 1/3 is an equal division compelling - and more generally you name the compelling logical facts associated with this issue as "fairness", for example.

(Or as it was written in Harry Potter and the Methods of Rationality:

"Mr. Potter, in the end people all do what they want to do. Sometimes people give names like 'right' to things they want to do, but how could we possibly act on anything but our own desires?"

"Well, obviously I couldn't act on moral considerations if they lacked the power to move me. But that doesn't mean my wanting to hurt those Slytherins has the power to move me more than moral considerations!")

Moral epistemology: Statements can be true only when there is something they are about which makes them true, something that fits into the Tarskian schema "'X' is true iff X". I know of only two sorts of bearers of truth-value, two sorts of things that sentences can be about: physical facts (chains of cause and effect; physical reality is made out of causes a la Judea Pearl) and logical validities (which conclusions follow from which premises). Moral facts are a mixture of both; if you throw mud on a painting it becomes physically less beautiful, but for a fixed painting its "beauty" is a logical fact, the result of running the logical "beauty" function on it.
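[Editorial sketch: the painting example above can be rendered as a toy pure function. The `beauty` measure below is entirely made up, my illustration and not Yudkowsky's; it shows only the shape of the claim, that mud-throwing is a physical event changing the function's input, while the function's value on any fixed input is a logical fact.]

```python
# Toy sketch of "physical facts vs. logical facts" in the painting example.
# The painting's state is a physical fact; the beauty function is fixed,
# so beauty(painting) for a fixed painting is a logical fact.

def beauty(painting):
    """A made-up stand-in 'beauty' function: fraction of mud-free area.
    The real function would be enormously complex; only the structure
    of the claim is illustrated here."""
    return painting["clean_area"] / painting["total_area"]

painting = {"clean_area": 100.0, "total_area": 100.0}
before = beauty(painting)

# Throwing mud is a physical cause-and-effect event: it changes the painting...
painting["clean_area"] -= 30.0
after = beauty(painting)

# ...but it does not change the beauty function itself. The painting became
# physically less beautiful because the function's *input* changed.
assert after < before
```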

Comment author: lukeprog 30 January 2011 04:59:42AM *  7 points [-]

Eliezer,

Thanks for your reply! Hopefully you'll have time to answer a few questions...

  1. Can anything besides Gary's preferences provide a justification for saying that "Gary should_gary X"? (My own answer would be "No.")

  2. By saying "Gary should_gary X", do you mean that "Gary would X if Gary was fully informed and had reached a state of reflective equilibrium with regard to terminal values, moral arguments, and what Gary considers to be a moral argument"? (This makes should-statements "subjectively objective" even if they are computationally intractable, and seems to capture what you're saying in the paragraph here that begins "But the key notion is the idea that...")

  3. Or, perhaps you are saying that one cannot give a concise definition of "should," as Larry D'Anna interprets you to be saying?

Comment author: Eliezer_Yudkowsky 30 January 2011 04:06:19PM 15 points [-]

Can anything besides Gary's preferences provide a justification for saying that "Gary should_gary X"? (My own answer would be "No.")

This strikes me as an ill-formed question for reasons I tried to get at in No License To Be Human. When Gary asks "What is right?" he is asking the question e.g. "What state of affairs will help people have more fun?" and not "What state of affairs will match up with the current preferences of Gary's brain?" and the proof of this is that if you offer Gary a pill to change his preferences, Gary won't take it because this won't change what is right. Gary's preferences are about things like fairness, not about Gary's preferences. Asking what justifies should_Gary to Gary is either answered by having should_Gary wrap around and judge itself ("Why, yes, it does seem better to care about fairness than about one's own desires") or else is a malformed question implying that there is some floating detachable ontologically basic property of rightness, apart from particular right things, which could be ripped loose of happiness and applied to pain instead and make it good to do evil.

By saying "Gary should_gary X", do you mean

Shouldness does incorporate a concept of reflective equilibrium (people recognize apparent changes in their own preferences as cases of being "mistaken"), but should_Gary makes no mention of Gary (except insofar as Gary's welfare is one of Gary's terminal values) but instead is about a large logical function which explicitly mentions things like fairness and beauty. This large function is rightness which is why Gary knows that you can't change what is right by messing with Gary's brain structures or making Gary want to do something else.

Or, perhaps you are saying that one cannot give a concise definition of "should"

You can arrive at a concise metaethical understanding of what sort of thing shouldness is. It is not possible to concisely write out the large function that any particular human refers to by "should", which is why all attempts at definition seem to fall short; and since for any particular definition it always seems like "should" is detachable from that definition, this reinforces the false impression that "should" is an undefinable extra supernatural property a la Moore's Open Question.

By far the hardest part of naturalistic metaethics is getting people to realize that it changes absolutely nothing about morals or emotions, just like the fact of a deterministic physical universe never had any implications for the freeness of our will to begin with.

I also note that although morality is certainly not written down anywhere in the universe except human brains, what is written is not about human brains, it is about things like fairness; nor is it written that "being written in a human brain" grants any sort of normative status. So the more you talk about "fulfilling preferences", the less the subject matter of what you are discussing resembles the subject matter that other people are talking about when they talk about morality, which is about how to achieve things like fairness. But if you built a Friendly AI, you'd build it to copy "morality" out of the brains where that morality is written down, not try to manually program in things like fairness (except insofar as you were offering a temporary approximation explicitly defined as temporary). It is likewise extremely hard to get people to realize that this level of indirection, what Bostrom terms "indirect normativity", is as close as you can get to getting any piece of physical matter to compute what is right.

If you want to talk about the same thing other people are talking about when they talk about what's right, I suggest consulting William Frankena's wonderful list of some components of the large function:

"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."

(Just wanted to quote that so that I didn't entirely fail to talk about morality in between all this stuff about preferences and metaethics.)

Comment author: lukeprog 30 January 2011 07:57:21PM 10 points [-]

Damn. I still haven't had my "Aha!" moment on this. I'm glad that ata, at least, appears to have it, but unfortunately I don't understand ata's explanation, either.

I'll understand if you run out of patience with this exercise, but I'm hoping you won't, because if I can come to understand your meta-ethical theory, then perhaps I will be able to explain it to all the other people on Less Wrong who don't yet understand it, either.

Let me start by listing what I think I do understand about your views.

1. Human values are complex. As a result of evolution and memetic history, we humans value/desire/want many things, and our values cannot be compressed to any simple function. Certainly, we do not only value happiness or pleasure. I agree with this, and the neuroscience supporting your position is nicely summarized in Tim Schroeder's Three Faces of Desire. We can value damn near anything. There is no need to design an artificial agent to value only one thing, either.

2. Changing one's meta-ethics need not change one's daily moral behavior. You write about this here, and I know it to be true from personal experience. When deconverting from Christianity, I went from divine command theory to error theory in the course of about 6 months. About a year after that, I transitioned from error theory to what was then called "desire utilitarianism" (now called "desirism"). My meta-ethical views have shifted in small ways since then, and I wouldn't mind another radical transition if I can be persuaded. But I'm not sure yet that desirism and your own meta-ethical theory are in conflict.

3. Onlookers can agree that Jenny has 5 units of Fred::Sexiness, which can be specified in terms of curves, skin texture, etc. This specification need not mention Fred at all. As explained here.

4. Recursive justification can't "hit bottom" in "an ideal philosophy student of perfect emptiness"; all I can do is reflect on my mind's trustworthiness, using my current mind, in a process of something like reflective equilibrium, even though reflective coherence isn't specified as the goal.

5. Nothing is fundamentally moral. There is nothing that would have value if it existed in an isolated universe all by itself that contained no valuers.

Before I go on... do I have this right so far?
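[Editorial sketch of point 3: the "Fred::Sexiness" idea is that a function named after an observer can be written out entirely in terms of observable features, with no mention of the observer in its definition. The function and weights below are invented stand-ins, not anything from the thread.]

```python
# Toy stand-in for the function Fred's brain computes. Once spelled out
# in terms of curves, skin texture, etc., its definition contains no
# reference to Fred; "Fred" appears only in the function's *name*.

def fred_sexiness(person):
    # Invented features and weights, purely for illustration.
    return 2.0 * person["curve_score"] + 3.0 * person["skin_score"]

jenny = {"curve_score": 1.0, "skin_score": 1.0}
assert fred_sexiness(jenny) == 5.0  # "Jenny has 5 units of Fred::Sexiness"

# Any onlooker evaluating this same function gets the same answer;
# the specification need not mention Fred at all.
```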

Comment author: Eliezer_Yudkowsky 30 January 2011 08:25:52PM 10 points [-]

1-4 yes.

5 is questionable. When you say "Nothing is fundamentally moral" can you explain what it would be like if something was fundamentally moral? If not, the term "fundamentally moral" is confused rather than untrue; it's not that we looked in the closet of fundamental morality and found it empty, but that we were confused and looking in the wrong closet.

Indeed my utility function is generally indifferent to the exact state of universes that have no observers, but this is a contingent fact about me rather than a necessary truth of metaethics, for indifference is also a value. A paperclip maximizer would very much care that these uninhabited universes contained as many paperclips as possible - even if the paperclip maximizer were outside that universe and powerless to affect its state, in which case it might not bother to cognitively process the preference.

You seem to be angling for a theory of metaethics in which objects pick up a charge of value when some valuer values them, but this is not what I think, because I don't think it makes any moral difference whether a paperclip maximizer likes paperclips. What makes moral differences are things like, y'know, life, consciousness, activity, blah blah.

Comment author: lukeprog 30 January 2011 11:08:26PM 3 points [-]

Eliezer,

In Setting Up Metaethics, you wrote:

And if you've been reading along this whole time, you know the answer isn't going to be, "Look at this fundamentally moral stuff!"

I didn't know what "fundamentally moral" meant, so I translated it to the nearest term with which I'm more familiar, what Mackie called "intrinsic prescriptivity." Or, perhaps more clearly, "intrinsic goodness," following Korsgaard:

Objects, activities, or whatever have an instrumental value if they are valued for the sake of something else - tools, money, and chores would be standard examples. A common explanation of the supposedly contrasting kind, intrinsic goodness, is to say that a thing is intrinsically good if it is valued for its own sake, that being the obvious alternative to a thing's being valued for the sake of something else. This is not, however, what the words "intrinsic value" mean. To say that something is intrinsically good is not by definition to say that it is valued for its own sake: it is to say that it has goodness in itself. It refers, one might say, to the location or source of the goodness rather than the way we value the thing. The contrast between instrumental and intrinsic value is therefore misleading, a false contrast. The natural contrast to intrinsic goodness - the value a thing has "in itself" - is extrinsic goodness - the value a thing gets from some other source. The natural contrast to a thing that is valued instrumentally or as a means is a thing that is valued for its own sake or as an end.

So what I mean to say in (5) is that nothing is intrinsically good (in Korsgaard's sense). That is, nothing has value in itself. Things only have value in relation to something else.

I'm not sure whether this notion of intrinsic value is genuinely confused or merely not-understood-by-Luke-Muehlhauser, but I'm betting it is either confused or false. ("Untrue" is the term usually used to capture a statement's being either incoherent or meaningful-and-false: see for example Richard Joyce on error theory.)

But now, I'm not sure you agree with (5) as I intended it. Do you think life, consciousness, activity, and some other things have value-in-themselves? Do these things have intrinsic value?

Thanks again for your reply. I'm going to read Chappell's comment on this thread, too.

Comment author: Eliezer_Yudkowsky 31 January 2011 05:17:43AM 10 points [-]

Do you think a heap of five pebbles is intrinsically prime, or does it get its primeness from some extrinsic thing that attaches a tag with the five English letters "PRIME" and could in principle be made to attach the same tag to composite heaps instead? If you consider "beauty" as the logical function your brain's beauty-detectors compute, then is a screensaver intrinsically beautiful?

Does the word "intrinsic" even help, considering that it invokes bad metaphysics all by itself? In the physical universe there are only quantum amplitudes. Moral facts are logical facts, but not all minds are compelled by that-subject-matter-which-we-name-"morality"; one could as easily build a mind to be compelled by the primality of a heap of pebbles.

Comment author: wedrifid 31 January 2011 07:23:19AM 0 points [-]

Good answer!

Comment author: XiXiDu 31 January 2011 11:24:53AM *  1 point [-]

So the short answer is that there are different functions that use the same labels to designate different relations while we believe that the same labels designate the same functions?

Comment author: XiXiDu 31 January 2011 10:58:28AM 1 point [-]

I wonder if Max Tegmark would have written a similar comment. I'm not sure whether, with regard to Luke's question, there is a meaningful difference between saying there are only quantum amplitudes and saying there are only relations.

Comment author: Eliezer_Yudkowsky 31 January 2011 01:46:01PM 5 points [-]

What I'm saying is that in the physical world there are only causes and effects, and the primeness of a heap of pebbles is not an ontologically basic fact operating as a separate and additional element of physical reality, but it is nonetheless about as "intrinsic" to the heap of pebbles as anything.

Once morality stops being mysterious and you start cashing it out as a logical function, the moral awfulness of a murder is exactly as intrinsic as the primeness of a heap of pebbles. Just as we don't care whether pebble heaps are prime or experience any affect associated with their primeness, the Pebblesorters don't care or compute whether a murder is morally awful; and this doesn't mean that a heap of five pebbles isn't really prime or that primeness is arbitrary, nor yet that on the "moral Twin Earth" murder could be a good thing. And there are no little physical primons associated with the pebble-heap that could be replaced by compositons to make it composite without changing the number of pebbles; and no physical stone tablet on which morality is written that could be rechiseled to make murder good without changing the circumstances of the murder; but if you're looking for those you're looking in the wrong closet.

Comment author: XiXiDu 31 January 2011 02:53:10PM *  -2 points [-]

Are you arguing that the world is basically a cellular automaton and that therefore beauty is logically implied to be a property of some instance of the universe? If some agent does perceive beauty then that is a logically implied fact about the circumstances. Asking if another agent would perceive the same beauty could be rephrased as asking about the equality of the expressions of an equation?

I think a lot of people are arguing about the ambiguity of the string "beauty" as it is multiply realized.

Comment author: wedrifid 31 January 2011 01:30:13AM 0 points [-]

But now, I'm not sure you agree with (5) as I intended it. Do you think life, consciousness, activity, and some other things have value-in-themselves? Do these things have intrinsic value?

It is rather difficult to ask that question in the way you intend it. Particularly if the semantics have "because I say so" embedded rather than supplemented.

Comment author: Eugine_Nier 30 January 2011 08:33:39PM 1 point [-]

When you say "Nothing is fundamentally moral" can you explain what it would be like if something was fundamentally moral? If not, the term "fundamentally moral" is confused rather than untrue; it's not that we looked in the closet of fundamental morality and found it empty, but that we were confused and looking in the wrong closet.

BTW, in your post Are Your Enemies Innately Evil?, I think you are making a similar mistake about the concept of evil.

Comment author: ata 30 January 2011 09:45:04PM *  4 points [-]

"Innately" is being used in that post in the sense of being a fundamental personality trait or a strong predisposition (as in "Correspondance Bias", to which that post is a followup). And fundamental personality traits and predispositions do exist — including some that actually do predispose people toward being evil (e.g. sociopathy) — so, although the phrase "innately evil" is a bit dramatic, I find its meaning clear enough in that post's context that I don't think it's a mistake similar to "fundamentally moral". It's not arguing about whether there's a ghostly detachable property called "evil" that's independent of any normal facts about a person's mind and history.

Comment author: torekp 01 February 2011 01:05:45AM 0 points [-]

When you say "Nothing is fundamentally moral" can you explain what it would be like if something was fundamentally moral?

He did, by implication, in describing what it's like if nothing is:

There is nothing that would have value if it existed in an isolated universe all by itself that contained no valuers.

Clearly, many of the items on EY's list, such as fun, humor, and justice, require the existence of valuers. The question above then amounts to whether all items of moral goodness require the existence of valuers. I think the question merits an answer, even if (see below) it might not be the one lukeprog is most curious about.

Or, perhaps more clearly, "intrinsic goodness," following Korsgaard [...]

Unfortunately, lukeprog changed the terms in the middle of the discussion. Not that there is anything wrong with the new question (and I like EY's answer).

Comment author: XiXiDu 30 January 2011 08:45:26PM *  0 points [-]

I don't think it makes any moral difference whether a paperclip maximizer likes paperclips. What makes moral differences are things like, y'know, life, consciousness, activity, blah blah.

How would a universe shaped by CEV differ from one in which a Paperclip Maximizer equipped everyone with the desire to maximize paperclips? And how does a universe with as many discrete conscious entities as possible differ from one with a single universe-spanning consciousness?

If it doesn't make any difference, then how can we be sure that the SIAI won't just implement the first fooming AI with whatever terminal goal it desires?

I don't see how you can argue that the question "What is right?" is about the state of affairs that will help people to have more fun and yet claim that you don't think that "it makes any moral difference whether a paperclip maximizer likes paperclips".

Comment author: ata 30 January 2011 09:27:45PM *  2 points [-]

How would a universe shaped by CEV differ from one in which a Paperclip Maximizer equipped everyone with the desire to maximize paperclips? And how does a universe with as many discrete conscious entities as possible differ from one with a single universe-spanning consciousness?

If a paperclip maximizer modified everyone such that we really only valued paperclips and nothing else, and we then ran CEV, then CEV would produce a powerful paperclip maximizer. This is... I'm not going to say it's a feature, but it's not a bug, at least. You can't expect CEV to generate accurate information about morality if you erase morality from the minds it's looking at. (You could recover some information about morality by looking at history, or human DNA (if the paperclip maximizer didn't modify that), etc., but then you'd need a strategy other than CEV.)

I don't think I understand your second question.

I don't see how you can argue that the question "What is right?" is about the state of affairs that will help people to have more fun and yet claim that you don't think that "it makes any moral difference whether a paperclip maximizer likes paperclips"

That depends on whether the paperclip maximizer is sentient, whether it just makes paperclips or it actually enjoys making paperclips, etc. If those are the case, then its preferences matter... a little. (So let's not make one of those.)

Comment author: XiXiDu 31 January 2011 09:15:52AM *  1 point [-]

That depends on whether the paperclip maximizer is sentient, whether it just makes paperclips or it actually enjoys making paperclips, etc.

All those concepts seem vague: to be sentient, to enjoy. Do you need to figure out how to define those concepts mathematically before you'll be able to implement CEV? Or are you just going to let extrapolated human volition decide about that? If so, how can you possibly make claims about how much the preferences of a paperclip maximizer matter? Maybe it will all turn out to be wireheading in the end...

What is really weird is that Yudkowsky uses the word "right" in reference to actions affecting other agents, yet doesn't think it would be reasonable to assign moral weight to the preferences of a paperclip maximizer.

Comment author: endoself 31 January 2011 09:56:46AM *  1 point [-]

CEV will decide. In general, it seems unlikely that the preferences of nonsentient objects will have moral value.

Edit: Looking back, this comment doesn't really address the parent. Extrapolated human volition will be used to determine which things are morally significant. I think it is relatively probable that wireheading might turn out to be morally necessary. Eliezer does think that the preferences of a paperclip maximizer would have moral value if one existed. (If a nonexistent paperclip maximizer had moral worth, so would a nonexistent paperclip minimizer. This isn't completely certain, because paperclip maximizers might gain moral significance from a property other than existence that is not shared with paperclip minimizers, but at this point, this is just speculation and we can do little better without CEV.) A nonsentient paperclip maximizer probably has no more moral value than a rock with "make paperclips" written on the side.

The reason that CEV is only based on human preferences is that, as humans, we want to create an algorithm that does what is right, and humans are the only things we have that know what is right. If other species have moral value then humans, if we knew more, would care about them. If there is nothing in human minds that could motivate us to care about some specific thing, then what reason could we possibly have for designing an AI to care about that thing?

Comment author: turchin 05 February 2011 03:00:32PM -1 points [-]

Near future: "You are a paperclip maximizer! Kill him!"

Comment author: TheOtherDave 30 January 2011 09:15:02PM 1 point [-]

Paperclips aren't part of fun, on EY's account as I understand it, and therefore not relevant to morality or right. If paperclip maximizers believe otherwise they are simply wrong (perhaps incorrigibly so, but wrong nonetheless)... right and wrong don't depend on the beliefs of agents, on this account.

So those claims seem consistent to me.

Similarly, a universe in which a PM equipped everyone with the desire to maximize paperclips would therefore be a universe with less desire for fun in it. (This would presumably in turn cause it to be a universe with less fun in it, and therefore a less valuable universe.)

I should add that I don't endorse this view, but it does seem to be pretty clearly articulated/presented. If I'm wrong about this, then I am deeply confused.

Comment author: XiXiDu 31 January 2011 09:09:59AM *  0 points [-]

If paperclip maximizers believe otherwise they are simply wrong (perhaps incorrigibly so, but wrong nonetheless)... right and wrong don't depend on the beliefs of agents, on this account.

I don't understand how someone can arrive at "right and wrong don't depend on the beliefs of agents".

Comment author: TheOtherDave 31 January 2011 01:29:48PM 1 point [-]

I conclude that you use "I don't understand" here to indicate that you don't find the reasoning compelling. I don't find it compelling, either -- hence, my not endorsing it -- so I don't have anything more to add on that front.

Comment author: XiXiDu 31 January 2011 01:57:04PM *  -1 points [-]

If those people propose that utility functions are timeless (e.g. the Mathematical Universe), or simply an intrinsic part of the quantum amplitudes that make up physical reality (is there a meaningful difference?), then under that assumption I agree. If beauty can be captured as a logical function then women.beautiful is right independent of any agent that might endorse that function. The problem of differing tastes and differing aesthetic values, which leads to sentences like "beauty is in the eye of the beholder", is a result of trying to derive functions from the labeling of relations. There can be different functions that assign the same label to different relations. "x is R-related to y" can be labeled "beautiful", but so can xSy. So while some people talk about the ambiguity of the label "beauty" and conclude that what is beautiful is agent-dependent, other people talk about the set of functions that are labeled as beauty-functions, or that assign the label "beautiful" to certain relations, and conclude that their output is agent-independent.

Comment author: XiXiDu 30 January 2011 04:44:17PM *  7 points [-]

After trying to read No License To Be Human I officially give up reading the sequences for now and will postpone them until I've learned a lot more. I think it is wrong to suggest that anyone can read the sequences: either you have to be a prodigy or a postgraduate. The second comment on that post expresses my own feelings. Can people actually follow Yudkowsky's posts? It's over my head.

Comment author: Dr_Manhattan 30 January 2011 05:08:28PM 5 points [-]

I agree with your sentiment, but I suggest not giving up so easily. I have the same feeling after many sequence posts, but some of the ones I grokked were real gems and seriously affected my thinking.

Also, borrowing some advice on reading hard papers, it's re-reading that makes a difference.

Also, as my coach put it "the best stretching for doing sidekicks is actually doing sidekicks".

Comment author: wedrifid 30 January 2011 05:19:50PM *  5 points [-]

When Gary asks "What is right?" he is asking the question e.g. "What state of affairs will help people have more fun?" and not "What state of affairs will match up with the current preferences of Gary's brain?"

I do not necessarily disagree with this, but the following:

and the proof of this is that if you offer Gary a pill to change his preferences, Gary won't take it because this won't change what is right.

... does not prove the claim. Gary would still not take the pill if the question he was asking was "What state of affairs will match up with the current preferences of Gary's brain?". A reference to the current preferences of Gary's brain is different from asking the question "What is a state of affairs in which there is a high satisfaction of the preferences in the brain of Gary?".

Comment author: XiXiDu 30 January 2011 06:00:19PM *  2 points [-]

I do not necessarily disagree with this...

It seems so utterly wrong to me that I concluded it must be me who simply doesn't understand it. Why would it be right to help people have more fun if helping people have more fun does not match up with your current preferences? The main reason I was able to abandon religion was realizing that what I want implies what is right. That still feels intuitively right. I didn't expect to see many people on LW argue that there exist preference/(agent/mind)-independent moral statements like 'it is right to help people' or 'killing is generally wrong'. I got a similar reply from Alicorn. Fascinating. This makes me doubt my own intelligence more than anything I've come across so far. If I parse this right, it would mean that a Paperclip Maximizer is morally bankrupt?

Comment author: Eugine_Nier 30 January 2011 06:29:37PM 4 points [-]

The main reason I was able to abandon religion was realizing that what I want implies what is right. That still feels intuitively right. I didn't expect to see many people on LW argue that there exist preference/(agent/mind)-independent moral statements like 'it is right to help people' or 'killing is generally wrong'.

Well, something I've been noticing in the "tell your rationalist origin stories" thread is that the reasons a lot of people give for why they left their religion aren't actually valid arguments. Make of that what you will.

If I parse this right it would mean that a Paperclip Maximizer is morally bankrupt?

Yes. It is morally bankrupt. (or would you not mind turning into paperclips if that's what the Paperclip Maximizer wanted?)

BTW, your current position is more-or-less what theists mean when they say atheists are amoral.

Comment author: XiXiDu 30 January 2011 06:45:59PM *  1 point [-]

Yes. It is morally bankrupt. (or would you not mind turning into paperclips if that's what the Paperclip Maximizer wanted?)

Yes, but that is a matter of taste.

BTW, your current position is more-or-less what theists mean when they say atheists are amoral.

Why would I ever change my current position? If Yudkowsky told me there were some moral laws written into the fabric of reality, what difference would that make? Either such laws are imperative, so that I am unable to escape them, or I simply ignore them if they oppose my preferences.

Assume all I wanted to do is to kill puppies. Now Yudkowsky told me that this is prohibited and I will suffer disutility because of it. The crucial question would be, does the disutility outweigh the utility I assign to killing puppies? If it doesn't, why should I care?

Comment author: TheOtherDave 30 January 2011 09:38:18PM *  4 points [-]

Perhaps you assign net utility to killing puppies. If you do, you do. What EY tells you, what I tell you, what is prohibited, etc., has nothing to do with it. Nothing forces you to care about any of that.

If I understand EY's position, it's that it cuts both ways: whether killing puppies is right or wrong doesn't force you to care, but whether or not you care doesn't change whether it's right or wrong.

If I understand your position, it's that what's right and wrong depends on the agent's preferences: if you prefer killing puppies, then killing puppies is right; if you don't, it isn't.

My own response to EY's claim is "How do you know that? What would you expect to observe if it weren't true?" I'm not clear what his answer to that is.

My response to your claim is "If that's true, so what? Why is right and wrong worth caring about, on that model... why not just say you feel like killing puppies?"

Comment author: XiXiDu 31 January 2011 10:14:01AM *  0 points [-]

My response to your claim is "If that's true, so what? Why is right and wrong worth caring about, on that model... why not just say you feel like killing puppies?"

I don't think those terms are useless, or that morality doesn't exist. But you have to use those words with great care, because on their own they are meaningless. If I know what you want, I can approach the conditions that would be right for you. If I know how you define morality, I can act morally according to you. But I will do so only if I care about your preferences. If part of my preferences is to see other human beings happy, then I have to account for your preferences to some extent, which makes them a subset of my preferences. All those different values are then weighted accordingly. Do you disagree with that understanding?

Comment author: TheOtherDave 31 January 2011 02:30:54PM 5 points [-]

I agree with you that your preferences account for your actions, and that my preferences account for my actions, and that your preferences can include a preference for my preferences being satisfied.

But I think it's a mistake to use the labels "morality" and "preferences" as though they are interchangeable.

If you have only one referent -- which it sounds like you do -- then I would recommend picking one label and using it consistently, and not use the other at all. If you have two referents, I would recommend getting clear about the difference and using one label per referent.

Otherwise, you introduce way too many unnecessary vectors for confusion.

It seems relatively clear to me that EY has two referents -- he thinks there are two things being talked about. If I'm right, then you and he disagree on something, and by treating the language of morality as though it referred to preferences you obscure that disagreement.

More precisely: consider a system S comprising two agents A and B, each of which has a set of preferences Pa and Pb, and each of which has knowledge of their own and the other's preferences. Suppose I commit an act X in S.

If I've understood correctly, you and EY agree that knowing all of that, you know enough in principle to determine whether X is right or wrong. That is, there isn't anything left over, there's no mysterious essence of rightness or external privileged judge or anything like that.

In this, both of you disagree with many other people, such as theists (who would say that you need to consult God's will to make that determination) and really really strict consequentialists (who would say that you need to consult the whole future history of the results of X to make that determination).

If I've understood correctly, you and EY disagree on symmetry. That is, if A endorses X and B rejects X, you would say that whether X is right or not is undetermined... it's right by reference to A, and wrong by reference to B, and there's nothing more to be said. EY, if I understand what he's written, would disagree -- he would say that there is, or at least could be, additional computation to be performed on S that will tell you whether X is right or not.

For example, if A = pebblesorters and X = sorting four pebbles into a pile, A rejects X, and EY (I think) would say that A is wrong to do so... not "wrong with reference to humans," but simply wrong. You would (I think) say that such a distinction is meaningless, "wrong" is always with reference to something. You consider "wrong" a two-place predicate, EY considers "wrong" a one-place predicate -- at least sometimes. I think.

For example, if A = SHFP and B = humans and X = allowing people to experience any pain at all, A rejects X and B endorses X. You would say that X is "right_human" and "wrong_SHFP", and that whether X is right or not is an insufficiently specified question. EY would say that X is right and the SHFP are mistaken.
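(Purely as an illustration, not from the original discussion: the one-place vs. two-place reading of "wrong" in the examples above can be sketched in hypothetical Python, with invented value sets standing in for the agents' actual values.)

```python
# Hypothetical sketch: "wrong" as a two-place vs. a one-place predicate.
# The value sets below are invented stand-ins, not anyone's actual values.

def wrong_two_place(act, value_system):
    """XiXiDu's reading: wrongness is always relative to some value system."""
    return act in value_system["rejected"]

HUMAN = {"rejected": {"allow no pain at all"}}
SHFP = {"rejected": {"allow any pain at all"}}

# EY's reading (as described above): "wrong" rigidly designates one fixed
# value set, so the second argument is baked in rather than speaker-relative.
def wrong_one_place(act):
    return wrong_two_place(act, HUMAN)

print(wrong_two_place("allow any pain at all", SHFP))  # True: wrong_SHFP
print(wrong_one_place("allow any pain at all"))        # False: not wrong simpliciter
```

On this toy picture the disagreement is only about whether the second argument is free or fixed; both sides compute the same two-place relation underneath.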

So, I disagree with your understanding, or at least your labeling, insofar as it leads you to elide real disagreements. I endorse clarity about disagreement.

As for whether I agree with your position or EY's, I certainly find yours easier to justify.

Comment author: endoself 31 January 2011 09:09:27AM 0 points [-]

The fact that killing puppies is wrong follows from the definition of wrong. The fact that Eliezer does not want to do what is wrong is a fact about his brain, determined by introspection.

Comment author: Matt_Simpson 31 January 2011 12:41:52AM *  3 points [-]

Why would it be right to help people to have more fun if helping people to have more fun does not match up with your current preferences?

Because right is a rigid designator. It refers to a specific set of terminal values. If your terminal values don't match up with this specific set of values, then they are wrong, i.e. not right. Not that you would particularly care, of course. From your perspective, you only want to maximize your own values and no others. If your values don't match up with the values defined as moral, so much for morality. But you still should be moral because should, as it's defined here, refers to a specific set of terminal values - the one we labeled "right."

(Note: I'm using the term should exactly as EY uses it, unlike in my previous comments in these threads. In my terms, should=should_human and on the assumption that you, XiXiDu, don't care about the terminal values defined as right, should_XiXiDu =/= should)
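(A hypothetical sketch, not part of the original comment: the rigid-designator point and the should_T notation can be illustrated in Python. The value sets and names below are invented.)

```python
# Hypothetical sketch of the should_T notation: "should" rigidly designates
# one particular value set (labeled "right"), no matter who is speaking.

def should_T(values, action):
    """Generic indexed 'should': does `action` serve the value set `values`?"""
    return action in values

RIGHT = {"help people", "be fair"}   # invented stand-in for the set labeled "right"
CLIPPY = {"maximize paperclips"}     # a paperclip maximizer's values

def should(action):
    # On EY's semantics, bare "should" always means should_RIGHT,
    # even when the speaker's own values are CLIPPY.
    return should_T(RIGHT, action)

print(should("help people"))          # True
print(should("maximize paperclips"))  # False: should_Clippy != should
```

The design point is that `should` takes no value-set argument at all: the referent is fixed once and for all, which is what distinguishes it from the speaker-relative `should_T`.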

Comment author: XiXiDu 31 January 2011 09:35:30AM *  3 points [-]

I'm getting the impression that nobody here actually disagrees but that some people are expressing themselves in a very complicated way.

I parse your comment to mean that the definition of 'moral' is a set of terminal values of some agents, and 'should' is the term they use to designate instrumental actions that serve those values?

Comment author: endoself 31 January 2011 10:00:54AM 1 point [-]

Your second paragraph looks correct. 'Some agents' refers to humanity rather than any group of agents. Technically, should is the term anything should use when discussing humanity's goals, at least when speaking Eliezer.

Your first paragraph is less clear. You definitely disagree with others. There are also some other disagreements.

Comment author: XiXiDu 31 January 2011 10:19:11AM *  0 points [-]

You definitely disagree with others.

Correct, I disagree. What I wanted to say with my first paragraph was that I might only disagree because I don't understand what others believe, since they expressed it in a way that was too complicated for me to grasp. You are also correct that I myself was not clear in what I tried to communicate.

ETA: That is, if you believe that disagreement fundamentally arises out of misunderstanding, as long as one is not talking about matters of taste.

Comment author: endoself 31 January 2011 06:31:05PM 2 points [-]

In Eliezer's metaethics, all disagreements arise from misunderstanding. A paperclip maximizer agrees about what is right; it just has no reason to act correctly.

Comment author: Matt_Simpson 31 January 2011 03:56:06PM 0 points [-]

Yep, with the caveat that endoself added below: "should" refers to humanity's goals, no matter who is using the term (on EY's theory and semantics).

Comment author: hairyfigment 30 January 2011 10:11:50PM 1 point [-]

The main reason I was able to abandon religion was realizing that what I want implies what is right.

And if you modify this to say a certain subset of what you want -- the subset you'd still call "right" given omniscience, I think -- then it seems correct, as far as it goes. It just doesn't get you any closer to a more detailed answer, specifying the subset in question.

Or not much closer. At best it tells you not to worry that you 'are' fundamentally evil and that no amount of information would change that.

Comment author: Emile 30 January 2011 09:35:24PM 0 points [-]

The main reason I was able to abandon religion was realizing that what I want implies what is right. That still feels intuitively right. I didn't expect to see many people on LW argue that there exist preference/(agent/mind)-independent moral statements like 'it is right to help people' or 'killing is generally wrong'.

For what it's worth, I'm also one of those people, and I never did have religion. I don't know if there's a correlation there.

Comment author: timtyler 30 January 2011 06:34:03PM *  0 points [-]

The main reason I was able to abandon religion was realizing that what I want implies what is right. That still feels intuitively right. I didn't expect to see many people on LW argue that there exist preference/(agent/mind)-independent moral statements like 'it is right to help people' or 'killing is generally wrong'.

It is useful to think of right and wrong as being some agent's preferences. That agent doesn't have to be you - or even to exist IRL. If you are a sadist (no slur intended) you might want to inflict pain - but that would not make it "right" - in the eyes of conventional society.

It is fairly common to use "right" and "wrong" to describe society-level preferences.

Comment author: XiXiDu 30 January 2011 06:53:51PM 0 points [-]

If you are a sadist (no slur intended) you might want to inflict pain - but that would not make it "right" - in the eyes of conventional society.

Why would a sadistic Boltzmann brain conclude that it is wrong to be a sadistic Boltzmann brain? Whatever some society thinks is completely irrelevant to an agent with outlier preferences.

Comment author: timtyler 30 January 2011 08:12:07PM *  0 points [-]

Morality serves several functions:

  • It is a guide relating to what to do;
  • It is a guide relating to what behaviour to punish;
  • It allows for the signalling of goodness and virtue;
  • It allows agents to manipulate others, by labelling them or their actions as bad.

The lower items on the list have some significance, IMO.

Comment author: Pfft 01 February 2011 03:38:55AM 1 point [-]

Perhaps a better thought experiment, then, is to offer Gary the chance to travel back in time and feed his 2-year-old self the pill. Or, if you dislike time machines in your thought experiments, we can simply ask Gary whether or not he now would have wanted his parents to have given him the pill when he was a child. Presumably the answer will still be no.

Comment author: wedrifid 01 February 2011 03:57:08AM *  1 point [-]

If time travel is to be considered, then we must emphasize that when we say 'current preferences' we do not mean "preferences at time Time.now, whatever we can make those preferences be" but rather "I want things X, Y, Z to happen, regardless of the state of the atoms that make up me at this or any other time." Changing yourself to not want X, Y or Z will make X, Y and Z less likely to happen, so you don't want to do that.

Comment author: Vladimir_Nesov 30 January 2011 10:41:06AM *  2 points [-]

Gary's preference is not itself justification, rather it recognizes moral arguments, and not because it's Gary's preference, but for its own specific reasons. Saying that "Gary's preference states that X is Gary_right" is roughly the same as "Gary should_Gary X".

(This should_T terminology was discouraged by Eliezer in the sequences, perhaps because it invites incorrect moral-relativistic thinking, as if any decision problem could be taken as one's own by any other agent, and also because it makes you think of ways of referring to morality, seeing it as a black box, instead of looking inside morality. And you have to look inside even to refer to it, but won't notice that until you stop referring and try looking.)

By saying "Gary should_gary X", do you mean that "Gary would X if Gary was fully informed and had reached a state of reflective equilibrium with regard to terminal values, moral arguments, and what Gary considers to be a moral argument"?

To a first approximation, but not quite, since it might be impossible for any computation, not to speak of a mere human, to know what is right; one can only make right guesses.

This makes should-statements "subjectively objective"

Every well-defined question has in a sense a "subjectively objective" answer: there's "subjectivity" in the way the question has to be interpreted by an agent that takes on a task of answering it, and "objectivity" in the rules of reasoning established by such interpretation, that makes some possible answers incorrect with respect to that abstract standard.

Or, perhaps you are saying that one cannot give a concise definition of "should,"

I don't quite see how this is opposed to the other points of your comment. If you actually start unpacking the notion, you'll find that it's a very long list. Alternatively, you might try referring to that list by mentioning it, but that's a tricky task for various reasons, including the need to use morality to locate (and precisely describe the location of) the list. Perhaps we can refer to morality concisely, but it's not clear how.

Comment author: Matt_Simpson 31 January 2011 12:13:27AM *  2 points [-]

(This should_T terminology was discouraged by Eliezer in the sequences, perhaps because it invites incorrect moral-relativistic thinking, as if any decision problem could be taken as one's own by any other agent, and also because it makes you think of ways of referring to morality, seeing it as a black box, instead of looking inside morality. And you have to look inside even to refer to it, but won't notice that until you stop referring and try looking.)

I had no idea what Eliezer was talking about originally until I started thinking in terms of should_T. Based on that and the general level of confusion among people trying to understand his metaethics, I concluded that EY was wrong - more people would understand if he talked in terms of should_T. Based on some of the back and forth here, I'm revising that opinion somewhat. Apparently this stuff is just confusing, and I may be atypical in initially understanding it better in those terms.

Comment author: XiXiDu 30 January 2011 01:39:32PM *  -2 points [-]

Can anything besides Gary's preferences provide a justification for saying that "Gary should_gary X"? (My own answer would be "No.")

Yes, natural laws. If Gary's preferences do not align with reality then Gary's preferences are objectively wrong'.

When people talk about morality they implicitly talk about fields like decision theory, game theory, or economics. The mistake is to take an objective point of view, one similar to CEV. Something like CEV will result in some sort of game-theoretic equilibrium. Yet each of us is a discrete agent that does not maximally value the extrapolated volition of other agents. People usually try to objectify, to find common ground, a compromise. This leads to all sorts of confusion between agents with maximally opposing terminal goals. In other words, if you are an outlier then there exists no common ground, and therefore something like CEV will be opposed.

ETA

' I should clarify what I mean by that sentence (if I want people to understand me).

I assume that Gary has a reward function and is the result of an evolutionary process. Gary should alter his preferences, as they do not suit his reward function and decrease his fitness. I realize that in a sense I just move the problem onto another level. But if Gary's preferences cannot be approached, then they can provide no justification for any action towards an implied goal. At that point the goal-oriented agent that is Gary will be functionally defunct, and other, more primitive processes will take over and consequently override Gary's preferences. In this sense, reality demands that Gary should change his mind.

Comment author: Vladimir_Nesov 30 January 2011 04:10:09AM *  3 points [-]

Why consider physical facts separately? Can't they be thought of as logical facts, in the context of agent's epistemology? (You'll have lots of logical uncertainty about them, and even normative structures will look more like models of uncertainty, but still.) Is it just a matter of useful heuristic separation of the different kinds of data? (Expect not, in your theory, in some sense.)

Comment author: XiXiDu 30 January 2011 01:17:53PM *  1 point [-]

Yes, moral propositions have truth-value...

But are those truth-values intersubjectively recognizable?

The average person believes morality to be about imperative terminal goals: you ought to want that which is objectively right and good. But there exists no terminal goal that is objectively desirable. You can assign infinite utility to any action and thereby outweigh any consequences. What is objectively verifiable is how to maximize the efficiency of reaching a discrete terminal goal.

Comment author: wedrifid 30 January 2011 01:46:49PM 1 point [-]

But there exists no terminal goal that is objectively (intersubjectively) desirable.

If you mean intersubjectively say it. Objectively has a slightly different meaning. In particular, see 'objectively subjective'.

Comment author: XiXiDu 30 January 2011 02:23:21PM 1 point [-]

I changed it.