
By Which It May Be Judged

Post author: Eliezer_Yudkowsky 10 December 2012 04:26AM 31 points

Followup to: Mixed Reference: The Great Reductionist Project

Humans need fantasy to be human.

"Tooth fairies? Hogfathers? Little—"

Yes. As practice. You have to start out learning to believe the little lies.

"So we can believe the big ones?"

Yes. Justice. Mercy. Duty. That sort of thing.

"They're not the same at all!"

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

- Susan and Death, in Hogfather by Terry Pratchett

Suppose three people find a pie - that is, three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory. Zaire wants the entire pie; Yancy thinks that 1/3 each is fair; and Xannon thinks that fair would be taking into equal account everyone's ideas about what is "fair".

I myself would say unhesitatingly that a third of the pie each is fair. "Fairness", as an ethical concept, can get a lot more complicated in more elaborate contexts. But in this simple context, a lot of other things that "fairness" could depend on, like work inputs, have been eliminated or made constant. Assuming no relevant conditions other than those already stated, "fairness" simplifies to the mathematical procedure of splitting the pie into equal parts; and when this logical function is run over physical reality, it outputs "1/3 for Zaire, 1/3 for Yancy, 1/3 for Xannon".

Or to put it another way - just like we get "If Oswald hadn't shot Kennedy, nobody else would've" by running a logical function over a true causal model - so too we can get the hypothetical 'fair' situation, whether or not it actually happens, by running the physical starting scenario through a logical function that describes what a 'fair' outcome would look like.
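As a toy sketch of what "running this logical function" could look like (the code and names here are purely illustrative, nothing load-bearing):

    def fair_division(claimants, pie=1.0):
        # Toy 'fairness' function for the stripped-down scenario: with work
        # inputs and every other contextual variable held constant, fairness
        # reduces to an equal split among the people who found the pie.
        share = pie / len(claimants)
        return {person: share for person in claimants}

    # Running the logical function over the physical starting scenario:
    print(fair_division(["Zaire", "Yancy", "Xannon"]))  # each claimant gets 1/3
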

So am I (as Zaire would claim) just assuming-by-authority that I get to have everything my way, since I'm not defining 'fairness' the way Zaire wants to define it?

No more than mathematicians are flatly ordering everyone to assume-without-proof that two different numbers can't have the same successor. For fairness to be what everyone thinks is "fair" would be entirely circular, structurally isomorphic to "Fzeem is what everyone thinks is fzeem"... or like trying to define the counting numbers as "whatever anyone thinks is a number". It only even looks coherent because everyone secretly already has a mental picture of "numbers" - because their brain already navigated to the referent.  But something akin to axioms is needed to talk about "numbers, as opposed to something else" in the first place. Even an inchoate mental image of "0, 1, 2, ..." implies the axioms no less than a formal statement - we can extract the axioms back out by asking questions about this rough mental image.

Similarly, the intuition that fairness has something to do with dividing up the pie equally, plays a role akin to secretly already having "0, 1, 2, ..." in mind as the subject of mathematical conversation. You need axioms, not as assumptions that aren't justified, but as pointers to what the heck the conversation is supposed to be about.

Multiple philosophers have suggested that this stance seems similar to "rigid designation", i.e., when I say 'fair' it intrinsically, rigidly refers to something-to-do-with-equal-division. I confess I don't see it that way myself - if somebody thinks of Euclidean geometry when you utter the sound "num-berz" they're not doing anything false, they're associating the sound to a different logical thingy. It's not about words with intrinsically rigid referential power, it's that the words are window dressing on the underlying entities. I want to talk about a particular logical entity, as it might be defined by either axioms or inchoate images, regardless of which word-sounds may be associated to it.  If you want to call that "rigid designation", that seems to me like adding a level of indirection; I don't care about the word 'fair' in the first place, I care about the logical entity of fairness.  (Or to put it even more sharply: since my ontology does not have room for physics, logic, plus designation, I'm not very interested in discussing this 'rigid designation' business unless it's being reduced to something else.)

Once issues of justice become more complicated and all the contextual variables get added back in, we might not be sure if a disagreement about 'fairness' reflects:

  1. The equivalent of a multiplication error within the same axioms - incorrectly dividing by 3.  (Or more complicatedly:  You might have a sophisticated axiomatic concept of 'equity', and incorrectly process those axioms to invalidly yield the assertion that, in a context where 2 of the 3 must starve and there's only enough pie for at most 1 person to survive, you should still divide the pie equally instead of flipping a 3-sided coin.  Where I'm assuming that this conclusion is 'incorrect', not because I disagree with it, but because it didn't actually follow from the axioms.)
  2. Mistaken models of the physical world fed into the function - mistakenly thinking there are 2 pies, or mistakenly thinking that Zaire has no subjective experiences and is not an object of ethical value.
  3. People associating different logical functions to the letters F-A-I-R, which isn't a disagreement about some common pinpointed variable, but just different people wanting different things.

There are a lot of people who feel that this picture leaves out something fundamental, especially once we make the jump from "fair" to the broader concept of "moral", "good", or "right".  And it's this worry about leaving-out-something-fundamental that I hope to address next...

...but please note, if we confess that 'right' lives in a world of physics and logic - because everything lives in a world of physics and logic - then we have to translate 'right' into those terms somehow.

And that is the answer Susan should have given - if she could talk about sufficiently advanced epistemology, sufficiently fast - to Death's entire statement:

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy. And yet — Death waved a hand. And yet you act as if there is some ideal order in the world, as if there is some ... rightness in the universe by which it may be judged.

"But!" Susan should've said.  "When we judge the universe we're comparing it to a logical referent, a sort of thing that isn't in the universe!  Why, it's just like looking at a heap of 2 apples and a heap of 3 apples on a table, and comparing their invisible product to the number 6 - there isn't any 6 if you grind up the whole table, even if you grind up the whole universe, but the product is still 6, physico-logically speaking."


If you require that Rightness be written on some particular great Stone Tablet somewhere - to be "a light that shines from the sky", outside people, as a different Terry Pratchett book put it - then indeed, there's no such Stone Tablet anywhere in our universe.

But there shouldn't be such a Stone Tablet, given standard intuitions about morality.  This follows from the Euthyphro Dilemma out of ancient Greece.

The original Euthyphro dilemma goes, "Is it pious because it is loved by the gods, or loved by the gods because it is pious?" The religious version goes, "Is it good because it is commanded by God, or does God command it because it is good?"

The standard atheist reply is:  "Would you say that it's an intrinsically good thing - even if the event has no further causal consequences which are good - to slaughter babies or torture people, if that's what God says to do? If so, then it seems to me that you have no morality and that your religion has destroyed your humanity."

So if we can't make it good to slaughter babies by tweaking the state of God, then morality doesn't come from God; so goes the standard atheist argument.

But if you can't make it good to slaughter babies by tweaking the physical state of anything - if we can't imagine a world where some great Stone Tablet of Morality has been physically rewritten, and what is right has changed - then this is telling us that...

(drumroll)

...what's "right" is a logical thingy rather than a physical thingy, that's all.  The mark of a logical validity is that we can't concretely visualize a coherent possible world where the proposition is false.

And I mention this in hopes that I can show that it is not moral anti-realism to say that moral statements take their truth-value from logical entities.  Even in Ancient Greece, philosophers implicitly knew that 'morality' ought to be such an entity - that it couldn't be something you found when you ground the Universe to powder, because then you could resprinkle the powder and make it wonderful to kill babies - though they didn't know how to say what they knew.


There are a lot of people who still feel that Death would be right, if the universe were all physical; that the kind of dry logical entity I'm describing here isn't sufficient to carry the bright alive feeling of goodness.

And there are others who accept that physics and logic is everything, but who - I think mistakenly - go ahead and also accept Death's stance that this makes morality a lie, or, in lesser form, that the bright alive feeling can't make it.  (Sort of like people who accept an incompatibilist theory of free will, also accept physics, and conclude with sorrow that they are indeed being controlled by physics.)

In case anyone is bored that I'm still trying to fight this battle, well, here's a quote from a recent Facebook conversation with a famous early transhumanist:

No doubt a "crippled" AI that didn't understand the existence or nature of first-person facts could be nonfriendly towards sentient beings... Only a zombie wouldn't value Heaven over Hell. For reasons we simply don't understand, the negative value and normative aspect of agony and despair is built into the nature of the experience itself. Non-reductionist? Yes, on a standard materialist ontology. But not IMO within a more defensible Strawsonian physicalism.

It would actually be quite surprisingly helpful for increasing the percentage of people who will participate meaningfully in saving the planet, if there were some reliably-working standard explanation for why physics and logic together have enough room to contain morality.  People who think that reductionism means we have to lie to our children, as Pratchett's Death advocates, won't be much enthused about the Center for Applied Rationality.  And there are a fair number of people out there who still advocate proceeding in the confidence of ineffable morality to construct sloppily designed AIs.

So far I don't know of any exposition that works reliably - for the thesis that morality, including our intuitions about whether things really are justified and so on, is preserved in the analysis to physics plus logic; that morality has been explained rather than explained away.  Nonetheless I shall now take another stab at it, starting with a simpler bright feeling:


When I see an unusually neat mathematical proof, unexpectedly short or surprisingly general, my brain gets a joyous sense of elegance.

There's presumably some functional slice through my brain that implements this emotion - some configuration subspace of spiking neural circuitry which corresponds to my feeling of elegance.  Perhaps I should say that elegance is merely about my brain switching on its elegance-signal?  But there are concepts like Kolmogorov complexity that give more formal meanings of "simple" than "Simple is whatever makes my brain feel the emotion of simplicity."  Anything you do to fool my brain wouldn't make the proof really elegant, not in that sense.  The emotion is not free of semantic content; we could build a correspondence theory for it and navigate to its logical+physical referent, and say:  "Sarah feels like this proof is elegant, and her feeling is true."  You could even say that certain proofs are elegant even if no conscious agent sees them.
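For a toy illustration of a simplicity measure that doesn't consult anyone's feelings (a sketch only: Kolmogorov complexity itself is uncomputable, and compressed length is just a crude stand-in for it):

    import random
    import zlib

    def description_length(s):
        # Crude, computable stand-in for Kolmogorov complexity: the length
        # in bytes of a compressed encoding of the string.
        return len(zlib.compress(s.encode("utf-8")))

    patterned = "ab" * 50                        # 100 characters, highly regular
    random.seed(0)
    noisy = "".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(100))

    print(description_length(patterned))         # small: the pattern compresses away
    print(description_length(noisy))             # larger: no short description found
    # Nothing here asks a brain how it feels; the comparison is about the strings.
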

My description of 'elegance' admittedly did invoke agent-dependent concepts like 'unexpectedly' short or 'surprisingly' general.  It's almost certainly true that with a different mathematical background, I would have different standards of elegance and experience that feeling on somewhat different occasions.  Even so, that still seems like moving around in a field of similar referents for the emotion - much more similar to each other than to, say, the distant cluster of 'anger'.

Rewiring my brain so that the 'elegance' sensation gets activated when I see mathematical proofs where the words have lots of vowels - that wouldn't change what is elegant.  Rather, it would make the feeling be about something else entirely; different semantics with a different truth-condition.

Indeed, it's not clear that this thought experiment is, or should be, really conceivable.  If all the associated computation is about vowels instead of elegance, then from the inside you would expect that to feel vowelly, not feel elegant...

...which is to say that even feelings can be associated with logical entities.  Though unfortunately not in any way that will feel like qualia if you can't read your own source code.  I could write out an exact description of your visual cortex's spiking code for 'blue' on paper, and it wouldn't actually look blue to you.  Still, on the higher level of description, it should seem intuitively plausible that if you tried rewriting the relevant part of your brain to count vowels, the resulting sensation would no longer have the content or even the feeling of elegance.  It would compute vowelliness, and feel vowelly.


My feeling of mathematical elegance is motivating; it makes me more likely to search for similar such proofs later and go on doing math.  You could construct an agent that tried to add more vowels instead, and if the agent asked itself why it was doing that, the resulting justification-thought wouldn't feel like because-it's-elegant, it would feel like because-it's-vowelly.

In the same sense, when you try to do what's right, you're motivated by things like (to yet again quote Frankena's list of terminal values):

"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."

If we reprogrammed you to count paperclips instead, it wouldn't feel like different things having the same kind of motivation behind it.  It wouldn't feel like doing-what's-right for a different guess about what's right.  It would feel like doing-what-leads-to-paperclips.

And I quoted the above list because the feeling of rightness isn't about implementing a particular logical function; it contains no mention of logical functions at all; in the environment of evolutionary ancestry nobody has heard of axiomatization; these feelings are about life, consciousness, etcetera.  If I could write out the whole truth-condition of the feeling in a way you could compute, you would still feel Moore's Open Question:  "I can see that this event is high-rated by logical function X, but is X really right?" - since you can't read your own source code and the description wouldn't be commensurate with your brain's native format.

"But!" you cry.  "But, is it really better to do what's right, than to maximize paperclips?"  Yes!  As soon as you start trying to cash out the logical function that gives betterness its truth-value, it will output "life, consciousness, etc. >B paperclips".  And if your brain were computing a different logical function instead, like makes-more-paperclips, it wouldn't feel better, it would feel moreclippy.

But is it really justified to keep our own sense of betterness?  Sure, and that's a logical fact - it's the objective output of the logical function corresponding to your experiential sense of what it means for something to be 'justified' in the first place.  This doesn't mean that Clippy the Paperclip Maximizer will self-modify to do only things that are justified; Clippy doesn't judge between self-modifications by computing justifications, but rather, computing clippyflurphs.

But isn't it arbitrary for Clippy to maximize paperclips?  Indeed; once you implicitly or explicitly pinpoint the logical function that gives judgments of arbitrariness their truth-value - presumably, revolving around the presence or absence of justifications - then this logical function will objectively yield that there's no justification whatsoever for maximizing paperclips (which is why I'm not going to do it) and hence that Clippy's decision is arbitrary. Conversely, Clippy finds that there's no clippyflurph for preserving life, and hence that it is unclipperiffic.  But unclipperifficness isn't arbitrariness any more than the number 17 is a right triangle; they're different logical entities pinned down by different axioms, and the corresponding judgments will have different semantic content and feel different.  If Clippy is architected to experience that-which-you-call-qualia, Clippy's feeling of clippyflurph will be structurally different from the way justification feels, not just red versus blue, but vision versus sound.

But surely one shouldn't praise the clippyflurphers rather than the just?  I quite agree; and as soon as you navigate referentially to the coherent logical entity that is the truth-condition of should - a function on potential actions and future states - it will agree with you that it's better to avoid the arbitrary than the unclipperiffic.  Unfortunately, this logical fact does not correspond to the truth-condition of any meaningful proposition computed by Clippy in the course of how it efficiently transforms the universe into paperclips, in much the same way that rightness plays no role in that-which-is-maximized by the blind processes of natural selection.

Where moral judgment is concerned, it's logic all the way down.  ALL the way down.  Any frame of reference where you're worried that it's really no better to do what's right than to maximize paperclips... well, that 'really' part has a truth-condition (or what does the "really" mean?) and as soon as you write out the truth-condition you're going to end up with yet another ordering over actions or algorithms or meta-algorithms or something.  And since grinding up the universe won't and shouldn't yield any miniature '>' tokens, it must be a logical ordering.  And so whatever logical ordering it is you're worried about, it probably does produce 'life > paperclips' - but Clippy isn't computing that logical fact any more than your pocket calculator is computing it.
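As a toy sketch of "different logical orderings over the same outcomes" (everything below - the outcomes, the features, the scoring functions - is invented for illustration, and counting lives is of course a vast oversimplification of betterness):

    # Two toy orderings over the same outcomes; neither function mentions,
    # computes, or is moved by the other.
    outcomes = {
        "flourishing_civilization": {"lives": 10**7, "paperclips": 0},
        "paperclip_tiled_universe": {"lives": 0, "paperclips": 10**15},
    }

    def betterness(o):      # hugely simplified stand-in for the ordering behind 'right'
        return o["lives"]

    def clippiness(o):      # Clippy's ordering: paperclips, and nothing else
        return o["paperclips"]

    print(max(outcomes, key=lambda k: betterness(outcomes[k])))   # flourishing_civilization
    print(max(outcomes, key=lambda k: clippiness(outcomes[k])))   # paperclip_tiled_universe
    # Each function objectively yields its own ranking; running one of them
    # computes nothing about the other, just as a pocket calculator computes neither.
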

Logical facts have no power to directly affect the universe except when some part of the universe is computing them, and morality is (and should be) logic, not physics.

Which is to say:

The old wizard was staring at him, a sad look in his eyes. "I suppose I do understand now," he said quietly.

"Oh?" said Harry. "Understand what?"

"Voldemort," said the old wizard. "I understand him now at last. Because to believe that the world is truly like that, you must believe there is no justice in it, that it is woven of darkness at its core. I asked you why he became a monster, and you could give no reason. And if I could ask him, I suppose, his answer would be: Why not?"

They stood there gazing into each other's eyes, the old wizard in his robes, and the young boy with the lightning-bolt scar on his forehead.

"Tell me, Harry," said the old wizard, "will you become a monster?"

"No," said the boy, an iron certainty in his voice.

"Why not?" said the old wizard.

The young boy stood very straight, his chin raised high and proud, and said: "There is no justice in the laws of Nature, Headmaster, no term for fairness in the equations of motion. The universe is neither evil, nor good, it simply does not care. The stars don't care, or the Sun, or the sky. But they don't have to! We care! There is light in the world, and it is us!"

 

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Standard and Nonstandard Numbers"

Previous post: "Mixed Reference: The Great Reductionist Project"

Comments (933)

Comment author: TsviBT 10 December 2012 07:37:46AM 33 points [-]

Is this a fair summary?

The answer to the clever meta-moral question, “But why should we care about morality?” is just “Because when we say morality, we refer to that-which-we-care-about - and, not to belabor the point, but we care about what we care about. Whatever you think you care about, which isn’t morality, I’m calling that morality also. Precisely which things are moral and which are not is a difficult question - but there is no non-trivial meta-question.”

Comment author: Qiaochu_Yuan 16 December 2012 01:13:04AM 15 points [-]

There is a non-trivial point in this summary, which is the meaning of "we." I could imagine a possible world in which the moral intuitions of humans diverge widely enough that there isn't anything that could reasonably be called a coherent extrapolated volition of humanity (and I worry that I already live there).

Comment author: tristanhaze 10 December 2012 06:02:33AM *  12 points [-]

Stimulating as always! I have a criticism to make of the use made of the term 'rigid designation'.

Multiple philosophers have suggested that this stance seems similar to "rigid designation", i.e., when I say 'fair' it intrinsically, rigidly refers to something-to-do-with-equal-division. I confess I don't see it that way myself [...]

What philosophers of language ordinarily mean by calling a term a rigid designator is not that, considered purely syntactically, it intrinsically refers to anything. The property of being a rigid designator is something which can be possessed by an expression in use in a particular language-system. The distinction is between expressions-in-use whose reference we let vary across counterfactual scenarios (or 'possible worlds'), e.g. 'The first person to climb Everest', and those whose reference remains stable, e.g. 'George Washington', 'The sum of two and two'.

There is some controversy over how to apply the rigid/non-rigid distinction to general terms like 'fair' (or predicates like 'is fair') - cf. Scott Soames' book Beyond Rigidity - but I think the natural thing to say is that 'is fair' is rigid, since it is used to attribute the same property across counterfactual scenarios, in contrast with a predicate like 'possesses my favourite property'.

Comment author: crazy88 10 December 2012 07:19:01AM *  9 points [-]

Multiple philosophers have suggested that this stance seems similar to "rigid designation", i.e., when I say 'fair' it intrinsically, rigidly refers to something-to-do-with-equal-division. I confess I don't see it that way myself - if somebody thinks of Euclidean geometry when you utter the sound "num-berz" they're not doing anything false, they're associating the sound to a different logical thingy. It's not about words with intrinsically rigid referential power, it's that the words are window dressing on the underlying entities.

I just wanted to agree with Tristanhaze here that this usage strikes me as non-standard. I want to put this in my own words so that Tristanhaze/Eliezer/others can correct me if I've got the wrong end of the stick.

If something is a rigid designator it means that it refers to the same thing in all possible worlds. To say it's non-rigid is to say it refers to different things in some possible worlds to others. This has nothing to do with whether different language users that use the phrase must always be referring to the same thing. So George Washington may be a rigid designator in that it refers to the same person in all possible worlds (bracketing issues of transworld identity) but that doesn't mean that in all possible worlds that person is called George Washington or that in all possible worlds people who use the name George Washington must be referring to this person or even that in the actual world all people who use the name George Washington must be referring to this person.

To say "water" is a rigid designator is to say that whatever possible world I am talking about, I am picking out the same thing when I use the word water (in a way that I wouldn't be when I say, "the tallest person in the world" - this would pick out different things in different worlds). But it doesn't say anything about whether I mean the same thing as other language users in this or other possible worlds.

ETA: So the relevance to the quoted section is that rigid designators aren't about whether someone that thinks of Euclidean geometry when you say "numbers" is right or wrong - it's about whether whatever they associate with that word is the same thing in all possible worlds (or whether it's a different thing in some worlds).

ETA 2: I take it that Eliezer's paragraph here is in response to comments like these. I'm in a bit of a rush and need to think about it some more but I think Richard may be making a different point here to the one Eliezer's making (on my reading). I think Richard is saying that what is "right" is rigidly determined by my current (idealised) desires - so in a possible world where I desired to murder, murder would still be wrong because "right" is a rigid designator (that is, right from the perspective of my language, a different language user - like the me that desires murder - might still use "right" to refer to something else according to which murder is right. See the point about George Washington being able to be rigid even if people in other possible worlds use that name to name someone else). On the other hand, my reading of Eliezer was that he was taking the claim that "right" (or "fair") is a rigid designator to mean something about the way different language users use the word "fair". Eliezer seemed to be suggesting that rigid designation implied that words intrinsically mean certain things and hence that rigid designation implies that if someone uses a word in a different way they are wrong (using numbers to refer to geometry). I could have misunderstood either of these two comments but if I haven't then it seems to me that Eliezer is using rigid designator in a non-standard way.

Comment author: RichardChappell 10 December 2012 05:37:36PM *  6 points [-]

Correct. Eliezer has misunderstood rigid designation here.

Comment author: Qiaochu_Yuan 16 December 2012 12:45:16AM *  3 points [-]

Can you give an example of a rigid designator (edit: that isn't purely mathematical / logical)? I don't understand how the concept is even coherent right now. "Issues of transworld identity" seem to be central and I don't know why you're sweeping them under the rug. More precisely, I do not understand how one goes about identifying objects in different possible worlds even in principle. I think that intuitions about this procedure are likely to be flawed because people do not consider possible worlds that are sufficiently different.

Comment author: crazy88 16 December 2012 06:43:47AM 1 point [-]

Okay, so three things are worth clarifying up front. First, this isn't my area of expertise so anything I have to say about the matter should be taken with a pinch of salt. Second, this is a complex issue and really would require 2 or 3 sequences of material to properly outline so I wouldn't read too much into the fact that my brief comment doesn't present a substantive outline of the issue. Third, I have no settled views on the issues of rigid designators, nor am I trying to argue for a substantive position on the matter so I'm not deliberately sweeping anything under the rug (my aim was to distinguish Eliezer's use of the phrase rigid designator from the standard usage and doing so doesn't require discussion of transworld identity: Eliezer was using it to refer to issues relating to different people whereas philosophers use it to refer to issues relating to a single person - or at least that's the simplified story that captures the crucial idea).

All that said, I'll try to answer your question. First, it might help to think of rigid designators as cases where the thing to be identified isn't simply to be identified with its broad role in the world. So "the inventor of bifocals" is the person that plays a certain role in the world - the role of inventing bifocals. So "the inventor of bifocals" is not a rigid designator. So the heuristic for identifying rigid designators is that they can't just be identified by their role in the world.

Given this, what are some examples of rigid designators? Well, the answer to this question will depend on who you ask. A lot of people, following Putnam, would take "water" (and other natural kind terms) to be a rigid designator. On this view, "water" rigidly refers to H2O, regardless of whether H2O plays the "water" role in some other possible world. So imagine a possible world where some other substance, XYZ, falls from the sky, slakes thirst, fills rivers and so on (that is, XYZ fills the water role in this possible world). On the rigid designation view, XYZ would not be water. So there's one example of a rigid designator (on one view).

Kripke (in his book Naming and Necessity) defends the view that names are rigid designators - so the name "Thomas Jefferson" refers to the same person in all possible worlds (this is where issues of transworld identity become relevant). This is meant to be contrasted with a view according to which the name "John Lennon" refers to the nearest and near-enough realiser of a certain description ("lead singer of the Beatles", etc.). So on this view, there are possible worlds where John Lennon is not the lead singer of the Beatles, even though the Beatles formed and had a singer that met many of the other descriptive features of John (born in the same town and so on).

Plausibly, what you take to be a rigid designator will depend on what you take possible worlds to be and what views you have on transworld identity. Note that your comment that it seems difficult to imagine how you could go about identifying objects in different possible worlds even in principle makes a very strong assumption about the metaphysics of possible worlds. For example, this difficulty would be most noticeable if possible worlds were concrete things that were causally distinct from us (as Lewis would hold). This difficulty is one major challenge to Lewis's view. However, very few philosophers actually agree with Lewis.

So what are some other views? Well Kripke thinks that we simply stipulate possible worlds (as I said, this isn't my area so I'm not entirely clear what he takes possible worlds to be - maximally consistent sets of sentences, perhaps - if anyone knows, I'd love to have this point clarified). That is, we say, "consider the possible world where Bill Gates won the presidency". As Kripke doesn't hold that possible worlds are real concrete entities, this stipulation isn't necessarily problematic. On Kripke's view, then, the problem of transworld identity is easy to solve.

More precisely, I do not understand how one goes about identifying objects in different possible worlds even in principle. I think that intuitions about this procedure are likely to be flawed because people do not consider possible worlds that are sufficiently different.

I don't have the time to go into more detail but it's worth noting that your comment about intuition is an important point depending on your view of what possible worlds are. However, there's definitely an overarching challenge to views according to which we should rely on our intuitions to determine what is possible.

Hope that helps clarify.

Comment author: Qiaochu_Yuan 16 December 2012 07:43:15AM *  2 points [-]

Thank you for the clarification. I agree that the question of what a possible world is is an important one, but the answer seems obvious to me: possible worlds are things that live inside the minds of agents (e.g. humans).

Water is one of the examples I considered and found incoherent. Once you start considering possible worlds with different laws of physics, it's extremely unclear to me in what sense you can identify types of particles in one world with particles in another type of world. I could imagine doing this by making intuitive identifications step by step along "paths" in the space of possible worlds, but then it's unclear to me how you could guarantee that the identifications you get this way are independent of the choice of path (this idea is motivated by a basic phenomenon in algebraic topology and complex analysis).

Comment author: Eliezer_Yudkowsky 10 December 2012 07:19:26PM 3 points [-]

I'd like to say "sure" and then delete that paragraph, but then somebody else in the comments will say that my essay is just talking about a rigid-designation theory of morality. I mean, that's the comment I've gotten multiple times previously. Anyone got a good idea for resolving this?

Comment author: crazy88 10 December 2012 09:01:56PM *  4 points [-]

You may have resolved this now by talking to Richard (who knows more about this than me) but, in case you haven't, I'll have a shot at it.

First, the distinction: Richard is using rigid designation to talk about how a single person evaluates counterfactual scenarios, whereas you seem to be taking it as a comment about how different people use the same word.

Second, relevance: Richard's usage allow you to respond to an objection. The objection asks you to consider the counterfactual situation where you desire to murder people and says murder must now be right so the theory is extremely subjective. You can respond that "right" is a rigid designator so it is still right to not murder in this counterfactual situation (though your counterpart here will use the word "right" differently).

Suggestion: perhaps edit the paragraph so as to discuss either this objection and defence or outline why the rigid designator view so characterised is not your view.

Comment author: dspeyer 10 December 2012 05:05:41AM 9 points [-]

I'm trying to understand this, and I'm trying to do it by being a little more concrete.

Suppose I have a choice to make, and my moral intuition is throwing error codes. I have two axiomatizations of morality that are capable of examining the choice, but they give opposite answers. Does anything in this essay help? If not, is there a future essay planned that will?

In a universe that contains a neurotypical human and clippy, and they're staring at each other, is there an asymmetry?

Comment author: Eliezer_Yudkowsky 10 December 2012 06:39:51AM 6 points [-]

Can you be more concrete? Some past or present actual situation?

Comment author: dspeyer 10 December 2012 06:14:24PM 6 points [-]

My actual situations are too complicated and I don't feel comfortable discussing them on the internet. So here's a fictional situation with real dilemmas.

Suppose I have a friend who is using drugs to self-destructive levels. This friend is no longer able to keep a job, and I've been giving him couch-space. With high probability, if I were to apply pressure, I could decrease his drug use. One axiomization says I should consider how happy he will be with an outcome, and I believe he'll be happier once he's sober and capable of taking care of himself. Another axiomization says I should consider how much he wants a course of action, and I believe he'll be angry at my trying to run his life.

As a further twist, he consistently says different things depending on which drugs he's on. One axiomization defines a person such that each drug-cocktail-personality is a separate person whose desires have moral weight. Another axiomization defines a person such that my friend is one person, but the drugs are making it difficult for him to express his desires -- the desires with moral weight are the ones he would have if he were sober (and it's up to me to deduce them from the evidence available).

Comment author: Qiaochu_Yuan 21 December 2012 02:13:21AM *  1 point [-]

My response to this situation depends on how he's getting money for drugs given that he no longer has a job and also on how much of a hassle it is for you to give him couch-space. If you don't have the right to run his life, he doesn't have the right to interfere in yours (by taking up your couch, asking you for drug money, etc.).

I am deeply uncomfortable with the drug-cocktail-personalities-as-separate-people approach; it seems too easily hackable to be a good foundation for a moral theory. It's susceptible to a variant of the utility monster, namely a person who takes a huge variety of drug cocktails and consequently has a huge collection of separate people in his head. A potentially more realistic variant of this strategy might be to start a cult and to claim moral weight for your cult's preferences once it grows large enough...

(Not that I have any particular cult in mind while saying this. Hail Xenu.)

Edit: I suppose your actual question is how the content of this post is relevant to answering such questions. I don't think it is, directly. Based on the subsequent post about nonstandard models of Peano arithmetic, I think Eliezer is suggesting an analogy between the question of what is true about the natural numbers and the question of what is moral. To address either question one first has to logically pinpoint "the natural numbers" and "morality" respectively, and this post is about doing the latter. Then one has to prove statements about the things that have been logically pointed to, which is a difficult and separate question, but at least an unambiguously meaningful one once the logical pinpointing has taken place.

Comment author: AlanCrowe 10 December 2012 12:10:21PM 7 points [-]

Haiti today is a situation that makes my moral intuition throw error codes. Population density is three times that of Cuba. Should we be sending aid? It would be kinder to send helicopter gunships and carry out a cull. Cut the population back to one tenth of its current level, then build paradise. My rival moral intuition is that culling humans is always wrong.

Trying to stay concrete and present, should I restrict my charitable giving to helping countries make the demographic transition? Within a fixed aid budget one can choose package A = (save one child, provide education, provide entry into global economy; 30 years later the child, now an adult, feeds his own family and has some money left over to help others) package B = (save four children; that's it, money all used up, thirty years later there are 16 children needing saving and it's not going to happen). Concrete choice of A over B: ignore Haiti and send money to Karuna Trust to fund education for untouchables in India, preferring to raise a few children out of poverty by letting other children die.

Comment author: Nornagest 10 December 2012 09:03:57PM *  17 points [-]

Population density is three times that of Cuba.

It's also about half that of Taiwan, significantly less than South Korea or the Netherlands, and just above Belgium, Israel, and Japan -- as well as very nearly on par with India, the country you're using as an alternative! I suspect your source may have overweighted population density as a factor in poor social outcomes.

Comment author: NancyLebovitz 10 December 2012 08:47:56PM *  12 points [-]

Is permitting or perhaps even helping Haitians to emigrate to other countries anywhere in the moral calculus?

Comment author: [deleted] 13 December 2012 08:30:09PM 8 points [-]

It would be kinder to send helicopter gunships and carry out a cull. Cut the population back to one tenth of its current level...

So you're facing a moral dilemma between giving to charity and murdering nine million people? I think I know what the problem might be.

Comment author: AlanCrowe 14 December 2012 01:29:35PM 0 points [-]

My original draft contained a long ramble about permanent Malthusian immiseration. History is a bit of a race. Can society progress fast enough to reach the demographic transition? Or does population growth redistribute all the gains in GDP so that individuals get poorer, life gets harder, the demographic transition doesn't happen,... If I were totally evil and wanted to fuck over as many people as I could, as hard as I could, my strategy for maximum holocaust is as follows.

  • Establish free mother-and-baby clinics
  • Provide free food for the under fives
  • Leverage the positive reputation from the first two to promote religions that oppose contraception
  • Leverage religious faith to get contraception legally prohibited

If I can get population growth to outrun technological gains in productivity, I can engineer a Limits to Growth-style crash. That will be vastly worse than any wickedness I could work by directly harming people.

Unfortunately, I had been reading various articles discussing the 40th Anniversary of the publication of the Limits to Growth book. So I deleted the set up for the moral dilemma from my comment, thinking that my readers will be over-familiar with concerns about permanent Malthusian immiseration, and pick up immediately on "aid as sabotage", and the creation of permanent traps.

My original comment was a disaster, but since I'm pig-headed I'm going to have another go at saying what it might mean for one's moral intuitions to throw error codes:

Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...

Comment author: MugaSofer 14 December 2012 01:43:31PM *  7 points [-]

Really? That's your plan for "maximum holocaust"? You'll do more good than harm in the short run, and if you run out of capital (not hard with such a wastefully expensive plan) then you'll do nothing but good.

This sounds to me like a political applause light, especially

  • Leverage the positive reputation from the first two to promote religions that oppose contraception
  • Leverage religious faith to get contraception legally prohibited

In essence, your statement boils down to "if I wanted to do the most possible harm, I would do what the Enemy are doing!" which is clearly a mindkilling political appeal.

(For reference, here's my plan for maximum holocaust: select the worst things going on in the world today. Multiply their evil by their likelihoods of success. Found a terrorist group attacking the winners. Be careful to kill lots of civilians without actually stopping your target.)

Comment author: [deleted] 14 December 2012 01:54:04PM *  2 points [-]

Hopefully this comment was intended as non-obvious form of satire, otherwise it's completely nonsensical.

You're - Mr. AlanCrowe, that is - mixing up aid that prevents temporary suffering with the lack of proper long-term solutions. As the saying goes:

"Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime."

You're forgetting the "teach a man to fish" part entirely. Which should be enough - given the context - to explain what's wrong with your reasoning. I could go on explaining further, but I don't want to talk about such heinous acts, the ones you mentioned, unnecessarily.

EDIT: Alright, sorry - I slightly overlooked the type of your mistake because I had an answer ready and recognized a pattern, so your mistake wasn't quite that skin-deep.

In any case, I think it's extremely insensitive and rash to excuse yourself so poorly of atrocities like these:

It would be kinder to send helicopter gunships and carry out a cull. Cut the population back to one tenth of its current level, then build paradise.

In any case, you falsely created a polarity between different attempts at optimizing charity here:

A = (save one child, provide education, provide entry into global economy; 30 years later the child, now an adult, feeds his own family and has some money left over to help others) package B = (save four children; that's it, money all used up, thirty years later there are 16 children needing saving and it's not going to happen).

And then, by means of trickery, you transformed it into "being unsympathetic now" + "sympathetic later" > "sympathetic now" > "more to be sympathetic about later"

However in the really real world each unnecessary death prevented counts, each starving child counts, at least in my book. If someone suffers right now in exchange for someone else not suffering later - nothing is gained.

Which to me looks like you're just eager to throw sympathy out the window in hopes of looking very rational in contrast. And with this false trickery you've made it look like these suffering people deserve what they get and there's nothing you can do about it. You could also accompany options A and B with option C "Save as many children as possible and fight harder to raise money for schools and infrastructure as well" not to mention that you can give food to people who are building those schools and it's not a zero-sum game.

Comment author: gwern 16 December 2012 04:20:46AM 1 point [-]

Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...

I'm afraid Franken Fran beat you to this story a while ago.

Comment author: Eugine_Nier 16 December 2012 04:04:44AM *  1 point [-]

Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...

I would be very happy that Dr. Evil appears to be maximally incompetent.

Seriously, why are you basing your analysis on a 40 year old book whose predictions have failed to come true?

Comment author: Eliezer_Yudkowsky 10 December 2012 07:17:10PM 17 points [-]

I don't see how these two frameworks are appealing to different terminal values - they seem to be arguments about which policies maximize consequential lives-saved over time, or maximize QALYs (Quality-Adjusted Life Years) over time. This seems like a surprisingly neat and lovely illustration of "disagreeing moral axioms" that turn out to be about instrumental policies without much in the way of differing terminal values, hence a dispute of fact with a true-or-false answer under a correspondence theory of truth for physical-universe hypotheses.

Comment author: army1987 12 December 2012 01:39:36PM 2 points [-]

ISTM he's not quite sure whether one QALY thirty years from now should be worth as much as one QALY now.

Comment author: JoachimSchipper 13 December 2012 08:25:18AM 1 point [-]

(Are you sure you want this posted under what appears to be a real name?)

Comment author: MugaSofer 13 December 2012 09:34:29AM 4 points [-]

Don't be absurd. How could advocating population control via shotgun harm one's reputation?

Comment author: nshepperd 10 December 2012 05:59:18AM 5 points [-]

Suppose I have a choice to make, and my moral intuition is throwing error codes. I have two axiomatizations of morality that are capable of examining the choice, but they give opposite answers.

If you're not sure which of two options is better, the only thing that will help is to think about it for a long time. (Note: if you "have two axiomatizations of morality", and they disagree, then at most one of them accurately describes what you were trying to get at when you attempted to axiomatize morality. To work out which one is wrong, you need to think about them for ages until you notice that one of them says something wrong.)

In a universe that contains a neurotypical human and clippy, and they're staring at each other, is there an asymmetry?

Yes, the human is better. Why? Because the human cares about what is better. In contrast to clippy, who just cares about what is paperclippier.

Comment author: army1987 10 December 2012 03:06:04PM 6 points [-]

Yes, the human is better. Why? Because the human cares about what is better. In contrast to clippy, who just cares about what is paperclippier.

And the clippy is clippier. Why? Because the clippy cares about what is clippier. In contrast to the human, who just cares about what is better.

Comment author: nshepperd 10 December 2012 03:47:19PM 3 points [-]

Indeed. However, a) betterness is obviously better than clippiness, and b) if dspeyer is anything like a typical human being, the implicit question behind "is there an asymmetry?" was "is one of them better?"

Comment author: Sengachi 21 December 2012 08:45:13AM 1 point [-]

And clippiness is obviously more clipperific. That doesn't actually answer the question.

Comment author: JonCB 11 December 2012 03:18:06AM 1 point [-]

What is your evidence for stating that human-betterness is "obviously better" than clippy-betterness? Your comment reads to me as though you're either arguing that 3 > Potato or that there exists a universally compelling argument. I could however be wrong.

Comment author: nshepperd 11 December 2012 04:28:44AM *  4 points [-]

"Human-betterness" and "clippy-betterness" are confused terminology. There's only betterness and clippiness. Clippiness is not a type of betterness. Humans generally care about betterness, paperclippers care about clippiness. You can't argue a paperclipper into caring about betterness.

I said that betterness is better than clippiness. This should be obvious, since it's a tautology.

Comment author: Sengachi 21 December 2012 08:44:27AM 2 points [-]

Ah, but Clippy is far more clipperific, and so will do more clippy things. Better is not clippy, why should it matter?

Comment author: Viliam_Bur 22 December 2012 02:54:39PM 5 points [-]

Perhaps it would help to taboo "symmetry", or at least to say what kind of... uhm, mapping... do we really expect here. Just some way to play with words, or something useful? How specifically useful?

Saying "humans : better = paperclips maximizers : more clippy" would be a correct answer in a test of verbal skills. Just be careful not to add a wrong connotation there.

Because saying "...therefore 'better' and 'more clippy' are just two different ways of being better, for two different species" would be nonsense, exactly like saying "...therefore 'more clippy' and 'better' are just two different ways of being more clippy, for two different species". No, being better is not a homo sapiens way to produce the most paperclips. And being more clippy is not a paperclip maximizer way to produce the most happiness (even for the paperclip maximizers).

Comment author: JonCB 10 December 2012 01:15:07PM *  2 points [-]

I am confused by what you mean by "better" here. Your statement makes sense to me if I replace better with "humanier" (more humanly? more human-like? Not humane... too much baggage). Is that what you mean?

Comment author: MrMind 10 December 2012 03:16:18PM 2 points [-]

Does anything in this essay help?

Probably this could (not) help

"And I quoted the above list because the feeling of rightness isn't about implementing a particular logical function; it contains no mention of logical functions at all; in the environment of evolutionary ancestry nobody has heard of axiomatization; these feelings are about life, consciousness, etcetera"

In a universe that contains a neurotypical human and clippy, and they're staring at each other, is there an asymmetry?

An asymmetry in what?

Comment author: Qiaochu_Yuan 10 December 2012 05:08:17AM 2 points [-]

Why do you have two axiomatizations of morality? Where did they come from? Is there a reason to suspect one or both of their sources?

Comment author: dspeyer 10 December 2012 06:40:25AM 5 points [-]

Because axiomatizations are hard. I tried twice. And probably messed up both times, but in different ways.

The axiomatizations are internally complete and consistent, so I understand two genuine logical objects, and I'm trying to understand which to apply.

(Note: my actual map of morality is more complicated and fuzzy -- I'm simplifying for sake of discussion)

Comment author: Benito 12 December 2012 06:38:49AM *  1 point [-]

If a single agent has conflicting desires (each of which it values equally) then it should work to alter its desires, so it chooses consistent desires that are most likely to be fulfilled.

To your latter question though, I think that what you're asking is "If two agents have utility functions that clash, which one is to be preferred?" Is it that all we can say is "Whichever one has the most resources and most optimisation power/intelligence will be able to put its goals into action and prevent the other one from fully acting upon its"?

Well, I think that the point Eliezer has talked about a few times before is that there is no ultimate morality, written into the universe that will affect any agent so as to act it out. You can't reason with an agent which has a totally different utility function. The only reason that we can argue with humans is that they're only human, and thus we share many desires. Figuring out morality isn't going to give you the powers to talk down Clippy from killing you for more paper clips. You aren't going to show how human 'morality', which actualises what humans prefer, is any more preferable than 'Clippy' ethics. He is just going to kill you.

So, let's now figure out exactly what we want most, (if we had our own CEV) and then go out and do it. Nobody else is gonna do it for us.

EDIT: First sentence 'conflicting desires'; I meant to say 'in principle unresolvable' like 'x' and '~x'. Of course, for most situations, you have multiple desires that clash, and you just have to perform utility calculations to figure out what to do.

Comment author: CCC 12 December 2012 08:32:30AM 2 points [-]

You can't reason with an agent which has a totally different utility function. The only reason that we can argue with humans is that they're only human, and thus we share many desires.

If you know (or correctly guess) the agent's utility function, and are able to communicate with it, then it may well be possible to reason with it.

Consider this situation: I am captured by a Paperclipper, which wishes to extract the iron from my blood and use it to make more paperclips (incidentally killing me in the process). I can attempt to escape by promising to send to the Paperclipper a quantity of iron - substantially more than can be found in my blood, and easier to extract - as soon as I am safe. As long as I can convince Clippy that I will follow through on my promise, I have a chance of living.

I can't talk Clippy into adopting my own morality. But I can talk Clippy into performing individual actions that I would prefer Clippy to do (or into refraining from other actions) as long as I ensure that Clippy can get more paperclips by doing what I ask than by not doing what I ask.

Comment author: Benito 12 December 2012 09:14:59AM 1 point [-]

Of course - my mistake. I meant that you can't alter an agent's desires by reason alone. You can't appeal to desires you have. You can only appeal to its desires. So, when he's going to turn your blood iron into paperclips, and you want to live, you can't try "But I want to live a long and happy life!". If Clippy hasn't got empathy, and you have nothing to offer that will help fulfill his own desires, then there's nothing to be done, other than trying to physically stop or kill him.

Maybe you'd be happier if you put him on a planet of his own, where a machine constantly destroyed paperclips, and he was happy making new ones. My point is just that, if you do decide to make him happy, it's not the optimal decision relative to a universal preference, or morality. It's just the optimal decision relative to your desires. Is that 'right'? Yes. That's what we refer to, when we say 'right'.

Comment author: MixedNuts 10 December 2012 09:27:23AM 24 points [-]

The standard religious reply to the baby-slaughter dilemma goes something like this:

Sure, if G-d commanded us to slaughter babies, then killing babies would be good. And if "2+2=3" were a theorem of PA, then "2+2=3" would be true. But G-d logically cannot command us to do a bad thing, any more than PA can prove something that doesn't follow from its axioms. (We use "omnipotent" to mean "really really powerful", not "actually omnipotent" which isn't even a coherent concept. G-d can't make a stone so heavy he can't lift it, draw a square circle, or be evil.) Religion has destroyed my humanity exactly as much as studying arithmetic has destroyed your numeracy. (Please pay no attention to the parts of the Bible where G-d commands exactly that.)

Comment author: Eliezer_Yudkowsky 10 December 2012 07:10:13PM 4 points [-]

Sure, and to the extent that somebody answers that way, or for that matter runs away from the question, instead of doing that thing where they actually teach you in Jewish elementary school that Abraham being willing to slaughter Isaac for God was like the greatest thing ever and made him deserve to be patriarch of the Jewish people, I will be all like, "Oh, so under whatever name, and for whatever reason, you don't want to slaughter children - I'll drink to that and be friends with you, even if the two of us think we have different metaethics justifying it". I wasn't claiming that accepting the first horn of the dilemma was endorsed by all theists or a necessary implication of theism - but of course, rejecting that horn is very standard atheism.

Comment author: MixedNuts 10 December 2012 07:35:14PM 17 points [-]

I don't think it's incompatible. You're supposed to really trust the guy because he's literally made of morality, so if he tells you something that sounds immoral (and you're not, like, psychotic) of course you assume that it's moral and the error is on your side. Most of the time you don't get direct exceptional divine commands, so you don't want to kill any kids. Wouldn't you kill the kid if an AI you knew to be Friendly, smart, and well-informed told you "I can't tell you why right now, but it's really important that you kill that kid"?

If your objection is that Mr. Orders-multiple-genocides hasn't shown that kind of evidence he's morally good, well, I got nuthin'.

Comment author: RobbBB 10 December 2012 09:29:23PM *  11 points [-]

You're supposed to really trust the guy because he's literally made of morality, so if he tells you something that sounds immoral (and you're not, like, psychotic) of course you assume that it's moral and the error is on your side.

What we have is an inconsistent set of four assertions:

  1. Killing my son is immoral.
  2. The Voice In My Head wants me to kill my son.
  3. The Voice In My Head is God.
  4. God would never want someone to perform an immoral act.

At least one of these has to be rejected. Abraham (provisionally) rejects 1; once God announces 'J/K,' he updates in favor of rejecting 2, on the grounds that God didn't really want him to kill his son, though the Voice really was God.

The problem with this is that rejecting 1 assumes that my confidence in my foundational moral principles (e.g., 'thou shalt not murder, self!') is weaker than my confidence in the conjunction of:

  • 3 (how do I know this Voice is God? the conjunction of 1,2,4 is powerful evidence against 3),
  • 2 (maybe I misheard, misinterpreted, or am misremembering the Voice?),
  • and 4.

But it's hard to believe that I'm more confident in the divinity of a certain class of Voices than in my moral axioms, especially if my confidence in my axioms is what allowed me to conclude 4 (God/morality identity of some sort) in the first place. The problem is that I'm the one who has to decide what to do. I can't completely outsource my moral judgments to the Voice, because my native moral judgments are an indispensable part of my evidence for the properties of the Voice (specifically, its moral reliability). After all, the claim is 'God is perfectly moral, therefore I should obey him,' not 'God should be obeyed, therefore he is perfectly moral.'
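(Spelling out the inconsistency as a small formal sketch - the letter labels are mine, matching the numbering above:)

```latex
% I: killing my son is immoral       V: the Voice wants me to kill my son
% G: the Voice is God                N: God never wants an immoral act performed
%
% From V and G, God wants me to kill my son; with N, that act cannot be
% immoral, so:
\[
  V \wedge G \wedge N \;\Rightarrow\; \neg I
\]
% which contradicts assertion 1 (I). Hence {I, V, G, N} is jointly
% inconsistent, and at least one of the four must be given up.
```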

Comment author: MixedNuts 10 December 2012 09:52:45PM 6 points [-]

Well, deities should make themselves clear enough that (2) is very likely (maybe the voice is pulling your leg, but it wants you to at least get started on the son-killing). (3) is also near-certain because you've had chats with this voice for decades, about moving and having kids and changing your name and whether the voice should destroy a city.

So this correctly tests whether you believe (4) more than (1) - whether your trust in G-d is greater than your confidence in your object-level judgement.

You're right that it's not clear why Abraham believes or should believe (4). His culture told him so and the guy has mostly done nice things for him and his wife, and promised nice things then delivered, but this hardly justifies blind faith. (Then again I've trusted people on flimsier grounds, if with lower stakes.) G-d seems very big on trust so it makes sense that he'd select the president of his fan club according to that criterion, and reinforce the trust with "look, you trusted me even though you expected it to suck, and it didn't suck".

Comment author: RobbBB 10 December 2012 10:35:49PM *  11 points [-]

Well, if we're shifting from our idealized post-Protestant-Reformation Abraham to the original Abraham-of-Genesis folk hero, then we should probably bracket all this Medieval talk about God's omnibenevolence and omnipotence. The Yahweh of Genesis is described as being unable to do certain things, as lacking certain items of knowledge, and as making mistakes. Shall not the judge of all the Earth do right?

As Genesis presents the story, the relevant question doesn't seem to be 'Does my moral obligation to obey God outweigh my moral obligation to protect my son?' Nor is it 'Does my confidence in my moral intuitions outweigh my confidence in God's moral intuitions plus my understanding of God's commands?' Rather, the question is: 'Do I care more about obeying God than about my most beloved possession?' Notice there's nothing moral at stake here at all; it's purely a question of weighing loyalties and desires, of weighing the amount I trust God's promises and respect God's authority against the amount of utility (love, happiness) I assign to my son.

The moral rights of the son, and the duties of the father, are not on the table; what's at issue is whether Abraham's such a good soldier-servant that he's willing to give up his most cherished possessions (which just happen to be sentient persons). Replace 'God' with 'Satan' and you get the same fealty calculation on Abraham's part, since God's authority, power, and honesty, not his beneficence, are what Abraham has faith in.

Comment author: Alejandro1 10 December 2012 09:48:03PM 2 points [-]

The problem has the same structure for MixedNuts' analogy of the FAI replacing the Voice. Suppose you program the AI to compute explicitly the logical structure "morality" that EY is talking about, and it tells you to kill a child. You could think you made a mistake in the program (analogous to rejecting your 3), or that you are misunderstanding the AI or hallucinating it (rejecting 2). And in fact for most conjunctions of reasonable empirical assumptions, it would be more rational to take any of these options than to go ahead and kill the child.

Likewise, sensible religionists agree that if someone hears voices in their head telling them to kill children, they shouldn't do it. Some of them might say, however, that Abraham's position was unique, that he had especially good (unspecified) reasons to accept 2 and 3, and that for him killing the child was the right decision. In the same way, maybe an AI programmer with very strong evidence for the analogues of 2 and 3 should go ahead and kill the child. (What if the AI has computed that the child will grow up to be Hitler?)

Comment author: RobbBB 10 December 2012 10:13:44PM *  2 points [-]

A few religious thinkers (Kierkegaard) don't think Abraham's position was completely unique, and do think we should obey certain Voices without adequate evidence for 4, perhaps even without adequate evidence for 3. But these are outlier theories, and certainly don't reflect the intuitions of most religious believers, who pay more lip service to belief-in-belief than actual service-service to belief-in-belief.

I think an analogous AI set-up would be:

  1. Killing my son is immoral.
  2. The monitor reads 'Kill your son.'
  3. The monitor's display perfectly reflects the decisions of the AI I programmed.
  4. I successfully programmed the AI to be perfectly moral.

What you call rejecting 3 is closer to rejecting 4, since it concerns my confidence that the AI is moral, not my confidence that the AI I programmed is the same as the entity outputting 'Kill your son.'

Comment author: MugaSofer 11 December 2012 09:49:40AM *  2 points [-]

I can't speak for Jewish elementary school, but surely believing PA (even when, intuitively, the result seems flatly wrong or nonsensical) would be a good example to hold up before students of mathematics? The Monty Hall problem seems like a good illustration of this.

Comment author: lavalamp 10 December 2012 05:06:46PM 7 points [-]

But that's just choosing the other horn of the dilemma, no? I.e., "god commands things because they are moral."

And of course the atheist response to that is,

Oh! So you admit that there's some way of classifying actions as "moral" or "immoral" without reference to a deity? And therefore I really can be moral and yet not subscribe to your deity?

Not that anyone here didn't already know this, of course.

The Wikipedia page lists some theistic responses that purport to evade both horns, but I don't recall being convinced that they were even coherent when I last looked at it.

Comment author: MixedNuts 10 December 2012 06:03:34PM 13 points [-]

It does choose a horn, but it's the other one, "things are moral because G-d commands them". It just denies the connotation that there exists a possible Counterfactual!G-d which could decide that Real!evil things are Counterfactual!good; in all possible worlds, G-d either wants the same thing or is something different mistakenly called "G-d". (Yeah, there's a possible world where we're ruled by an entity who pretends to be G-d and so we believe that we should kill babies. And there's a possible world where you're hallucinating this conversation.)

Or you could say it claims equivalence. Is this road sign a triangle because it has three sides, or does it have three sides because it is a triangle? If you pick the latter, does that mean that if triangles had four sides, the sign would change shape to have four sides? If you pick the former, does that mean that I can have three sides without being a triangle? (I don't think this one is quite fair, because we can imagine a powerful creator who wants immoral things.)

Three possible responses to the atheist response:

  • Sure. Not believing has bad consequences - you're wrong as a matter of fact, you don't get special believer rewards, you make G-d sad - but being immoral isn't necessarily one.

  • Well, you can be moral about most things, but worshiping my deity of choice is part of morality, so you can't be completely moral.

  • You could in theory, but how would you discover morality? Humans know what is moral because G-d told us (mostly in so many words, but also by hardwiring some intuitions). You can base your morality on philosophical reasoning, but your philosophy comes from social attitudes, which come from religious morality. Deviations introduced in the process are errors. All you're doing is scratching off the "made in Heaven" label from your ethics.

Comment author: Eliezer_Yudkowsky 10 December 2012 07:13:26PM 16 points [-]

Obvious further atheist reply to the denial of counterfactuals: If God's desires don't vary across possible worlds there exists a logical abstraction which only describes the structure of the desires and doesn't make mention of God, just like if multiplication-of-apples doesn't vary across possible worlds, we can strip out the apples and talk about the multiplication.

Comment author: dspeyer 10 December 2012 09:08:59PM 6 points [-]

a logical abstraction which only describes the structure of the desires and doesn't make mention of God, just like if multiplication-of-apples doesn't vary across possible worlds, we can strip out the apples and talk about the multiplication.

I think that's pretty close to what a lot of religious people actually believe in. They just like the one-syllable description.

Comment author: Alejandro1 10 December 2012 07:34:13PM *  1 point [-]

The obvious theist counter-reply is that the structure of God's desires is logically related to the essence of God, in such a way that you can't have the goodness without the God any more than you can have the God without the goodness; they are part of the same logical structure. (Aquinas: "God is by essence goodness itself")

I think this is a self-consistent metaethics as metaethics goes. The problem is that God is at the same time part of the realm of abstract logical structures like "goodness", and a concrete being that causes the world to exist, causes miracles, has desires, etc. The fault is not in the metaethics, it is in the confused metaphysics that allows for a concrete being to "exist essentially" as part of its logical structure.

ETA: of course, you could say the metaethics is self-consistent but also false, because it locates "goodness" outside ourselves (our extrapolated desires), which is where it really is. But for the Thomist I am currently emulating, "our extrapolated desires" sounds a lot like "our final cause, the perfection to which we tend by our essence", and God is the ultimate final cause. The problem is again the metaphysics (in this case, using final causes without realizing they are a mind projection fallacy), not the metaethics.

Comment author: Eugine_Nier 11 December 2012 02:32:22AM 3 points [-]

The problem is that God is at the same time part of the realm of abstract logical structures like "goodness", and a concrete being that causes the world to exist, causes miracles, has desires, etc.

As I explained here, it's perfectly reasonable to describe mathematical abstractions as causes.

Comment author: DaFranker 10 December 2012 07:46:03PM 4 points [-]

My mind reduces all of this to "God = Confusion". What am I missing?

Comment author: lavalamp 10 December 2012 07:20:30PM 3 points [-]

It seems like you're claiming an identity relationship between god and morality, and I find myself very confused as to what that could possibly mean.

I mean, it's sort of like I just encountered someone claiming that "friendship" and "dolphins" are really the same thing. One or both of us must be very confused about what the labels "friendship" and/or "dolphins" signify, or what this idea of "sameness" is, or something else...

Comment author: MixedNuts 10 December 2012 07:55:43PM 6 points [-]

See Alejandro's comment. Define G-d as "that which creates morality, and also lives in the sky and has superpowers". If you insist on the view of morality as a fixed logical abstraction, that would be a set of axioms. (Modus ponens has the Buddha-nature!) Then all you have to do is settle the factual question of whether the short-tempered creator who ordered you to genocide your neighbors embodies this set of axioms. If not, well, you live in a weird hybrid universe where G-d intervened to give you some sense of morality but is weaker than whichever Cthulhu or amoral physical law made and rules your world. Sorry.

Comment author: shminux 10 December 2012 08:29:02PM 4 points [-]

Out of curiosity, why do you write G-d, not God? The original injunction against taking God's name in vain applied to the name in the Old Testament, which is usually mangled in modern English as Jehovah, not to the mangled Germanic word meaning "idol".

Comment author: MixedNuts 10 December 2012 08:38:09PM *  7 points [-]

People who care about that kind of thing usually think it counts as a Name, but don't think there's anything wrong with typing it (though it's still best avoided in case someone prints out the page). Trying to write it makes me squirm horribly and if I absolutely need the whole word I'll copy-paste it. I can totally write small-g "god" though, to talk about deities in general (or as a polite cuss). I feel absolutely silly about it, I'm an atheist and I'm not even Jewish (though I do have a weird cultural-appropriatey obsession). Oh well, everyone has weird phobias.

Comment author: shminux 10 December 2012 09:41:05PM 1 point [-]

Trying to write it makes me squirm horribly and if I absolutely need the whole word I'll copy-paste it.

How interesting. Phobias are a form of alief, which makes this oddly relevant to my recent post.

Comment author: MixedNuts 10 December 2012 10:17:41PM 3 points [-]

I don't think it's quite the same. I have these sinking moments of "Whew, thank... wait, thank nothing" and "Oh please... crap, nobody's listening", but here I don't feel like I'm being disrespectful to Sky Dude (and if I cared I wouldn't call him Sky Dude). The emotion is clearly associated with the word, and doesn't go "whoops, looks like I have no referent" upon reflection.

What seems to be behind it is a feeling that if I did that, I would be practicing my religion wrong, and I like my religion. It's a jumble of things that give me an oxytocin kick, mostly consciously picked up, but it grows organically and sometimes plucks new dogma out of the environment. ("From now on Ruby Tuesday counts as religious music. Any questions?") I can't easily shed a part, it has to stop feeling sacred of its own accord.

Comment author: kodos96 20 December 2012 05:59:39AM *  0 points [-]

Thought experiment: suppose I were to tell you that every time I see you write out "G-d", I responded by writing "God", or perhaps even "YHWH", on a piece of paper, 10 times. Would that knowledge alter your behavior? How about if I instead (or additionally) spoke it aloud?

Edit: downvote explanation requested.

Comment author: MixedNuts 20 December 2012 09:54:12AM 2 points [-]

It feels exactly equivalent to telling me that every time you see me turn down licorice, you'll eat ten wheels of it. It would bother me slightly if you normally avoided taking the Name in vain (and you didn't, like, consider it a sacred duty to annoy me), but not to the point I'd change my behavior.

Which I didn't know, but makes sense in hindsight (as hindsight is wont to do); sacredness is a hobby, and I might be miffed at fellow enthusiasts Doing It Wrong, but not at people who prefer fishing or something.

Comment author: Eugine_Nier 21 December 2012 03:27:59AM 1 point [-]

1) I don't believe you.

2) I don't respond to blackmail.

Comment author: wedrifid 21 December 2012 05:38:55AM 2 points [-]

My usual response to reading 2) is to think 1).

I wonder if you really wouldn't respond to blackmail if the stakes were high and you'd actually lose something critical. "I don't respond to blackmail" usually means "I claim social dominance in this conflict".

Comment author: kodos96 21 December 2012 05:47:36AM 1 point [-]

What???!!! Are you suggesting that I'm actually planning on conducting the proposed thought experiment? Actually, physically, getting a piece of paper and writing out the words in question? I assure you, this is not the case. I don't even have any blank paper in my home - this is the 21st century after all.

This is a thought experiment I'm proposing, in order to help me better understand MixedNuts' mental model. No different from proposing a thought experiment involving dust motes and eternal torture. Are you saying that Eliezer should be punished for considering such hypothetical situations, a trillion times over?

Comment author: shminux 21 December 2012 06:41:27AM 1 point [-]

Why should s/he care about what you choose to do?

Comment author: kodos96 21 December 2012 06:42:54AM 2 points [-]

I don't know. That's why I asked.

Comment author: Nisan 11 December 2012 06:52:36AM *  0 points [-]

Oh well, everyone has weird phobias.

You can eliminate inconvenient phobias with flooding. I can personally recommend sacrilege.

EDIT: It sounds like maybe it's not just a phobia.

Comment author: Irgy 11 December 2012 04:14:49AM 1 point [-]

This is a classic case of fighting the wrong battle against theism. The classic theist defence is to define away every meaningful aspect of God, piece by piece, until the question of God's existence is about as meaningful as asking "do you believe in the axiom of choice?". Then, after you've failed to disprove their now untestable (and therefore meaningless) theory, they consider themselves victorious and get back to reading the Bible. It's this part that's the weak link. The idea that the Bible tells us something about God (and therefore by extension morality and truth) is a testable and debatable hypothesis, whereas God's existence can be defined away into something that is not.

People can say "morality is God's will" all they like and I'll just tell them "butterflies are schmetterlinge". It's when they say "morality is in the bible" that you can start asking some pertinent questions. To mix my metaphors, I'll start believing when someone actually physically breaks a ball into pieces and reconstructs them into two balls of the same original size, but until I really see something like that actually happen it's all just navel gazing.

Comment author: Wei_Dai 18 December 2012 05:18:08PM 7 points [-]

Here's my understanding of the post:

Consider two types of possible FAI designs. A Type 1 FAI has its values coded as a logical function from the time it's turned on, either a standard utility function, or all the information needed to run a simulation of a human that is eventually supposed to provide such a function, or something like that. A Type 2 FAI tries to learn its values from its inputs. For example it might be programmed to seek out a nearby human, scan their brain, and then try to extract a utility function from the scan, going to a controlled shutdown if it encounters any errors in this process. A human is more like a Type 1 FAI than a Type 2 FAI so it doesn't matter that there is no God/Stone Tablet out in the universe that we can extract morality from.
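(A toy sketch of how I'm picturing the two designs - all names and details below are my own illustration, not anything from the post:)

```python
# Purely illustrative; the real proposals are far richer than this.

class Type1FAI:
    """Values arrive as fixed logical content, baked in before startup."""
    def __init__(self, utility):
        self.utility = utility                    # supplied at build time

    def choose(self, options):
        return max(options, key=self.utility)


class Type2FAI:
    """Values are learned from the physical world after startup."""
    def __init__(self, scan, extract_utility):
        self.scan = scan                          # e.g. scan a nearby human's brain
        self.extract_utility = extract_utility    # turn the scan into a utility function
        self.utility = None

    def boot(self, nearby_human):
        try:
            self.utility = self.extract_utility(self.scan(nearby_human))
        except ValueError:
            # Bail out on any error in the value-learning process.
            raise SystemExit("value extraction failed: controlled shutdown")

    def choose(self, options):
        if self.utility is None:
            raise SystemExit("no values loaded: controlled shutdown")
        return max(options, key=self.utility)
```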

If this is fair, I have two objections:

  1. When humans are sufficiently young they are surely more like a Type 2 FAI than a Type 1 FAI. We're obviously not born with Frankena's list of terminal values. Maybe one can argue that an adult human is like a Type 2 FAI that has completed its value learning process and has "locked down" its utility function and won't change its values or go into shutdown even if it subsequently learns that the original brain scan was actually full of errors. But this is far from clear, to say the least.

  2. The difference between Type 1 FAI and Type 2 FAI (which is my understanding of the distinction the OP's trying to draw between "logical" and "physical") doesn't seem to get at the heart of what separates "morality" from "things that are not morality". If meta-ethics is supposed to make me less confused about morality, I just can't call this a "solution".

Comment author: Qiaochu_Yuan 22 December 2012 10:31:48AM *  3 points [-]

A Type 2 FAI gets its notion of what morality is based on properties of the physical universe, namely properties of humans in the physical universe. But even if counterfactually there were no humans in the physical universe, or even if counterfactually Omega modified the contents of all human brains in the physical universe so that they optimize for paperclips, that wouldn't change what actual-me means when actual-me says "I want an FAI to behave morally" even if it might change what counterfactual-me means when counterfactual-me says that.

Comment author: homunq 19 December 2012 10:18:03PM *  1 point [-]

Individual humans are plausibly Type 2 FAIs. But societies of evolved, intelligent beings, operating as they do within the constraints of logic and evolution, are arguably more Type 1. In the terms of Eliezer's BabyKiller/HappyHappy fic, babykilling-justice is obviously a flawed copy of real-justice, and so the babykillers could (with difficulty) grow out of babykilling, and you could perhaps raise a young happyhappy to respect babykilling, but the happyhappy society as a whole could never grow into babykilling.

Comment author: Alicorn 10 December 2012 07:05:11AM 26 points [-]

If we reprogrammed you to count paperclips instead, it wouldn't feel like different things having the same kind of motivation behind it. It wouldn't feel like doing-what's-right for a different guess about what's right. It would feel like doing-what-leads-to-paperclips.

Um, how do you know?

Comment author: chaosmosis 10 December 2012 07:09:49AM 5 points [-]

It would depend on exactly what we reprogrammed within you, I expect.

Comment author: Alicorn 10 December 2012 07:13:00AM 5 points [-]

Exactly. I mean, you could probably make it have its own quale, but you could also make it not, and I don't see why that would be in question as long as we're postulating brain-reprogramming powers.

Comment author: Eliezer_Yudkowsky 10 December 2012 07:42:34AM 7 points [-]

Assume the subject of reprogramming is an existing human being, otherwise minimally altered by this reprogramming, i.e., we don't do anything that isn't necessary to switch their motivation to paperclips. So unless you do something gratuitously non-minimal like moving the whole decision-action system out of the range of introspective modeling, or cutting way down on the detail level of introspective modeling, or changing the empathic architecture for modeling hypothetical selves, the new person will experience themselves as having ineffable 'qualia' associated with the motivation to produce paperclips.

The only way to make it seem to them like their motivational quales hadn't changed over time would be to mess with the encoding of their previous memories of motivation, presumably in a structure-destroying way, since the stored data and their introspectively exposed surfaces will not be naturally isomorphic. If you carry out the change to paperclip-motivation in the obvious way, cognitive comparisons of the retrieved memories to current thoughts will return 'unequal ineffable quales', and if the memories are visualized in different modalities from current thoughts, 'incomparable ineffable quales'.

Doing-what-leads-to-paperclips will also be a much simpler 'quale', both from the outside perspective looking at the complexity of cognitive data, and in terms of the internal experience of complexity - unless you pack an awful lot of detail into the question of what constitutes a more preferred paperclip. Otherwise, compared to the old days when you thought about justice and fairness, introspection will show that less questioning and uncertainty is involved, and that there are fewer points of variation among the motivational thought-quales being considered.

I suppose you could put in some extra work to make the previous motivations map in cognitively comparable ways along as many joints as possible, and try to edit previous memories without destroying their structure so that they can be visualized in a least common modality with current experiences. But even if you did, memories of the previous quales for rightness-motivation would appear as different in retrospect when compared to current quales for paperclip-motivation as a memory of a 3D greyscale forest landscape vs. a current experience of a 2D red-and-green fractal, even if they're both articulated in the visual sensory modality and your modal workspace allows you to search for, focus on, and compare commonly 'experienced' shapes between them.

Comment author: Oligopsony 10 December 2012 04:45:29PM *  24 points [-]

I think you and Alicorn may be talking past each other somewhat.

Throughout my life, it seems that what I morally value has varied more than what rightness feels like - just as it seems that what I consider status-raising has changed more than what rising in status feels like, and what I find physically pleasurable has changed more than what physical pleasures feel like. It's possible that the things my whole person is optimizing for have not changed at all, that my subjective feelings are a direct reflection of this, and that my evaluation of a change of content is merely a change in my causal model of the production of the desiderata (I thought voting for Smith would lower unemployment, but now I think voting for Jones would, etc.) But it seems more plausible to me that

1) the whole me is optimizing for various things, and these things change over time,
2) that the conscious me is getting information inputs which it can group together by family resemblance, and which can reinforce or disincentivize its behavior.

Imagine a ship which is governed by an anarchic assembly belowdecks and captained by an employee of theirs whom they motivate through in-kind bonuses. So the assembly at one moment might be looking for buried treasure, which they think is in such-and-such a place, and so they send her baskets of fresh apples when she's steering in that direction and baskets of stinky rotten apples when she's steering in the wrong one. For other goals (refueling, not crashing into reefs) they send her excellent or tedious movies and gorgeous or ugly cabana boys. The captain doesn't even have direct access to what the apples or whatever are motivating her to do, although she can piece it together. She might even start thinking of apples as irreducibly connected to treasure. But if the assembly decided that they wanted to look for ports of call instead of treasure, I don't see why in principle they couldn't start sending her apples in order to do so. And if they did, I think her first response would be, if she was verbally asked, that the treasure - or whatever the doubloons constituting the treasure ultimately represent in terms of the desiderata of the assembly - had moved to the ports of call. This might be a correct inference - perhaps the assembly wants the treasure for money and now they think that comes better from heading to ports of call - but it hardly seems to be a necessarily correct one.

If I met two vampires, and one said his desire to drink blood was mediated through hunger (and that he no longer felt hunger for food, or lust) and another said her desire to drink blood was mediated through lust (and that she no longer felt lust for sex, or hunger) then I do think - presuming they were both once human, experiencing lust and hunger like me - they've told me something that allows me to distinguish their experiences from one another, even though they both desire blood and not food or sex.

They may or may not be able to explain to what it is like to be a bat.

Unless I'm inserting a further layer of misunderstanding, your position seems to be curiously disjunctivist. I or you or Alicorn or all of us may be making bad inferences in taking "feels like" to mean "reminds one of the sort of experience that brings to mind..." ("I feel like I got mauled by a bear," says someone who was not just, and maybe never, mauled by a bear) or "constituting an experience of" ("what an algorithm feels like from the inside") when the other is intended. This seems to be a pretty easy elision to make - consider all the philosophers who say things like "well, it feels like we have libertarian free will..."

Comment author: Alicorn 10 December 2012 07:47:09AM *  15 points [-]

This comment expands, with another layer of granularity, on how you'd go about reprogramming someone in this way, which is certainly interesting on its own merits, but it doesn't strongly support your assertion about what it would feel like to be that someone. What makes you think this is how qualia work? Have you been performing sinister experiments in your basement? Do you have magic counterfactual-luminosity-powers?

Comment author: RobbBB 10 December 2012 07:17:17PM *  14 points [-]

I think Eliezer is simply suggesting that qualia don't in fact exist in a vacuum. Green feels the way it does partly because it's the color of chlorophyll. In a universe where plants had picked a different color for chlorophyll (melanophyll, say), with everything else (per impossibile) held constant, we would associate an at least slightly different quale with green and with black, because part of how colors feel is that they subtly remind us of the things that are most often colored that way. Similarly, part of how 'goodness' feels is that it imperceptibly reminds us of the extension of good; if that extension were dramatically different, then the feeling would (barring any radical redesigns of how associative thought works) be different too. In a universe where the smallest birds were ten feet tall, thinking about 'birdiness' would involve a different quale for the same reason.

Comment author: khafra 10 December 2012 03:53:34PM 5 points [-]

It sounds to me like you don't think the answer had anything to do with the question. But to think that, you'd pretty much have to discard both the functionalist and physicalist theories of mind, and go full dualist/neutral monist; wouldn't you?

Comment author: Eliezer_Yudkowsky 10 December 2012 07:05:36PM 1 point [-]

I think I'll go with this as my reply - "Well, imagine that you lived in a monist universe - things would pretty much have to work that way, wouldn't they?"

Comment author: Nick_Tarleton 10 December 2012 06:40:10PM *  2 points [-]

Possibly (this is total speculation) Eliezer is talking about the feeling of one's entire motivational system (or some large part of it), while you're talking about the feeling of some much narrower system that you identify as computing morality; so his conception of a Clippified human wouldn't share your terminal-ish drives to eat tasty food, be near friends, etc., and the qualia that correspond to wanting those things.

Comment author: Eliezer_Yudkowsky 10 December 2012 07:23:36PM 6 points [-]

The Clippified human categorizes foods into a similar metric of similarity - still believes that fish tastes more like steak than like chocolate - but of course is not motivated to eat except insofar as staying alive helps to make more paperclips. They have taste, but not tastiness. Actually that might make a surprisingly good metaphor for a lot of the difficulty that some people have with comprehending how Clippy can understand your pain and not care - maybe I'll try it on the other end of that Facebook conversation.

Comment author: DaFranker 10 December 2012 07:44:50PM 6 points [-]

The metaphor seems like it could lose most of its effectiveness on people who have never applied the outside view to how taste and tastiness feel from inside - they've never realized that chocolate tastes good because their brain fires "good taste" when it perceives the experience "chocolate taste". The obvious resulting predictions about cognitive dissonance (from "tastes bad for others") match my observations, so I suspect this would be common among non-rationalists. If the Facebook conversation you mention is with people who haven't crossed that inferential gap yet, it might not prove that useful.

Comment author: Armok_GoB 10 December 2012 06:40:10PM *  5 points [-]

I wouldn't be all that surprised if the easiest way to get a human maximizing paperclips was to make them believe paperclips had epiphenomenal consciousnesses experiencing astronomical amounts of pleasure.

edit: or you could just give them a false memory of god telling them to do it.

Comment author: FeepingCreature 15 December 2012 01:48:34AM 3 points [-]

I wouldn't be all that surprised if the easiest way to get a human maximizing paperclips was to make them believe paperclips had epiphenomenal consciousnesses

The Enrichment Center would like to remind you that the Paperclip cannot speak. In the event that the Paperclip does speak, the Enrichment Center urges you to disregard its advice.

Comment author: Vaniver 10 December 2012 08:51:44PM 9 points [-]

Consider Bob. Bob, like most unreflective people, settles many moral questions by "am I disgusted by it?" Bob is disgusted by, among other things, feces, rotten fruit, corpses, maggots, and men kissing men. Internally, it feels to Bob like the disgust he feels at one of those stimuli is the same as the disgust he feels at the other stimuli, and brain scans show that they all activate the insula in basically the same way.

Bob goes through aversion therapy (or some other method) and eventually his insula no longer activates when he sees men kissing men.

When Bob remembers his previous reaction to that stimulus, I imagine he would remember being disgusted, but not be disgusted when he remembers the stimulus. His positions on, say, same-sex marriage or the acceptability of gay relationships have changed, and he is aware that they have changed.

Do you think this example agrees with your account? If/where it disagrees, why do you prefer your account?

Comment author: RobbBB 10 December 2012 09:06:48PM *  8 points [-]

I think this is really a sorites problem. If you change what's delicious only slightly, then deliciousness itself seems to be unaltered. But if you change it radically — say, if circuits similar to your old gustatory ones now trigger when and only when you see a bright light — then it seems plausible that the experience itself will be at least somewhat changed, because 'how things feel' is affected by our whole web of perceptual and conceptual associations. There isn't necessarily any sharp line where a change in deliciousness itself suddenly becomes perceptible; but it's nevertheless the case that the overall extension of 'delicious' (like 'disgusting' and 'moral') has some effect on how we experience deliciousness. E.g., deliciousness feels more foodish than lightish.

Comment author: Vaniver 10 December 2012 09:21:22PM 6 points [-]

it seems plausible that the experience itself will be at least somewhat changed, because 'how things feel' is affected by our whole web of perceptual and conceptual associations.

When I look at the problem introspectively, I can see that as a sensible guess. It doesn't seem like a sensible guess when I look at it from a neurological perspective. If the activation of the insula is disgust, then the claim that outputs of the insula will have a different introspective flavor when you rewire the inputs of the insula seems doubtful. Sure, it could be the case, but why?

When we hypnotize people to make them disgusted by benign things, I haven't seen any mention that the disgust has a different introspective flavor, and people seem to reason about that disgust in the exact same way that they reason about the disgust they had before.

This seems like the claim that rewiring yourself leads to something like synesthesia, and that just seems like an odd and unsupported claim to me.

Comment author: RobbBB 10 December 2012 09:56:23PM *  4 points [-]

If the activation of the insula is disgust

Certain patterns of behavior at the insula correlate with disgust. But we don't know whether they're sufficient for disgust, nor do we know which modifications within or outside of the insula change the conscious character of disgust. There are lots of problems with identity claims at this stage, so I'll just raise one: For all we know, activation patterns in a given brain region correlate with disgust because disgust is experienced when that brain region inhibits another part of the brain; an experience could consist, in context, in the absence of a certain kind of brain activity.

When we hypnotize people to make them disgusted by benign things, I haven't seen any mention that the disgust has a different introspective flavor

Hypnosis data is especially difficult to evaluate, because it isn't clear (a) how reliable people's self-reports about introspection are while under hypnosis; nor (b) how reliable people's memories-of-hypnosis are afterward. Some 'dissociative' people even give contradictory phenomenological reports while under hypnosis.

That said, if you know of any studies suggesting that the disgust doesn't have at all a different character, I'd be very interested to see them!

If you think my claim isn't modest and fairly obvious, then it might be that you aren't understanding my claim. Redness feels at least a little bit bloodish. Greenness feels at least a little bit foresty. If we made a clone who sees evergreen forests as everred and blood as green, then their experience of greenness and redness would be partly the same, but it wouldn't be completely the same, because that overtone of bloodiness would remain in the background of a variety of green experiences, and that woodsy overtone would remain in the background of a variety of red experiences.

Comment author: adamisom 11 December 2012 06:32:19PM *  3 points [-]

I just wanted to tell everyone that it is great fun to read this in the voice of that voice actor for the Enzyte commercial :)

Comment author: MugaSofer 10 December 2012 04:59:12PM *  3 points [-]

Wouldn't it be easier to have the programmee remember themselves as having misunderstood morality - like a reformed racist who previously preferred options that harmed minorities? I know that when I gain more insight into my ethics, I remember making decisions that, in retrospect, are incomprehensible (unless I deliberately keep in mind how I thought I should act).

Comment author: Eugine_Nier 11 December 2012 02:14:47AM 1 point [-]

Wouldn't it be easier to have the programmee remember themselves as having misunderstood morality

That depends on the details of how the human brain stores goals and memories.

Comment author: MugaSofer 11 December 2012 09:09:35AM 1 point [-]

Cached thoughts regularly supersede actual moral thinking, like all forms of thinking, and I am capable of remembering this experience. Am I misunderstanding your comment?

Comment author: Eugine_Nier 13 December 2012 04:42:59AM 1 point [-]

My point is that in order to "fully reprogram" someone it is also necessary to clear their "moral cache" at the very least.

Comment author: MugaSofer 13 December 2012 09:06:20AM 1 point [-]

Well ... is it? Would you notice if your morals changed when you weren't looking?

Comment author: Eugine_Nier 14 December 2012 03:05:51AM 1 point [-]

I probably would, but then again I'm in the habit of comparing the output of my moral intuitions with stored earlier versions of that output.

Comment author: JoachimSchipper 13 December 2012 07:56:39AM 2 points [-]

I have no problem with this passage. But it does not seem obviously impossible to create a device that stimulates that-which-feels-rightness proportionally to (its estimate of) the clippiness of the universe - it's just a very peculiar kind of wireheading.

As you point out, it'd be obvious, on reflection, that one's sense of rightness has changed; but that doesn't necessarily make it a different quale, any more than having your eyes opened to the suffering of (group) changes your experience of (in)justice qua (in)justice.

Comment author: Gust 03 January 2013 02:14:34PM *  1 point [-]

Although I think your point here is plausible, I don't think it fits in a post where you are talking about the logicalness of morality. This qualia problem is physical; whether your feeling changes when the structure of some part of your decision system changes depends on your implementation.

Maybe your background understanding of neurology is enough for you to be somewhat confident stating this feeling/logical-function relation for humans. But mine is not and, although I could separate your metaethical explanations from your physical claims when reading the post, I think it would be better off without the latter.

Comment author: handoflixue 10 December 2012 11:04:40PM 4 points [-]

Speaking from personal experience, I can say that he's right.

Explaining how I know this, much less sharing the experience, is more difficult.

The simplest idea I can present is that you probably have multiple utility functions. If you're buying apples, you'll evaluate whether you like that type of apple, what the quality of the apple is, and how good the price is. For me, at least, these all FEEL different - a bruised apple doesn't "feel" overpriced the way a $5 apple at the airport does. Even disliking soft apples feels very different from recognizing a bruised apple, even though they both also go into a larger basket of "no good".

What's more, I can pick apples based on someone ELSE'S utility function, and actually often shop with my roommate's function in mind (she likes apples a lot more than me, but is also much pickier, as it happens). This feels different from using my own utility function.


The other side of this is that I would expect my brain to NOTICE its actual goals. If my goal is to make paperclips, I will think "I should do this because it makes paperclips", instead of "I should do this because it makes people happy". My brain doesn't have a generic "I should do this" emotion, as near as I can tell - it just has ways of signalling that an activity will accomplish my goals.

Thus, it seems reasonable to conclude that my feelings are more a combination of activity + outcome, not some raw platonic ideal. While sex, hiking, and a nice meal all make me "happy", they still feel completely different - I just lump them into a larger category of "happiness" for some reason.

I'd strongly suspect you can add make-more-paperclips to that emotional category, but I see absolutely no reason you could make me treat it the same as a nice dinner, because that wouldn't even make sense.
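(A toy sketch of the "several distinct evaluations, one shared basket" point - the scoring rules below are invented:)

```python
# Invented scoring rules, just to show several separate evaluations
# feeding one overall "no good" / "buy it" verdict.

def like_variety(apple):          # do I like this type of apple at all?
    return apple["variety"] in {"fuji", "honeycrisp"}

def decent_quality(apple):        # bruised or mushy apples are out
    return not apple["bruised"]

def fair_price(apple):            # the $5 airport apple fails here
    return apple["price"] <= 1.50

def my_verdict(apple):
    # Each check feels different from the inside, but they all land
    # in the same larger basket of "buy it or not".
    return like_variety(apple) and decent_quality(apple) and fair_price(apple)

def roommates_verdict(apple):
    # Shopping with someone else's (pickier) function in mind.
    return decent_quality(apple) and fair_price(apple) and apple["variety"] == "honeycrisp"

apple = {"variety": "fuji", "bruised": False, "price": 1.20}
print(my_verdict(apple), roommates_verdict(apple))  # True False
```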

Comment author: Vaniver 11 December 2012 08:08:31PM 7 points [-]

Speaking from personal experience, I can say that he's right.

So, you introspect the way that he introspects. Do all humans? Would all humans need to introspect that way for it to do the work that he wants it to do?

Comment author: handoflixue 11 December 2012 09:36:46PM 5 points [-]

Ooh, good call, thank you. I suppose it might be akin to visualization, where it actually varies from person to person. Does anyone here on LessWrong have conflicting anecdotes, though? Does anyone disagree with what I said? If not, it seems like a safe generalization for now, but it's still useful to remember I'm generalizing from one example :)

Remembering that other people have genuinely alien minds is surprisingly tricky.

Comment author: Alicorn 11 December 2012 10:34:49PM 8 points [-]

The other side of this is that I would expect my brain to NOTICE its actual goals. If my goal is to make paperclips, I will think "I should do this because it makes paperclips", instead of "I should do this because it makes people happy". My brain doesn't have a generic "I should do this" emotion, as near as I can tell - it just has ways of signalling that an activity will accomplish my goals.

Iron deficiency feels like wanting ice. For clever, verbal reasons. Not being iron deficient doesn't feel like anything. My brain did not notice that it was trying to get iron - it didn't even notice it was trying to get ice, it made up reasons according to which ice was an instrumental value for some terminal goal or other.

Comment author: shminux 11 December 2012 11:36:28PM 4 points [-]

Remembering that other people have genuinely alien minds is surprisingly tricky.

Other people? I find my own mind quite alien below the thin layer accessible to my introspection. Heck, most of the time I cannot even tell if my introspection lies to me.

Comment author: asparisi 12 December 2012 05:32:54PM 1 point [-]

I think I have a different introspection here.

When I have a feeling such as 'doing-whats-right' there is a positive emotional response associated with it. Immediately I attach semantic content to that emotion: I identify it as being produced by the 'doing-whats-right' emotion. How do I do this? I suspect that my brain has done the work to figure out that emotional response X is associated with behavior Y, and just does the work quickly.

But this is malleable. Over time, the emotional response associated with an act can change, and this does not necessarily indicate a change in semantic content. I can, for example, give to a charity that I am not convinced is good, and I will still often get the 'doing-whats-right' emotion even though the semantic content isn't really there. I can also find new things I value, and occasionally I will acknowledge that I value something before I get positive emotional reinforcement. So in my experience, they aren't identical.

I strongly suspect that if you reprogrammed my brain to value counting paperclips, it would feel the same as doing what is right. At the very least, this would not be inconsistent. I might learn to attach 'paperclippy' instead of 'good' to that emotional state, but it would feel the same.

Comment author: MugaSofer 12 December 2012 11:11:05AM 1 point [-]

Remembering that other people have genuinely alien minds is surprisingly tricky.

... they do? For what values of "alien"?

Comment author: handoflixue 14 December 2012 06:59:00PM 1 point [-]

Because I'm not sure how else to capture a "scale of alien-ness":

I once wrote a sci-fi race that was a blind, deaf ooze, but extremely intelligent and very sensitive to tactile input. Over the years, and with the help of a few other people, I've gotten a fairly good feel for their mindset and how they approach the world.

There's a distinct subset of humans which I find vastly more puzzling than these guys.

Comment author: army1987 14 December 2012 10:21:10PM *  3 points [-]

From Humans in Funny Suits:

But the real problem is not shape, it is mind.  "Humans in funny suits" is a well-known term in literary science-fiction fandom, and it does not refer to something with four limbs that walks upright.  An angular creature of pure crystal is a "human in a funny suit" if she thinks remarkably like a human - especially a human of an English-speaking culture of the late-20th/early-21st century.

I don't watch a lot of ancient movies.  When I was watching the movie Psycho (1960) a few years back, I was taken aback by the cultural gap between the Americans on the screen and my America.  The buttoned-shirted characters of Psycho are considerably more alien than the vast majority of so-called "aliens" I encounter on TV or the silver screen.

Comment author: handoflixue 14 December 2012 10:30:48PM 1 point [-]

The race was explicitly designed to try and avoid "humans in funny suits", and have a culture that's probably more foreign than the 1960s. But I'm only 29, and haven't traveled outside of English-speaking countries, so take that with a dash of salt!

On a 0-10 scale, with myself at 0, humans in funny suits at 1, and the 1960s at 2, I'd rate my creation as a 4, and a subset of humanity exists in the 4-5 range. Around 5, I have trouble with the idea that there's coherent intelligent reasoning happening, because the process is just completely lost on me, and I don't think I'd be able to easily assign anything more than a 5, much less even speculate on what a 10 would look like.

Trying to give a specific answer to "how alien is it" is a lot harder than it seems! :)

Comment author: Eugine_Nier 16 December 2012 04:12:55AM 3 points [-]

The race was explicitly designed to try and avoid "humans in funny suits", and have a culture that's probably more foreign than the 1960s. But I'm only 29, and haven't traveled outside of English-speaking countries, so take that with a dash of salt!

Well, reading fiction (and non-fiction) for which English speakers of your generation weren't the target audience is a good way to start compensating.

Comment author: handoflixue 17 December 2012 09:14:50PM 2 points [-]

I've got a lot of exposure to "golden age" science fiction and fantasy, so going back a few decades isn't hard for me. I just don't get exposed to many other good sources. The "classics" seem to generally fail to capture that foreignness.

If you have recommendations, especially a broader method than just naming a couple authors, I'd love to hear it. Most of my favourite authors have a strong focus on foreign cultures, either exploring them or just having characters from diverse backgrounds.

Comment author: IlyaShpitser 14 December 2012 10:36:57PM *  3 points [-]

If I may make a recommendation, if you are concerned about "alien aliens", read a few things by Stanislaw Lem. The main theme of Lem's scifi, I would say, is alien minds, and failure of first contact. "Solaris" is his most famous work (but the adaptation with Clooney is predictably terrible).

Comment author: shminux 11 December 2012 11:50:08PM 2 points [-]

The other side of this is that I would expect my brain to NOTICE it's actual goals. If my goal is to make paperclips, I will think "I should do this because it makes paperclips", instead of "I should do this because it makes people happy".

Secondary goals often feel like primary. Breathing and quenching thirst are means of achieving the primary goal of survival (and procreation), yet they themselves feel like primary. Similarly, a paperclip maximizer may feel compelled to harvest iron without any awareness that it wants to do it in order to produce paperclips.

Comment author: Nornagest 12 December 2012 12:37:06AM *  5 points [-]

Survival and procreation aren't primary goals in any direct sense. We have urges that have been selected for because they contribute to inclusive genetic fitness, but at the implementation level they don't seem to be evaluated by their contributions to some sort of unitary probability-of-survival metric; similarly, some actions that do contribute greatly to inclusive genetic fitness (like donating eggs or sperm) are quite rare in practice and go almost wholly unrewarded by our biology. Because of this architecture, we end up with situations where we sate our psychological needs at the expense of the factors that originally selected for them: witness birth control or artificial sweeteners. This is basically the same point Eliezer was making here.

It might be meaningful to treat supergoals as intentional if we were discussing an AI, since in that case there would be a unifying intent behind each fitness metric that actually gets implemented, but even in that case I'd say it's more accurate to talk about the supergoal as a property not of the AI's mind but of its implementors. Humans, of course, don't have that excuse.

Comment author: handoflixue 14 December 2012 06:53:07PM 3 points [-]

Bull! I'm quite aware of why I eat, breathe, and drink. Why in the world would a paperclip maximizer not be aware of this?

Unless you assume Paperclippers are just rock-bottom stupid, I'd also expect them to eventually notice the correlation between mining iron, smelting it, and shaping it into a weird semi-spiral design... and the sudden rise in the number of paperclips in the world.

Comment author: shminux 14 December 2012 07:36:16PM *  1 point [-]

I'm not sure that awareness is needed for paperclip maximizing. For example, one might call fire a very good CO2 maximizer. Actually, I'm not even sure you can apply the word awareness to non-human-like optimizers.

Comment author: torekp 16 December 2012 12:15:29AM *  6 points [-]

Mainstream status:

EY's position seems to be highly similar to Frank Jackson's analytic descriptivism:

Frank Jackson and Philip Pettit (1995). According to their view of “analytic moral functionalism,” moral properties are reducible to “whatever plays their role in mature folk morality.” Jackson’s (1998) refinement of this position—which he calls “analytic descriptivism”—elaborates that the “mature folk” properties to which moral properties are reducible will be “descriptive predicates”

Which is a position neither popular nor particularly unpopular, but simply one of many contenders, as the mainstream goes.

Comment author: BerryPick6 16 December 2012 12:29:36AM *  2 points [-]

This similarity has been noted and discussed before. See http://lesswrong.com/lw/fgz/empirical_claims_preference_claims_and_attitude/7u3s

Comment author: Eliezer_Yudkowsky 16 December 2012 09:05:49PM 2 points [-]

I confirm (as I have previously) that Frank Jackson's work seems to me like the nearest known point in academic philosophy.

Comment author: nshepperd 10 December 2012 06:50:04AM *  6 points [-]

Well, I'm glad to see you're taking a second crack at an exposition of metaethics.

I wonder if it might be worth expounding more on the distinction between utterances (sentences and word-symbols), meaning-bearers (propositions and predicates) and languages (which map utterances to meaning-bearers). My limited experience seems to suggest that a lot of the confusion about metaethics comes from not getting, instinctively, that speakers use their actual language, and that a sentence like "X is better than Y", when uttered by a particular person, refers to some fixed proposition about X and Y that doesn't talk about the definition of the symbols "X", "Y" and "better" in the speaker's language (and for that matter doesn't talk about the definitions of "is" and "than").¹
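(A toy sketch of the utterance/proposition/language distinction I have in mind - the particular mappings below are invented placeholders:)

```python
# A "language" here is just a mapping from utterances (strings) to
# meaning-bearers (here, placeholder descriptions of propositions).

human_english = {
    "X is better than Y": "proposition: X contains more flourishing than Y",
}
clippy_speak = {
    "X is better than Y": "proposition: X contains more paperclips than Y",
}

def meaning(utterance, language):
    # The proposition picked out is fixed by the speaker's actual language;
    # it is a claim about X and Y, not about the definitions of the words
    # "X", "Y", or "better".
    return language[utterance]

print(meaning("X is better than Y", human_english))
print(meaning("X is better than Y", clippy_speak))
```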

But I don't really know. I find it hard to get into people's heads in this case.

¹ In general. It is, of course, possible that in some speaker's language "X" refers to something like the English language and "Y" refers to French, or that "better" refers to having more words for snow. But in general most things we say are not about language.

Comment author: JMiller 10 December 2012 06:35:07AM 6 points [-]

I am having difficulty understanding the model of 'physics+logic = reality.' Up until now I have understood that physics was reality, and that logic is the way to describe and think about what follows from it. Would someone please post a link to the original article (in this sequence or not) which explains the position? Thank you.

Comment author: Eliezer_Yudkowsky 10 December 2012 06:41:19AM 10 points [-]
Comment author: JMiller 10 December 2012 06:43:56AM 2 points [-]

Thank you.

Comment author: Manfred 10 December 2012 03:24:20PM 12 points [-]

Yay, I think we've finished the prerequisites to prerequisites, and started the prerequisites!

Comment author: Error 12 December 2012 04:28:49PM 4 points [-]

I love the word "Unclipperific."

I follow the argument here, but I'm still mulling over it and I think by the time I figure out whether I agree the conversation will be over. Something disconcerting struck me on reading it, though: I think I could only follow it having already read and understood the Metaethics sequence. (at least, I think I understood it correctly; at least one commenter confirmed the point that gave me the most trouble at the time)

While I was absorbing the Sequences, I found I could understand most posts on their own, and I read many of them out of order without much difficulty. But without that extensive context I think this post would read like Hegel. If this was important to some argument I was having, and I referenced it, I wouldn't expect my opponent (assuming above-average intelligence) to follow it well enough to distinguish it from complicated but meaningless drivel. You might consider that a problem with the writing if not the argument.

Evidence search: is there anyone here who hasn't read Metaethics but still understood Eliezer's point as Eliezer understands it?

Comment author: MaoShan 13 December 2012 03:12:03AM 3 points [-]

I had almost exactly the same feeling as I was reading it. My thought was, "I'm sure glad I'm fluent in LessWrongese, otherwise I wouldn't have a damn clue what was going on." It would be like an exoteric Christian trying to read Valentinus. It's a great post, I'm glad we have it here, I am just agreeing that the terminology has a lot of Sequences and Main prerequisites.

Comment author: Bruno_Coelho 15 December 2012 01:07:08PM 1 point [-]

That's something: posts presuppose too much. Words are hidden inferences, but most newbies don't know where to begin or whether it's worth a try. For example, this sequence uses causality as a topic for understanding the universe, but people need to know a lot before they can eat the cake (probability, mathematical logic, some Pearl, and the Sequences).

Comment author: johnswentworth 11 December 2012 07:33:41AM 4 points [-]

I still feel confused. I definitely see that, when we talk about fairness, our intended meaning is logical in nature. So, if I claim that it is fair for each person to get an equal share of pie, I'm trying to talk about some set of axioms and facts derived from them. Trying.

The problem is, I'm not convinced that the underlying cognitive algorithms are stable enough for those axioms to be useful. Imagine, for example, a two-year-old with the usual attention span. What they consider "good" might vary quite quickly. What I consider "just" probably depends on how recently I ate. Even beyond such simple time dependence, what I consider "just" will definitely depend on context, framing, and how you measure my opinion (just ask a question? Show me a short film? Simulate the experience and see if I intervene?). Part of why friendly AI is so hard is that humans aren't just complicated, we're not even consistent. How, then, can we axiomatize a real human's idea of "justice" in a useful way?

Comment author: Vaniver 10 December 2012 06:44:26AM *  16 points [-]

I read this post with a growing sense of unease. The pie example appears to treat "fair" as a 1-place word, but I don't see any reason to suppose it would be. (I note my disquiet that we are both linking to that article; and my worry about how confused this post seems to me.)

The standard atheist reply is tremendously unsatisfying; it appeals to intuition and assumes what it's trying to prove!

My resolution of Euthyphro is "the moral is the practical." A predictable consequence of evolution is that people have moral intuitions, that those intuitions reflect their ancestral environment, and that those intuitions can be variable. Where would I find mercy, justice, or duty? Cognitive algorithms and concepts inside minds.

This article reads like you're trying to move your stone tablet from your head into the world of logic, where it can be as universal as the concept of primes. It's not clear to me why you're embarking on that particular project.

The example of elegance seems like it points the other way. If your sense of elegance is admittedly subjective, why are we supposing a Platonic form of elegance out in the world of logic? Isn't this basically the error where one takes a cognitive algorithm that recognizes whether or not something is a horse and turns it into a Platonic form of horseness floating in the world of logic?

It looks to me like you're trying to say "because classification algorithms can be implemented in reality, there can be real ensembles that embody logical facts, but changing the classification algorithms doesn't change those logical facts," which seems true but I don't see what work you expect it to do.

There's also the statement "when you change the algorithms that lead to outputs, you change the internal sensation of those outputs." That has not been my experience, and I don't see a reason why that would be the case. In particular, when dreaming it seems like many algorithms have their outputs fixed at certain values: my 'is this exciting?' algorithm may return 'exciting!' during the dream but 'boring!' when considering the dream whilst awake, but the sensation that results from the output of the algorithm seems indistinguishable; that is, being excited in a dream feels the same to me as being excited while awake. (Of course, it could be that whichever part of me is able to differentiate between sensations is also malfunctioning while dreaming!)

I could write out an exact description of your visual cortex's spiking code for 'blue' on paper, and it wouldn't actually look blue to you.

If you show me the pattern of neurons firing that happens when my bladder is full, then my bladder won't feel full. If you put an electrode in my head (or use induction, or whatever) and replicate that pattern of neurons firing, then my bladder will feel full, because the feeling of fullness is the output of those neurons firing in that pattern.

In the same sense, when you try to do what's right, you're motivated by things like (to yet again quote Frankena's list of terminal values):

You sure it's not just executing an adaptation? Why?

Comment author: RobbBB 10 December 2012 11:42:40PM *  9 points [-]

The pie example appears to treat "fair" as a 1-place word

'Beautiful' needs 2 places because our concept of beauty admits of perceptual variation. 'Fairness' does not grammatically need an 'according to whom?' argument place, because our concept of fairness is not observer-relative. You could introduce a function that takes in a person X who associates a definition with 'fairness,' takes in a situation Y, and asks whether X would call Y 'fair.' But this would be a function for 'What does the spoken syllable FAIR denote in a linguistic community?', not a function for 'What is fair?' If we applied this demand generally, 'beautiful' would become 3-place ('what objects X would some agent Y say some agent Z finds 'beautiful'?'), as would logical terms like 'plus' ('how would some agent X perform the operation X calls "addition" on values Y and Z?'), and indeed all linguistic acts.
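To make the arity point concrete, here is a minimal sketch in Python; the predicate bodies are hypothetical stand-ins, not anyone's actual analysis of the concepts:

    # Toy definitions: 'fair' as a 1-place predicate over situations,
    # 'beautiful' as a 2-place predicate that also takes an observer.

    def fair(split):
        """One argument place: a proposed division of the pie."""
        shares = list(split.values())
        return all(abs(s - shares[0]) < 1e-9 for s in shares)

    def beautiful(observer, obj):
        """Two argument places: the judge, and the thing judged."""
        return obj in observer["things_found_beautiful"]

    # The further function described above -- 'what does the spoken
    # syllable FAIR denote for speaker X?' -- takes the speaker as an
    # extra argument, but it is a question about language use, not
    # about fairness itself.
    def denotation_of_FAIR(speaker, situation):
        return speaker["concept_labeled_fair"](situation)

The extra argument place only appears when we switch to the linguistic question, which is the distinction being drawn here.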

intuitions reflect their ancestral environment, and [...] those intuitions can be variable.

Yes, but a given intuition cannot vary limitlessly, because there are limits to what we would consider to fall under the same idea of 'fairness.' Different people may use the spoken syllables FAIR, PLUS, or BEAUTIFUL differently, but past a certain point we rightly intuit that the intension of the words, and not just their extension, has radically changed. Thus even if 'fairness' is disjunctive across several equally good concepts of fairness, there are semantic rules for what gets to be in the club. Plausibly, 'fairness is whatever makes RobbBB happiest' is not a semantic candidate for what English-speakers are logically pinpointing as 'fairness.'

This article reads like you're trying to move your stone tablet from your head into the world of logic, where it can be as universal as the concept of primes.

You hear 'Oh no, he's making morality just as objective as number theory!' whereas I hear 'Oh good, he's making morality just as subjective as number theory.' If we can logically pinpoint 'fairness,' then fairness can be rigorously and objectively discussed even if some species find the concept loathsome; just as if we can logically pinpoint 'prime number,' we can rigorously and objectively discuss the primes even with a species S who finds it unnatural to group 2 with the other primes, and a second species S* who finds it unnatural to exclude 1 from their version of the primes. Our choice of whether to consider 2 prime, like our choice of which semantic value to assign to 'fair,' is both arbitrary and unimpeachably objective.

Or do you think that number theory is literally writ into the fabric of reality somewhere, that Plato's Heaven is actually out there and that we therefore have to be very careful about which logical constructs we allow into the club? This reluctance to let fairness into an elite Abstraction Club, even if some moral codes are just as definable in logical terms as is number theory, reminds me of Plato's neurotic reluctance (in the Parmenides) to allow for the possibility that there might be Forms "of hair, mud, dirt, or anything else which is vile and paltry." Constructible is constructible; there is not a privileged set of Real Constructs distinct from the Mere Fictions, and the truths about Sherlock Holmes, if defined carefully enough, get the same epistemic and metaphysical status as the truths about Graham's Number.

If your sense of elegance is admittedly subjective, why are we supposing a Platonic form of elegance out in the world of logic?

You're confusing epistemic subjectivity with ontological subjectivity. Terms that are defined via or refer to mind- or brain-states may nevertheless be defined with so much rigor that they admit no indeterminacy, i.e., an algorithm could take in the rules for certain sentences about subjectivity and output exactly which cases render those sentences true, and which render them false.

Isn't this basically the error where one takes a cognitive algorithm that recognizes whether or not something is a horse and turns it into a Platonic form of horseness floating in the world of logic?

What makes you think that the 'world of logic' is Platonic in the first place? If logic is a matter of mental construction, not a matter of us looking into our metaphysical crystal balls and glimpsing an otherworldly domain of Magical Nonspatiotemporal Thingies, then we cease to be tempted by Forms of Horsehood for the same reason we cease to be tempted by Forms of Integerhood.

Comment author: Vaniver 11 December 2012 12:51:53AM 1 point [-]

'Beautiful' needs 2 places because our concept of beauty admits of perceptual variation. 'Fairness' does not grammatically need an 'according to whom?' argument place, because our concept of fairness is not observer-relative.

What? It seems to me that fairness and beauty are equally subjective, and the intuition that says "but my sense of fairness is objectively correct!" is the same intuition that says "but my sense of beauty is objectively correct!"

If we can logically pinpoint 'fairness,' then fairness can be rigorously and objectively discussed even if some species find the concept loathsome

I agree that we can logically pinpoint any specific concept; to use the pie example, Yancy uses the concept of "splitting windfalls equally by weight" and Zaire uses the concept of "splitting windfalls equally by desire." What I disagree with is the proposition that there is this well-defined and objective concept of "fair" that, in the given situation, points to "splitting windfalls equally by weight."

One could put forward the axiom that "splitting windfalls equally by weight is fair", just like one can put forward the axiom that "zero is not the successor of any number," but we are no closer to that axiom having any decision-making weight; it is just a model, and for it to be used it needs to be a useful and appropriate model.

Comment author: RobbBB 11 December 2012 01:49:41AM *  1 point [-]

What? It seems to me that fairness and beauty are equally subjective

I don't know what you mean by 'subjective.' But perhaps there is a (completely non-denoting) concept of Objective Beauty in addition to the Subjective Beauty ('in the eye of the beholder') I'm discussing, and we're talking past each other about the two. So let's pick a simpler example.

'Delicious' is clearly two-place, and ordinary English-language speakers routinely consider it two-place; we sometimes elide the 'delicious for whom?' by assuming 'for ordinary humans,' but it would be controversial to claim that speaking of deliciousness automatically commits you to a context-independent property of Intrinsic Objective Tastiness.

Now, it sounded like you were claiming that fairness is subjective in much the same way as deliciousness; no claim about fairness is saturated unless it includes an argument place for the evaluator. But this seems to be false simply given how people conceive of 'fair' and 'delicious'. People don't think there's an implicit 'fairness-relative-to-a-judge-thereof' when we speak of 'fairness,' or at least they don't think it in the transparent way they think of 'deliciousness' as always being 'deliciousness-relative-to-a-taster.' ('Beauty,' perhaps, is an ambiguous case straddling these two categories.) So is there some different sense in which fairness is 'subjective'? What is this other sense?

What I disagree with is the proposition that there is this well-defined and objective concept of "fair" that, in the given situation, points to "splitting windfalls equally by weight."

Are you claiming that Eliezer lacks any well-defined concept he's calling 'fairness'? Or are you claiming that most English-speakers don't have Eliezer's well-defined fairness in mind when they themselves use the word 'fair,' thus making Eliezer guilty of equivocation?

People argue about how best to define a term all the time, but we don't generally conclude from this that any reasoning one proceeds to carry out once one has stipulated a definition for the controversial term is for that reason alone 'subjective.' There have been a number of controversies in the history of mathematics — places where people's intuitions simply could not be reconciled by any substantive argument or proof — and mathematicians responded by stipulating precisely what they meant by their terms, then continuing on from there. Are you suggesting that this same method stops being useful or respectable if we switch domains from reasoning about this thing we call 'quantity' to reasoning about this thing we call 'fairness'?

we are no closer to that axiom having any decision-making weight

What would it mean for an axiom to have "decision-making weight"? And do you think Eliezer, or any other intellectually serious moral realist, is honestly trying to attain this "decision-making weight" property?

Comment author: Vaniver 11 December 2012 04:41:04AM 5 points [-]

I don't know what you mean by 'subjective.'

That the judgments of "fair" or "beautiful" don't come from a universal source, but from a particular entity. I have copious evidence that what I consider "beautiful" is different from what some other people consider "beautiful;" I have copious evidence that what I consider "fair" is different from what some other people consider "fair."

'Delicious' is clearly two-place, and ordinary English-language speakers routinely consider it two-place;

It is clear to me that delicious is two-place, but it seems to me that people have to learn that it is two-place, and evidence that it is two-place is often surprising and potentially disgusting. Someone who has not learned through proverbs and experience that "beauty is in the eye of the beholder" and "there's no accounting for taste" would expect that everyone thinks the same things are beautiful and tasty.

But this seems to be false simply given how people conceive of 'fair' and 'delicious'.

There are several asymmetries between them. Deliciousness generally affects one person, and knowing that it varies allows specialization and gains from trade (my apple for your banana!). Fairness generally requires at least two people to be involved, and acknowledging that your concept of fairness does not bind the other person puts you at a disadvantage. Compare Xannon's compromise to Yancy's hardlining.

People thinking that something is objective is not evidence that it is actually objective. Indeed, we have plenty of counterevidence in all the times that people argue over what is fair.

Are you claiming that Eliezer lacks any well-defined concept he's calling 'fairness'?

No? I'm arguing that Eliezer::Fair may be well-defined, but that he has put forward no reason that will convince Zaire that Zaire::Fair should become Eliezer::Fair, just like he has put forward no reason why Zaire::Favorite Color should become Eliezer::Favorite Color.

Are you suggesting that this same method stops being useful or respectable if we switch domains from reasoning about this thing we call 'quantity' to reasoning about this thing we call 'fairness'?

There are lots of possible geometries out there, and mathematicians can productively discuss any set of non-contradictory axioms. But only a narrow subset of those geometries correspond well with the universe that we actually live in; physicists put serious effort into understanding those, and the rest are curiosities.

(I think that also answers your last two questions, but if it doesn't I'll try to elaborate.)

Comment author: Peterdjones 11 December 2012 11:20:10AM 3 points [-]

I have copious evidence that what I consider "beautiful" is different from what some other people consider "beautiful;" I have copious evidence that what I consider "fair" is different from what some other people consider "fair."

But there is little upshot to people having different notions of beauty, since people can arrange their own environments to suit their own aesthetics. However, resources have to be apportioned one way or another. So we need, and have, discussions about how to do things fairly. (Public architecture is a bit of an exception to what I said about beauty, but lo and behold, we have debates about that too.)

Comment author: RobbBB 11 December 2012 08:41:56AM *  2 points [-]

the judgments of "fair" or "beautiful" don't come from a universal source, but from a particular entity.

I don't understand what this means. To my knowledge, the only things that exist are particulars.

I have copious evidence that what I consider "beautiful" is different from what some other people consider "beautiful;" I have copious evidence that what I consider "fair" is different from what some other people consider "fair."

I have copious evidence that others disagree with me about ¬¬P being equivalent to P. And I have copious evidence that others disagree with me about the Earth's being more than 6,000 years old. Does this imply that my belief in Double Negation Elimination and in the Earth's antiquity is 'subjective'? If not, then what extra premises are you suppressing?

It is clear to me that delicious is two-place, but it seems to me that people have to learn that it is two-place

Well, sure. But, barring innate knowledge, people have to learn everything at some point. 3-year-olds lack a theory of mind; and those with a new theory of mind may not yet understand that 'beautiful' and 'delicious' are observer-relative. But that on its own gives us no way to conclude that 'fairness' is observer-relative. After all, not everything that we start off thinking is 'objective' later turns out to be 'subjective.'

And even if 'fairness' were observer-relative, there have to be constraints on what can qualify as 'fairness.' Fairness is not equivalent to 'whatever anyone decides to use the word "fairness" to mean,' as Eliezer rightly pointed out. Even relativists don't tend to think that 'purple toaster' and 'equitable distribution of resources' are equally legitimate and plausible semantic candidates for the word 'fairness.'

Deliciousness generally affects one person

That's not true. Deliciousness, like fairness, affects everyone. For instance, my roommate is affected by which foods I find delicious; it changes where she ends up going to eat.

Perhaps you meant something else. You'll have to be much more precise. The entire game when it comes to as tricky a dichotomy as 'objective/subjective' is just: Be precise. The dichotomy will reveal its secrets and deceptions only if we taboo our way into its heart.

and knowing that it varies allows specialization and gains from trade (my apple for your banana!).

What's fair varies from person to person too, because different people, for instance, put different amounts of work into their activities. And knowing about what's fair can certainly help in trade!

acknowledging that your concept of fairness does not bind the other person puts you at a disadvantage

Does not "bind" the other person? Fairness is not a physical object; it cannot bind people's limbs. If you mean something else by 'bind,' please be more explicit.

Eliezer::Fair may be well-defined, but that he has put forward no reason that will convince Zaire that Zaire::Fair should become Eliezer::Fair

What would it mean for Zaire::Fair to become Eliezer::Fair? Are you saying that Eliezer's fairness is 'subjective' because he can't give a deductive argument from the empty set of assumptions proving that Zaire should redefine his word 'fair' to mean what Eliezer means by 'fair'? Or are you saying that Eliezer's fairness is 'subjective' because he can't give a deductive argument from the empty set of assumptions proving that Zaire should pretend that Zaire's semantic value for the word 'fair' is the same as Eliezer's semantic value for the word 'fair'? Or what? By any of these standards, there are no objective truths; all truths rely on fixing a semantic value for your linguistic atoms, and no argument can be given for any particular fixation.

There are lots of possible geometries out there, and mathematicians can productively discuss any set of non-contradictory axioms.

They can also productively discuss sets of contradictory axioms, especially if their logic be paraconsistent.

But only a narrow subset of those geometries correspond well with the universe that we actually live in; physicists put serious effort into understanding those, and the rest are curiosities.

So, since we don't live in Euclidean space, Euclidean geometry is merely a 'curiosity.' Is it, then, subjective? If not, what ingredient, what elemental objectivium, distinguishes Euclidean geometry from Yudkowskian fairness?

Comment author: nshepperd 11 December 2012 01:47:51AM 1 point [-]

What I disagree with is the proposition that there is this well-defined and objective concept of "fair" that, in the given situation, points to "splitting windfalls equally by weight."

"Fair", quoted, is a word. You don't think it's plausible that in English "fair" could refer to splitting windfalls equally by weight? (Or rather to something a bit more complicated that comes out to splitting windfalls equally by weight in the situation of the three travellers and the pie.)

Comment author: Vaniver 11 December 2012 05:12:41AM 1 point [-]

I agree that someone could mean "splitting windfalls equally by weight" when they say "fair." I further submit that words can be ambiguous, and someone else could mean "splitting windfalls equally by desire" when they say "fair." In such a case, where the word seems to be adding more heat than light, I would scrap it and go with the more precise phrases.

Comment author: army1987 12 December 2012 01:27:41PM *  1 point [-]

'Fairness' does not grammatically need an 'according to whom?' argument place

Grammatically, neither does “beautiful”. “Alice is beautiful” is a perfectly grammatical English sentence.

Comment author: Peterdjones 10 December 2012 01:15:48PM 4 points [-]

My resolution of Euthyphro is "the moral is the practical."

How do you avoid prudent predation?

Comment author: dspeyer 10 December 2012 09:25:01PM 2 points [-]

I think the author of that piece needs to learn the concept of precommitment. Precommitting to one-box is not at all the same as believing that one-boxing is the dominant strategy in the general newcomb problem. Likewise, precommitting not to engage in prudent predation is not a matter of holding a counterfactual belief, but of taking a positive-expected-utility action.

Comment author: nshepperd 10 December 2012 03:28:36PM *  3 points [-]

You sure it's not just executing an adaptation? Why?

It is exactly executing an adaptation. No "just" about it though. An AI programmed to maximise paperclips is motivated by increasing the number of paperclips. It's executing its program.

Comment author: Vaniver 10 December 2012 09:23:24PM *  1 point [-]

I had this post in mind. I see no reason to link behavior that 'seems moral' to the internal sensation of motivation by those terminal values, and if we're not talking about introspection about decision-making, then why are we using the word motivation?

This post seems to be discussing a particular brand of moral reasoning - basically, deliberative utilitarian judgments - which seems like a rather incomplete picture of human morality as a whole, and it seems like it's just sweeping under the rug the problem of where values come from in the first place. I should make clear that he has to describe what values are before he can describe where they come from, but if it's an incomplete description of values, that can cause problems down the line.

Comment author: PaulWright 09 January 2013 02:15:50PM *  3 points [-]

Note that there's some discussion on just what Eliezer means by "logic all the way down" over on Rationally Speaking: http://rationallyspeaking.blogspot.co.uk/2013/01/lesswrong-on-morality-and-logic.html . Seeing as much of this is me and Angra Maiynu arguing that Massimo Pigliucci hasn't understood what Eliezer means, it might be useful for Eliezer to confirm what he does mean.

Comment author: Pentashagon 17 December 2012 06:47:42PM 3 points [-]

If we reprogrammed you to count paperclips instead, it wouldn't feel like different things having the same kind of motivation behind it. It wouldn't feel like doing-what's-right for a different guess about what's right. It would feel like doing-what-leads-to-paperclips.

What if we also changed the subject into a sentient paperclip? Any "standard" paperclip maximizer has to deal with the annoying fact that it is tying up useful matter in a non-paperclip form that it really wants to turn into paperclips. Humans don't usually struggle with the desire to replace the self with something completely different. It's inefficient. An AI primarily designed to benefit humanity (friendly or not) is going to notice that inefficiency in its goals as well. It will feel less moral from the inside than we do. I'm not sure what to do about this, or if it matters.

Comment author: [deleted] 11 December 2012 04:07:06AM *  5 points [-]

Great post! I agree with your analysis of moral semantics.

However, the question of moral ontology remains... do objective moral values exist? Is there anything I (or anyone) should do, independent from what I desire? With such a clear explanation of moral semantics at hand, I think the answer is an obvious and resounding no. Why would we even think that this is the case? One conclusion we can draw from this post is that telling an unfriendly AI that what it's doing is "wrong" won't affect its behavior. Because that which is "wrong" might be exactly that which is "moreclippy"! I feel that Eliezer probably agrees with me here, since I gained a lot of insight into the issue from reading Three Worlds Collide.

Asking why we value that which is "right" is a scientific question, with a scientific answer. Our values are what they are, now, though, so, minus the semantics, doesn't morality just reduce to decision theory?

Comment author: Peterdjones 11 December 2012 11:25:52AM -3 points [-]

do objective moral values exist? Is there anything I (or anyone) should do, independent from what I desire? With such a clear explanation of moral semantics at hand, I think the answer is an obvious and resounding no

You just jumped to the conclusion that there is no epistemically objective morality -- nothing you objectively-should do -- because there is no metaphysically objective morality, no Form of the Good. That is a fallacy (although a common one on LW). EY has in fact explained how morality can be epistemically objective: it can be based on logic.

Comment author: [deleted] 11 December 2012 03:42:59PM 0 points [-]

I didn't say that. Of course there is something you should do, given a set of goals...hence decision theory.

Comment author: Peterdjones 11 December 2012 03:46:53PM 1 point [-]

There is something you self-centeredly should do, but that doesn't mean there is nothing you morally-should do.

Comment author: [deleted] 11 December 2012 03:59:54PM 3 points [-]

According to Eliezer's definition of "should" in this post, I "should" do things which lead to "life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience..." But unless I already cared about those things, I don't see why I would do what I "should" do, so as a universal prescription for action, this definition of "morality" fails.

Comment author: nshepperd 12 December 2012 07:11:06AM *  6 points [-]

Correct. Agents who don't care about morality generally can't be convinced to do what they morally should do.

Comment author: Peterdjones 11 December 2012 04:14:51PM 0 points [-]

He also said:

"And I mention this in hopes that I can show that it is not moral anti-realism to say that moral statements take their truth-value from logical entities.". If you do care about reason, you can therefore be reasoned into morality.

In any case, it is no argument against moral objectivism/realism that some people don't "get" it. Maths sets up universal truths, which can be recognised by those capable of recognising them. That some don't recognise them doesn't stop them being objective.

Comment author: [deleted] 11 December 2012 05:12:29PM 2 points [-]

You do not reason with evil. You condemn it.

I subscribe to desirism. So I'm not a strict anti-realist.

Comment author: Peterdjones 11 December 2012 05:45:30PM *  0 points [-]

"Anyone can be reasoned into doing that which would fulfill the most and strongest of current desires. However, what fulfills current desires is not necessarily the same thing as what is right."

You seem to be overlooking the desire to be (seen to be) reasonable in itself.

"Anyone can be reasoned into doing what is right with enough argumentation”

...is probably false. But if reasoning and condemnation both modify behaviour, however imperfectly, why not use both?

I subscribe to desirism

How does that differ from virtue ethics?

Comment author: Peterdjones 10 December 2012 10:07:30PM *  5 points [-]

I don't think there is a clear route from "we can figure out morality ourselves" to "we can stop telling lies to children". The problem is that once you know morality is in a sense man-made, it becomes tempting to remake it self-servingly. I think we tell ourselves stories that fundamental morality comes from God Or Nature to restrain ourselves, and partly forget its man-made nature. Men are not created equal, but if we believe they are, we behave better. "Created equal" is a value masquerading as a fact.

Comment author: Viliam_Bur 16 December 2012 10:53:19AM 1 point [-]

I think the real temptation is in reusing the old words for new concepts, either in confusion, or trying to shift the associations from the old concept to the new concept.

Once you know that natural numbers are in a sense man-made, it could become tempting to start using the phrase "natural numbers" to include fractions. Why not? If there is no God telling us what the "natural numbers" are, why should your definition that excludes fractions be better than my definition that includes them?

Your only objection in this case would be -- Man, you are obviously talking about something different, so it would be less confusing and more polite if you picked some new label (such as "rational numbers") for your new concept.

Comment author: Peterdjones 16 December 2012 07:30:09PM 1 point [-]

How does that relate to morality?

Comment author: Viliam_Bur 16 December 2012 08:18:26PM 1 point [-]

I would translate this:

The problem is that once you know morality is in a sense man-made, it becomes tempting to remake it self-servingly.

as: "...it becomes tempting to use some other M instead of morality."

It expresses the same idea, without the confusion about whether morality can be redefined arbitrarily. (Yes, anything can be redefined arbitrarily. It just stops being the original thing.)

Comment author: Peterdjones 17 December 2012 04:37:49PM 1 point [-]

"some other M" will still count as morality for many purposes, because self-serving ideas ("be loyal to the Geniralissimo", "obey your husband") are transmitted thorugh the same memetic channels are genuine morality. Morality is already blurred with disgust reactions and tribal shibboleths.

Comment author: The_Duck 11 December 2012 07:01:11AM *  6 points [-]

I think your discussions of metaethics might be improved by rigorously avoiding words like "fair," "right," "better," "moral," "good," etc. I like the idea that "fair" points to a logical algorithm whose properties we can discuss objectively, but when you insist on using the word "fair," and no other word, as your pointer to this algorithm, people inevitably get confused. It seems like you are insisting that words have objective meanings, or that your morality is universally compelling, or something. You can and do explicitly deny these, but when you continue to rely exclusively on the word "fair" as if there is only one concept that that word can possibly point to, it's not clear what your alternative is.

Whereas if you use different symbols as pointers to your algorithms, the message (as I understand it) becomes much clearer. Translate something like:

Fair is dividing up food equally. Now, is dividing up the pie equally objectively fair? Yes: someone who wants to divide up the pie differently is talking about something other than fairness. So the assertion "dividing the pie equally is fair" is objectively true.

into

Define XYZZY as the algorithm "divide up food equally." Now, is dividing up the pie equally objectively XYZZY? Of course it is: that's a direct logical consequence of how I just defined XYZZY. Someone who wants to divide the pie differently is using an algorithm that is not XYZZY. The assertion "dividing up the pie equally is XYZZY" is as objective as the assertion "S0+S0=SS0"--someone who rejects the latter is not doing Peano arithmetic. By the way, when I personally say the word "fair," I mean "XYZZY."

I suspect that wording things like this has less potential to trip people up: it's much easier to reason logically about XYZZY than about fairness, even if both words are supposed to be pointers to the same concept.
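A toy sketch of this suggestion, assuming Python (the names and numbers are purely illustrative):

    # Stipulate XYZZY as the algorithm "divide up food equally", then
    # note that 'fair', as used here, is just another label bound to
    # the same object.

    def xyzzy(amount, people):
        """Divide a windfall into equal shares, one per person."""
        share = amount / len(people)
        return {person: share for person in people}

    fair = xyzzy  # "when I personally say 'fair', I mean XYZZY"

    split = fair(1.0, ["Zaire", "Yancy", "Xannon"])
    assert all(abs(v - 1/3) < 1e-9 for v in split.values())

    # Whether this split is XYZZY is a logical consequence of the
    # stipulation; whether XYZZY deserves the English word "fair" is
    # the separate question people actually argue about.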

Comment author: Jay_Schweikert 11 December 2012 05:46:17PM 9 points [-]

I don't think this works, because "fairness" is not defined as "divide up food equally" (or even "divide up resources equally"). It is the algorithm that, among other things, leads to dividing up the pie equally in the circumstances described in the original post -- i.e., "three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory." But once you start tampering with these conditions -- suppose that one of them owned the land, or one of them baked the pie, or two were well-fed and one was on the brink of starvation, etc. -- it would at least be controversial to say "duh, divide equally, that's just what 'fairness' means." And the fact of that controversy suggests most of us are using "fairness" to point to an algorithm more complicated than "divide up resources equally."

More generally, fairness -- like morality itself -- is complicated. There are basic shared intuitions, but there's no easy formula for popping out answers to "fair: yes or no?" in intricate scenarios. So there's actually quite a bit of value in using words like "fair," "right," "better," "moral," "good," etc., instead of more concrete, less controversial concepts like "equal division" -- if you can show that even those broad, complicated concepts can be derived from physics+logic, then it's that much more of an accomplishment, and that much more valuable for long-term rationalist/reductionist/transhumanist/friendly-ai-ist/whatever goals.

At least, that's how I understand this component of Eliezer's project, but I welcome correction if he or others think I'm misstating something.

Comment author: The_Duck 11 December 2012 10:30:56PM 2 points [-]

I don't think this works, because "fairness" is not defined as "divide up food equally" (or even "divide up resources equally"). It is the algorithm that, among other things, leads to dividing up the pie equally in the circumstances described in the original post

Yes; I meant for the phrase "divide up food equally" to be shorthand for something more correct but less compact, like "a complicated algorithm whose rough outline includes parts like, '...When a group of people are dividing up resources, divide them according to the following weighted combination of need, ownership, equality, who discovered the resources first, ...'"
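If one wanted to caricature that rough outline in code, it might look something like the sketch below; the factor names and weights are made-up assumptions, not a claim about what fairness actually decomposes into:

    # Illustrative only: a fairness score as a weighted combination of
    # the considerations listed above, with invented weights.
    WEIGHTS = {"need": 0.4, "ownership": 0.3, "equality": 0.2, "discovery": 0.1}

    def fairness_score(factor_scores):
        """factor_scores: how well a proposed split respects each
        consideration, each in [0, 1]."""
        return sum(WEIGHTS[name] * factor_scores[name] for name in WEIGHTS)

    # In the three-travellers case, need, ownership, and discovery are
    # equal across proposals, so the equality term decides the answer.
    even_split = {"need": 0.5, "ownership": 0.5, "equality": 1.0, "discovery": 0.5}
    zaire_takes_all = {"need": 0.5, "ownership": 0.5, "equality": 0.0, "discovery": 0.5}
    assert fairness_score(even_split) > fairness_score(zaire_takes_all)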

Comment author: [deleted] 11 December 2012 05:59:41PM 2 points [-]

I think your discussions of metaethics might be improved by rigorously avoiding words like "fair," "right," "better," "moral," "good," etc.

See lukeprog's Pluralistic Moral Reductionism.

Comment author: kilobug 10 December 2012 12:50:17PM 6 points [-]

I myself would say unhesitatingly that a third of the pie each, is fair.

That's the default with no additional data, but I would hesitate, because to me how much each person needs the pie is also important in defining "fairness". If one of the three is starving while the other two are well-fed, it would be fair to give more to the one starving.

It may be just nitpicking, but since you took care to ensure there is no difference in how the three characters are involved in spotting the pie, yet didn't mention that they have the same need of it, this may point to a deeper difference between conceptions of "fairness" (should we give them two different names?).

Comment author: RichardKennaway 11 December 2012 12:50:51PM 2 points [-]

Having settled the meta-ethics, will you have anything to say about the ethics? Concrete theorems, with proofs, about how we should live?

Comment author: PeterisP 19 December 2012 11:52:04AM 2 points [-]

I'm afraid that any nontrivial metaethics cannot result in concrete universal ethics - that the context would still be individual and the resulting "how RichardKennaway should live" ethics wouldn't exactly equal "how PeterisP should live".

The difference would hopefully be much smaller than the difference between "how RichardKennaway should live RichardKennaway-justly" and "How Clippy should maximize paperclips", but still.

Comment author: RichardKennaway 19 December 2012 12:20:03PM 1 point [-]

Ok, I'll settle for concrete theorems, with proofs, about how some particular individual should live. Or ways of discovering facts about how they should live.

And presumably the concept of Coherent Extrapolated Volition requires some way of combining such facts about multiple individuals.

Comment author: ArisKatsaris 11 December 2012 01:43:10PM *  1 point [-]

To derive an ethic from a metaethic, I think you need to plug in a parameter that describes the entire context of human existence. Metaethic(Context) -> Ethic

So I don't know what you expect such a "theorem" and such "proofs" to look like, without containing several volumes describing the human context in symbolic form.
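One way to picture the Metaethic(Context) -> Ethic schema is as a higher-order function; the type names and the toy scoring rule below are my own assumptions, not anything from the post:

    # Sketch: a metaethic takes a description of the human context and
    # returns a concrete evaluation function over actions.
    from typing import Any, Callable, Dict

    Context = Dict[str, Any]          # stands in for "several volumes" of human context
    Ethic = Callable[[str], float]    # maps an action description to a goodness score

    def example_metaethic(context: Context) -> Ethic:
        """Hypothetical: score actions by how many of the context's
        terminal values they mention."""
        values = context.get("terminal_values", [])
        def ethic(action: str) -> float:
            return sum(1.0 for v in values if v in action)
        return ethic

    # The hard part is the Context argument, which we cannot currently
    # write down as theorems and proofs.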

Comment author: RichardKennaway 11 December 2012 01:56:19PM 1 point [-]

So I don't know what you expect such a "theorem" and such "proofs" to look like, without containing several volumes describing the human context in symbolic form.

I have no such expectation either. But I do expect something, for what use is meta-ethics if no ethics results, or at least, practical procedures for discovering ethics?

What do you have in mind by "a description in symbolic form of the human context"? The Cyc database? What would you do with it?

Comment author: ArisKatsaris 11 December 2012 02:22:55PM 2 points [-]

for what use is meta-ethics if no ethics results, or at least, practical procedures for discovering ethics?

We have the processing unit called "brain" which does contain our understanding of the human context and therefore can plug a context parameter into a metaethical philosophy and thus derive an ethic. But we can't currently express the functioning of the brain as theorems and proofs -- our understanding of its working is far fuzzier than that.

I expect that the use of a metaethic in AI development would similarly be so that the AI has something to plug its understanding of the human context into.

Comment author: Peterdjones 10 December 2012 09:43:53PM *  2 points [-]

Where moral judgment is concerned, it's logic all the way down. [..] And since grinding up the universe won't and shouldn't yield any miniature '>' tokens, it must be a logical ordering

The claim seems to be that moral judgement--first-order, not metaethical--is purely logical, but the justification ("grinding up the universe") only seems to go as far as showing it to be necessarily partly logical. And first-order ethics clearly has empirical elements. If human biology were such that we laid eggs and left them to fend for themselves, there would be no immorality in "child neglect".

Comment author: Klao 14 December 2012 11:26:39AM 3 points [-]

The funny thing is that the rationalist Clippy would endorse this article. (He would probably put more emphasis on clippyflurphsness than on this unclipperific notion of "justness", though. :))

Comment author: JoshuaFox 10 December 2012 12:30:36PM 2 points [-]

Is Schmidhuber's formalization of elegance the sort of thing you are seeking to do with rightness?

Comment author: shminux 10 December 2012 04:00:55PM *  2 points [-]

Scott Adams on the same subject, the morning after your post:

fairness isn't a real thing. It's just a psychological phenomenon that is easily manipulated.

[...]

To demonstrate my point that fairness is about psychology and not the objective world, I'll ask you two questions and I'd like you to give me the first answer that feels "fair" to you. Don't read the other comments until you have your answer in your head.

Here are the questions:

A retired businessman is worth one billion dollars. Thanks to his expensive lifestyle and hobbies, his money supports a number of people, such as his chauffeur, personal assistant, etc. Please answer these two questions:

  1. How many jobs does a typical retired billionaire (with one billion in assets) support just to service his lifestyle? Give me your best guess.

  2. How many jobs should a retired billionaire (with one billion in assets) create for you to feel he has done enough for society such that his taxes should not go up? Is ten jobs enough? Twenty?

Comment author: drnickbone 12 December 2012 09:33:51PM *  1 point [-]

I suppose one obvious response to this is "however much utility the billionaire can create by spending his wealth, a very much higher level of utility would be created by re-distributing his billions to lots of other people, who need it much more than he does". Money has a declining marginal utility, much like everything else.
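A back-of-the-envelope illustration of that trade-off, assuming logarithmic utility of wealth purely for the sake of the example (the dollar figures are invented):

    import math

    # Toy numbers: redistribute $900M from one billionaire to 100,000
    # people who each hold $10,000. Log utility is an assumption here,
    # not a measured fact about people.
    billionaire_before, billionaire_after = 1_000_000_000, 100_000_000
    recipients = 100_000
    per_person = 900_000_000 / recipients  # $9,000 each

    loss = math.log(billionaire_before) - math.log(billionaire_after)
    gain = recipients * (math.log(10_000 + per_person) - math.log(10_000))

    print(f"billionaire's utility loss: {loss:.2f}")      # about 2.3
    print(f"recipients' total utility gain: {gain:.0f}")  # tens of thousands

Under those made-up assumptions the aggregate gain dwarfs the loss, which is the point of the declining-marginal-utility argument; the incentive effects are the caveat in the next paragraph.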

Naturally, if you try to redistribute all wealth then no-one will have any incentive to create it in the first place, but this just creates a utilitarian trade-off on how much to tax, what to tax, and who to tax. It's still very likely that the billionaire will lose in this trade-off.

Comment author: Eugine_Nier 13 December 2012 04:59:20AM 1 point [-]

fairness isn't a real thing. It's just a psychological phenomenon that is easily manipulated.

I could replace "fairness" with "truth" in that sentence and come up with equally good examples.

Comment author: [deleted] 11 December 2012 11:58:19AM *  1 point [-]

And there are others who accept that physics and logic is everything, but who - I think mistakenly - go ahead and also accept Death's stance that this makes morality a lie, or, in lesser form, that the bright alive feeling can't make it. (Sort of like people who accept an incompatibilist theory of free will, also accept physics, and conclude with sorrow that they are indeed being controlled by physics.)

I think that's a misapplication of reductionism (the thing I think Eliezer is referring to when he says it's mistaken): people take something they've logically attached to a value, and then reduce it to something else, and it starts to feel like they can't reattach the value to whatever they thought had it in the first place.

For example: action A leads to result Y, and result Y feels like a good thing, so action A feels like a good thing to do. The person then reduces their map of action A leading to result Y so that it no longer contains the things they had associated with their feelings or values, because those things momentarily look different. Now they can no longer associate action A with the feeling/value they had attached to result Y, and it feels like action A can't be "moral" or "good" or whatever. (Like if you imagine "atoms bouncing around" instead of "giving food to starving people".)

I think this tendency is also linked, sometimes at least, to people's mistake-avoiding hesitancy - having a cautious way of doing things out of a desire to avoid mistakes. In order to avoid making the mistake of being immoral, you want to be able to logically derive moral or immoral actions, and since morality seems to reduce to nothing, it seems that this task is not possible. It's as if you want to double-check your actions objectively, and when you hit this point of failure, it feels like you can't take the actions themselves, because you're used to doing things this way. But anyway, that was just random speculation and it's probably nonsense. Also, I didn't mean to "box away" people's habits; I think it's often very useful to be cautious.

I think that reductionism, when misunderstood, can make the world look like a bucketful of nihilistic goo. Especially if it's used to devalue.

Clippy doesn't judge between self-modifications by computing justifications, but rather, computing clippyflurphs.

Clippy would encounter "ethical" dilemmas of the sort: Is it better ..err.. moreclippy to have 1 big paperclip, or 3 small paperclips? A line of many clips? Or a big clip made of smaller clips? Is it moreclippy to have 10 clips today and 20 clips tomorrow, or, 0 clips today and 30 clips tomorrow?

Just joking.. :)

edit: added " " to ethical

Comment author: Viliam_Bur 16 December 2012 10:57:23AM 2 points [-]

Clippy would encounter ethical dilemmas of the sort: Is it better ..err.. moreclippy to have 1 big paperclip, or 3 small paperclips? A line of many clips? Or a big clip made of smaller clips? Is it moreclippy to have 10 clips today and 20 clips tomorrow, or, 0 clips today and 30 clips tomorrow?

Clippy could have these dilemmas. But they wouldn't be ethical dilemmas. They would be clippy dilemmas.

Comment author: Nominull 10 December 2012 05:38:32AM 1 point [-]

You talk like you've solved qualia. Have you?

Comment author: CronoDAS 10 December 2012 08:10:48AM 11 points [-]

"Qualia" is something our brains do. We don't know how our brains do it, but it's pretty clear by now that our brains are indeed what does it.

Comment author: Peterdjones 10 December 2012 12:38:55PM 6 points [-]

That's about 10% of a solution. The "how" is enough to keep most contemporary dualism afloat.

Comment author: RobbBB 11 December 2012 01:23:49AM 2 points [-]

We have prima facie reason to accept both of these claims:

  1. A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.
  2. Which specific qualia I'm experiencing is functionally/causally underdetermined; i.e., there doesn't seem even in principle to be any physically exhaustive reason redness feels exactly as it does, as opposed to feeling like some alien color.

1 is physicalism; 2 is the hard problem. Giving up 1 means endorsing dualism or idealism. Giving up 2 means endorsing reductive or eliminative physicalism. All of these options are unpalatable. Reductionism without eliminating anything seems off the table, since the conceivability of zombies seems likely to be here to stay, to remain as an 'explanatory gap.' But eliminativism about qualia means completely overturning our assumption that whatever's going on when we speak of 'consciousness' involves apprehending certain facts about mind. I think this last option is the least terrible out of a set of extremely terrible options; but I don't think the eliminative answer to this problem is obvious, and I don't think people who endorse other solutions are automatically crazy or unreasonable.

That said, the problem is in some ways just academic. Very few dualists these days think that mind isn't perfectly causally correlated with matter. (They might think this correlation is an inexplicable brute fact, but fact it remains.) So none of the important work Eliezer is doing here depends on monism. Monism just simplifies matters a great deal, since it eliminates the worry that the metaphysical gap might re-introduce an epistemic gap into our model.

Comment author: Eugine_Nier 11 December 2012 01:53:39AM 1 point [-]
  1. A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.

What's your reason for believing this? The standard empiricist argument against zombies is that they don't constrain anticipated experience.

One problem with this line of thought is that we've just thrown out the very concept of "experience" which is the basis of empiricism. The other problem is that the statement is false: the question of whether I will become a zombie tomorrow does constrain my anticipated experiences; specifically, it tells me whether I should anticipate having any.

Comment author: RobbBB 11 December 2012 02:12:00AM *  2 points [-]

I'm not a positivist, and I don't argue like one. I think nearly all the arguments against the possibility of zombies are very silly, and I agree there's good prima facie evidence for dualism (though I think that in the final analysis the weight of evidence still favors physicalism). Indeed, it's a good thing I don't think zombies are impossible, since I think that we are zombies.

What's your reason for believing this?

My reason is twofold: Copernican, and Occamite.

Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts ('subjective' v. 'objective,' or 'mental' v. 'physical,' or 'point-of-view-bearing' v. 'point-of-view-lacking, 'or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?

Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description -- the impersonal, 'objective' kind, which states a fact without specifying for whom the fact is. The world didn't need to turn out to be that way, just as it didn't need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.

Neither of these considerations, of course, is conclusive. But they give us some reason to at least take seriously physicalist hypotheses, and to weight their theoretical costs and benefits against the dualists'.

One problem with this line of thought is that we've just thrown out the very concept of "experience" which is the basis of empiricism.

We've thrown out the idea of subjective experience, of pure, ineffable 'feels,' of qualia. But we retain any functionally specifiable analog of such experience. In place of qualitative red, we get zombie-red, i.e., causal/functional-red. In place of qualitative knowledge, we get zombie-knowledge.

And since most dualists already accepted the causal/functional/physical process in question (they couldn't even motivate the zombie argument if they didn't consider the physical causally adequate), there can be no parsimony argument against the physicalists' posits; the only argument will have to be a defense of the claim that there is some sort of basic, epistemically infallible acquaintance relation between the contents of experience and (themselves? a Self??...). But making such an argument, without begging the question against eliminativism, is actually quite difficult.

Comment author: Peterdjones 13 December 2012 07:25:40PM *  1 point [-]

Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts ('subjective' v. 'objective,' or 'mental' v. 'physical,' or 'point-of-view-bearing' v. 'point-of-view-lacking, 'or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?

It's not surprising that a system should have special insight into itself. If a type of system had special insight into some other, unrelated, type of system, then that would be peculiar. If every system had insights (panpsychism), that would also be peculiar. But a system, one capable of having insights, having special insights into itself is not unexpected.

Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds).

That is not obvious. If the two kinds of stuff (or rather, property) are fine-grainedly picked from some space of stuffs (or rather, properties), then that would be more unlikely than just one being picked.

OTOH, if you have just one coarse-grained kind of stuff, and there is just one other coarse-grained kind of stuff, such that the two together cover the space of stuffs, then it is a mystery why you do not have both, i.e. every possible kind of stuff. A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.

(It's all about information and probability. Adding one fine-grained kind of stuff to another means that two low probabilities get multiplied together, leading to a very low one that needs a lot of explaining. Having every logically possible kind of stuff has a high probability, because we don't need a lot of information to pinpoint the universe.)
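To make that arithmetic explicit -- a minimal sketch, assuming (hypothetically) that each fine-grained kind of stuff is an independent, uniform pick from a space of $N$ candidate kinds:

$$P(\text{one specific kind}) = \frac{1}{N}, \qquad P(\text{two specific kinds}) = \frac{1}{N}\cdot\frac{1}{N} = \frac{1}{N^2}.$$

In description-length terms, pinning down one kind costs about $\log_2 N$ bits and two kinds about $2\log_2 N$ bits, whereas "every logically possible kind" costs essentially nothing beyond specifying the space itself -- which is the sense in which it is the cheap, high-probability option.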

So, if you think of Mind as some very specific thing, the Occamite objection goes through. However, modern dualists are happy that most aspects of consciousness have physical explanations. Chalmers-style dualism is about explaining qualia, phenomenal qualities. The quantitative properties of physicalism (Chalmers calls them structural-functional) and intrinsically qualitative properties form a dyad that covers property-space in the same way that the matter-antimatter dyad covers stuff-space. In this way, modern dualism can avoid the Copernican Objection.

It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description -- the impersonal, 'objective' kind, which states a fact without specifying for whom the fact is.

(Here comes the shift from properties to aspects).

Although it does specify that the fact is outside me. If physical and mental properties are both intrinsic to the world, then the physical properties seem to be doing most of the work, and the mental ones seem redundant. However, if objectivity is seen as a perspective, i.e. an external perspective, it is no longer an empirical fact. It is then a tautology that the external world will seem, from the outside, to be objective, because objectivity just is the view from outside. And subjectivity, likewise, is the view from inside, and not any extra stuff, just another way of looking at the same stuff. There are, in any case, a set of relations between a thing-and-itself, and another set between a thing-and-other-things. Nothing novel is being introduced by noting the existence of inner and outer aspects. The novel content of the Dual Aspect solution lies in identifying the Objective Perspective with quantities (broadly including structures and functions) and the Subjective Perspective with qualities, so that Subjective Qualities, qualia, are just how neuronal processing seems from the inside. This point needs justification, which I believe I have, but will not mention here.

As far as physicalism is concerned: physicalism has many meanings. Dual aspect theory is incompatible with the idea that the world is intrinsically objective and physical, since these are not intrinsic characteristics, according to DAT. DAT is often and rightly associated with neutral monism, the idea that the world is in itself neither mental nor physical, neither objective nor subjective. However, this in fact changes little for most physicalists: it does not suggest that there are any ghostly substances or undetectable properties. Nothing changes methodologically; naturalism, interpreted as the investigation of the world from the objective perspective, can continue. The Strong Physicalist claim that a complete physical description of the world is a complete description tout court becomes problematic. Although such a description is a description of everything, it nonetheless leaves out the subjective perspectives embedded in it, which cannot be recovered, just as Mary the superscientist cannot recover the subjective sensation of Red from the information she has. I believe that a correct understanding of the nature of information shows that "complete information" is a logically incoherent notion in any case, so that DAT does not entail the loss of anything that was ever available in that respect. Furthermore, the absence of complete information has little practical upshot, because of the unfeasibility of constructing such a complete description in the first place. All in all, DAT means physicalism is technically false in a way that changes little in practice. The flipside of DAT is Neutral Monism. NM is an inherently attractive metaphysics, because it means that the universe has no overall characteristic left dangling in need of an explanation -- no "why physical, rather than mental?".

As far as causality is concerned, the fact that a system's physical or objective aspects are enough to predict its behaviour does not mean that its subjective aspects are an unnecessary multiplication of entities, since they are only a different perspective on the same reality. Causal powers are vested in the neutral reality of which the subjective and the objective are just aspects. The mental is neither causal in itself nor causally idle in itself; it is rather a perspective on what is causally empowered. There are no grounds for saying that either set of aspects is exclusively responsible for the causal behaviour of the system, since each is only a perspective on the system.

I have avoided the Copernican problem, special pleading for human consciousness, by pinning mentality, and particularly subjectivity, to a system's internal and self-reflexive relations. The counterpart to excessive anthropocentrism is insufficient anthropocentrism, i.e. free-wheeling panpsychism, or the Thinking Rock problem. I believe I have a way of showing that it is logically inevitable that simple entities cannot have subjective states that are significantly different from their objective descriptions.

Comment author: Eugine_Nier 11 December 2012 02:47:02AM 1 point [-]

Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description -- the impersonal, 'objective' kind, which states a fact without specifying for whom the fact is. The world didn't need to turn out to be that way, just as it didn't need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.

The problem is that we already have two kinds of fundamental facts (and I would argue we need more). Consider Eliezer's use of "magical reality fluid" in this post. If you look at the context, it's clear that he's trying to ask whether the inhabitants of the non-causally simulated universes possess qualia without having to admit he cares about qualia.

Comment author: RobbBB 11 December 2012 02:55:52AM *  2 points [-]

Eliezer thinks we'll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves. Personally, I'm an agnostic about Many Worlds, so I'm even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.

I also don't reify logical constructs, so I don't believe in a bonus category of Abstract Thingies. I'm about as monistic as physicalists come. Mathematical platonists and otherwise non-monistic Serious Scientifically Minded People, I think, do have much better reason to adopt dualism than I do, since the inductive argument against Bonus Fundamental Categories is weak for them.

Comment author: Eugine_Nier 13 December 2012 04:11:14AM *  1 point [-]

Eliezer thinks we'll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves.

I could define the Hard Problem of Reality, which really is just an indirect way of talking about the Hard Problem of Consciousness.

Personally, I'm an agnostic about Many Worlds, so I'm even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.

As Eliezer discusses in the post, Reality Fluid isn't just for Many Worlds; it also relates to questions about simulation.

I also don't reify logical constructs

Here's my argument for why you should.

Comment author: [deleted] 10 December 2012 03:01:30PM 4 points [-]

Daniel Dennett's 'Quining Qualia' (http://ase.tufts.edu/cogstud/papers/quinqual.htm) is taken ('round these parts) to have laid the theory of qualia to rest. Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories, though it's Sellars' "Empiricism and the Philosophy of Mind" (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.

Comment author: ArisKatsaris 10 December 2012 04:07:24PM 6 points [-]

Daniel Dennett's 'Quining Qualia' (http://ase.tufts.edu/cogstud/papers/quinqual.htm) is taken ('round these parts) to have laid the theory of qualia to rest.

I've not actually read this essay (will do so later today), but I disagree that most people here consider the issue of qualia and the "hard problem of consciousness" to be a solved one.

Time for a poll.


Comment author: [deleted] 11 December 2012 03:08:10AM 4 points [-]

I just read 'Quining Qualia'. I do not see it as a solution to the hard problem of consciousness, at all. However, I did find it brilliant - it shifted my intuition from thinking that conscious experience is somehow magical and inexplicable to thinking that it is plausible that conscious experience could, one day, be explained physically. But to stop here would be to give a fake explanation...the problem has not yet been solved.

A triumphant thundering refutation of [qualia], an absolutely unarguable proof that [qualia] cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.

-- Eliezer Yudkowsky, Dissolving the Question

Also, does anyone disagree with anything that Dennett says in the paper, and, if so, what, and why?

Comment author: Peterdjones 11 December 2012 12:42:21PM 2 points [-]

I think I have qualia. I probably don't have qualia as defined by Dennett, as simultaneously ineffable, intrinsic, etc, but there are nonetheless ways things seem to me.

Comment author: Eliezer_Yudkowsky 10 December 2012 11:25:28PM 2 points [-]

I haven't read either of those but will read them. Also I totally think there was a respectable hard problem and can only stare somewhat confused at people who don't realize what the fuss was about. I don't agree with the answer Chalmers offers to his problem, but his attempt to pinpoint exactly what seems so confusing seems very spot-on. I haven't read anything very impressive yet from Dennett on the subject; could be that I'm reading the wrong things. Gary Drescher on the other hand is excellent.

It could be that I'm atypical for LW.

EDIT: Skimmed the Dennett one, didn't see much of anything relatively new there; the Sellars link fails.

Comment author: Karl 11 December 2012 03:52:51AM 3 points [-]

Also I totally think there was a respectable hard problem

So you do have a solution to the problem?

Comment author: RobbBB 11 December 2012 02:38:20AM *  1 point [-]

Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories

Do you have evidence of this? The PhilPapers survey suggests that only 56.5% of philosophers identify as 'physicalists,' and 59% think that zombies are conceivable (though most of these think zombies are nevertheless impossible). It would also help if you explained what you mean by 'the theory of qualia.'

though it's Sellers "Empiricism and the Philosophy of Mind" (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.

Sellars' argument, I think, rests on a few confusions and shaky assumptions. I agree this argument is still extremely widely cited, but I think that serious epistemologists no longer consider it conclusive, and a number reject it outright. Jim Pryor writes:

These anti-Given arguments deserve a re-examination, in light of recent developments in the philosophy of mind. The anti-Given arguments pose a dilemma: either (i) direct apprehension is not a state with propositional content, in which case it's argued to be incapable of providing us with justification for believing any specific proposition; or (ii) direct apprehension is a state with propositional content. This second option is often thought to entail that direct apprehension is a kind of believing, and hence itself would need justification. But it ought nowadays to be very doubtful that the second option does entail such things. These days many philosophers of mind construe perceptual experience as a state with propositional content, even though experience is distinct from, and cannot be reduced to, any kind of belief. Your experiences represent the world to you as being a certain way, and the way they represent the world as being is their propositional content. Now, surely, its looking to you as if the world is a certain way is not a kind of state for which you need any justification. Hence, this construal of perceptual experience seems to block the step from 'has propositional content' to 'needs justification'. Of course, what are 'apprehended' by perceptual experiences are facts about your perceptual environment, rather than facts about your current mental states. But it should at least be clear that the second horn of the anti-Given argument needs more argument than we've seen so far.

Comment author: non-expert 10 February 2013 03:37:36AM *  1 point [-]

if we confess that 'right' lives in a world of physics and logic - because everything lives in a world of physics and logic - then we have to translate 'right' into those terms somehow.

A different perspective I'd like people's thoughts on: is it more accurate to say that everything WE KNOW lives in a world of physics and logic, and thus translating 'right' into those terms is correct assuming right and wrong (fairness, etc.) are defined within the bounds of what we know?

I'm wondering if you would agree that you're making an implicit philosophical argument in your quoted language -- namely, that the knowledge necessary (for right/wrong, or anything else) is within human comprehension; or, to say it differently, that by ignoring philosophical questions (e.g. who am I and what is the world, among others) you are effectively saying those questions and potential answers are irrelevant to the idea of right/wrong.

If you agree, that position, though most definitely reasonable, cannot be proven within the standards set by rational thought. Doesn't the presence of that uncertainty necessitate consideration of it as a possibility, and how do you weigh that uncertainty against the assumption that there is none?

To be clear, this is not a criticism. This is an observation that I think is reasonable, but interested to see how you would respond to it.

Comment author: Irgy 11 December 2012 04:42:39AM 1 point [-]

rightness plays no role in that-which-is-maximized by the blind processes of natural selection

That being the case, what is it about us that makes us care about "rightness" then? What reason do you have for believing that the logical truth of what is right has more influence on human behaviour than it would on any other general intelligence?

Certainly I can agree that there are reasons to worry another intelligence might not care about what's "right", since not every human really cares that much about it either. But it feels like your expected level of caring is "not at all", whereas my expected level of caring is "about as much as we do". Don't get me wrong, the variance in my estimate and the risk involved is still enough to justify the SI and its work. I just wonder about the difference between the two estimates.