Followup to: You Provably Can't Trust Yourself

Yesterday I discussed the difference between:

  • A system that believes—is moved by—any specific chain of deductions from the axioms of Peano Arithmetic.  (PA, Type 1 calculator)
  • A system that believes PA, plus explicitly asserts the general proposition that PA is sound.  (PA+1, the meta-1-calculator that calculates the output of a Type 1 calculator)
  • A system that believes PA, plus explicitly asserts its own soundness.  (Self-PA, Type 2 calculator)

These systems are formally distinct.  PA+1 can prove things that PA cannot.  Self-PA is inconsistent: by Löb's Theorem, a system that asserts its own soundness can prove anything.
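(For readers who want the formalism: writing $\Box P$ for "$P$ is provable in the system", Löb's Theorem says that for any sentence $P$,

$$\text{if } \vdash \Box P \rightarrow P \text{, then } \vdash P.$$

So a system extending PA that asserts its own soundness schema, $\Box P \rightarrow P$ for every sentence $P$, thereby ends up proving every sentence, true or false.)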

With these distinctions in mind, I hope my intent will be clearer, when I say that although I am human and have a human-ish moral framework, I do not think that the fact of acting in a human-ish way licenses anything.

I am a self-renormalizing moral system, but I do not think there is any general license to be a self-renormalizing moral system.

And while we're on the subject, I am an epistemologically incoherent creature, trying to modify his ways of thinking in accordance with his current conclusions; but I do not think that reflective coherence implies correctness.

Let me take these issues in reverse order, starting with the general unlicensure of epistemological reflective coherence. 

If five different people go out and investigate a city, and draw five different street maps, we should expect the maps to be (mostly roughly) consistent with each other.  Accurate maps are necessarily consistent among each other and among themselves, there being only one reality.  But if I sit in my living room with my blinds closed, I can draw up one street map from my imagination and then make four copies: these five maps will be consistent among themselves, but not accurate. Accuracy implies consistency but not the other way around.

In Where Recursive Justification Hits Bottom, I talked about whether it is legitimate reasoning to say "I believe that induction will work on the next occasion, because it's usually worked before", or "I trust Occam's Razor because the simplest explanation for why Occam's Razor often works is that we live in a highly ordered universe".  We did actually formalize the idea of scientific induction, starting from an inductive instinct; we modified our intuitive understanding of Occam's Razor (Maxwell's Equations are in fact simpler than Thor, as an explanation for lightning) based on the simple idea that "the universe runs on equations, not heroic mythology".  So we did not automatically and unthinkingly confirm our assumptions, but rather used our intuitions to correct them—seeking reflective coherence.

But I also remarked:

"And what about trusting reflective coherence in general?  Wouldn't most possible minds, randomly generated and allowed to settle into a state of reflective coherence, be incorrect?  Ah, but we evolved by natural selection; we were not generated randomly."

So you are not, in general, safe if you reflect on yourself and achieve internal coherence.  The Anti-Inductors, who compute that the probability of the coin coming up heads on the next occasion decreases each time they see the coin come up heads, may defend their anti-induction by saying:  "But it's never worked before!"
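(A toy illustration, not part of the argument itself: one way to caricature the two update rules is Laplace's rule of succession versus its mirror image. After each observed head, the inductor's probability of heads on the next toss goes up; the anti-inductor's goes down.)

```python
def inductor_p_heads(heads, tails):
    # Laplace's rule of succession: more observed heads -> higher P(next toss is heads).
    return (heads + 1) / (heads + tails + 2)

def anti_inductor_p_heads(heads, tails):
    # Mirror-image rule: more observed heads -> lower P(next toss is heads).
    # To the anti-inductor, "it's never worked before" counts as a point in its favor.
    return (tails + 1) / (heads + tails + 2)

for h in range(5):  # after seeing h heads and no tails
    print(h, round(inductor_p_heads(h, 0), 3), round(anti_inductor_p_heads(h, 0), 3))
```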

The only reason why our human reflection works, is that we are good enough to make ourselves better—that we had a core instinct of induction, a core instinct of simplicity, that wasn't sophisticated or exactly right, but worked well enough.

A mind that was completely wrong to start with, would have no seed of truth from which to heal itself.  (It can't forget everything and become a mind of pure emptiness that would mysteriously do induction correctly.)

So it's not that reflective coherence is licensed in general, but that it's a good idea if you start out with a core of truth or correctness or good priors.  Ah, but who is deciding whether I possess good priors?  I am!  By reflecting on them!  The inescapability of this strange loop is why a broken mind can't heal itself—because there is no jumping outside of all systems.

I can only plead that, in evolving to perform induction rather than anti-induction, in evolving a flawed but not absolutely wrong instinct for simplicity, I have been blessed with an epistemic gift.

I can only plead that self-renormalization works when I do it, even though it wouldn't work for Anti-Inductors.  I can only plead that when I look over my flawed mind and see a core of useful reasoning, that I am really right, even though a completely broken mind might mistakenly perceive a core of useful truth.

Reflective coherence isn't licensed for all minds.  It works for me, because I started out with an epistemic gift.

It doesn't matter if the Anti-Inductors look over themselves and decide that their anti-induction also constitutes an epistemic gift; they're wrong, I'm right.

And if that sounds philosophically indefensible, I beg you to step back from philosophy, and consider whether what I have just said is really truly true.

(Using your own concepts of induction and simplicity to do so, of course.)

Does this sound a little less indefensible, if I mention that PA trusts only proofs from the PA axioms, not proofs from every possible set of axioms?  To the extent that I trust things like induction and Occam's Razor, then of course I don't trust anti-induction or anti-Occamian priors—they wouldn't start working just because I adopted them.

What I trust isn't a ghostly variable-framework from which I arbitrarily picked one possibility, so that picking any other would have worked as well so long as I renormalized it.  What I trust is induction and Occam's Razor, which is why I use them to think about induction and Occam's Razor.

(Hopefully I have not just licensed myself to trust myself; only licensed being moved by both implicit and explicit appeals to induction and Occam's Razor.  Hopefully this makes me PA+1, not Self-PA.)

So there is no general, epistemological license to be a self-renormalizing factual reasoning system.

The reason my system works is that it started out fairly inductive—not because of the naked meta-fact that it tries to renormalize itself using whatever system it has; only induction counts.  The license—no, the actual usefulness—comes from the inductive-ness, not from mere reflective-ness.  Though I'm an inductor who says so!

And, sort-of similarly, but not exactly analogously:

There is no general moral license to be a self-renormalizing decision system.  Self-consistency in your decision algorithms is not that-which-is-right.

The Pebblesorters place the entire meaning of their lives in assembling correct heaps of pebbles and scattering incorrect ones; they don't know what makes a heap correct or incorrect, but they know it when they see it.  It turns out that prime heaps are correct, but determining primality is not an easy problem for their brains.  Like PA and unlike PA+1, the Pebblesorters are moved by particular and specific arguments tending to show that a heap is correct or incorrect (that is, prime or composite), but they have no explicit notion of "prime heaps are correct" or even "Pebblesorting People can tell which heaps are correct or incorrect". They just know (some) correct heaps when they see them, and can try to figure out the others.
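(To make "correct" concrete: a minimal primality check by trial division, added here purely as an illustrative sketch; the Pebblesorters themselves, of course, cannot state any such test.)

```python
def smallest_factor(n):
    """Return the smallest nontrivial factor of n, or None if n (>= 2) is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

# A heap of 13 pebbles is correct (prime); a heap of 91 is not, since 91 = 7 * 13,
# which is what the "rows of 7 and 13 pebbles at right angles" argument below exhibits.
assert smallest_factor(13) is None
assert smallest_factor(91) == 7
```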

Let us suppose by way of supposition, that when the Pebblesorters are presented with the essence of their decision system—that is, the primality test—they recognize it with a great leap of relief and satisfaction.  We can spin other scenarios—Peano Arithmetic, when presented with itself, does not prove itself correct.  But let's suppose that the Pebblesorters recognize a wonderful method of systematically producing correct pebble heaps.  Or maybe they don't endorse Adleman's test as being the essence of correctness—any more than Peano Arithmetic proves that what PA proves is true—but they do recognize that Adleman's test is a wonderful way of producing correct heaps.

Then the Pebblesorters have a reflectively coherent decision system.

But this does not constitute a disagreement between them and humans about what is right, any more than humans, in scattering a heap of 3 pebbles, are disagreeing with the Pebblesorters about which numbers are prime!

The Pebblesorters are moved by arguments like "Look at this row of 13 pebbles, and this row of 7 pebbles, arranged at right angles to each other; how can you see that, and still say that a heap of 91 pebbles is correct?"

Human beings are moved by arguments like "Hatred leads people to play purely negative-sum games, sacrificing themselves and hurting themselves to make others hurt still more" or "If there is not the threat of retaliation, carried out even when retaliation is profitless, there is no credible deterrent against those who can hurt us greatly for a small benefit to themselves".

This is not a minor difference of flavors.  When you reflect on the kind of arguments involved here, you are likely to conclude that the Pebblesorters really are talking about primality, whereas the humans really are arguing about what's right.  And I agree with this, since I am not a moral relativist.  I don't think that rightness requires any ontologically basic physical attribute of objects; and conversely, I don't think the lack of such a basic attribute is a reason to panic.

I may have contributed to the confusion here by labeling the Pebblesorters' decisions "p-right".  But what they are talking about is not a different brand of "right".  What they're talking about is prime numbers.  There is no general rule that reflectively coherent decision systems are right; the Pebblesorters, in merely happening to implement a reflectively coherent decision system, are not yet talking about morality!

It's been suggested that I should have spoken of "p-right" and "h-right", not "p-right" and "right".

But of course I made a very deliberate decision not to speak of "h-right".  That sounds like there is a general license to be human.

It sounds like being human is the essence of rightness.  It sounds like the justification framework is "this is what humans do" and not "this is what saves lives, makes people happy, gives us control over our own lives, involves us with others and prevents us from collapsing into total self-absorption, keeps life complex and non-repeating and aesthetic and interesting, dot dot dot etcetera etcetera".

It's possible that the above value list, or your equivalent value list, may not sound like a compelling notion unto you.  Perhaps you are only moved to perform particular acts that make people happy—not caring all that much yet about this general, explicit, verbal notion of "making people happy is a value".  Listing out your values may not seem very valuable to you.  (And I'm not even arguing with that judgment, in terms of everyday life; but a Friendly AI researcher has to know the metaethical score, and you may have to judge whether funding a Friendly AI project will make your children happy.)  Which is just to say that you're behaving like PA, not PA+1.

And as for that value framework being valuable because it's human—why, it's just the other way around: humans have received a moral gift, which Pebblesorters lack, in that we started out interested in things like happiness instead of just prime pebble heaps.

Now this is not actually a case of someone reaching in from outside with a gift-wrapped box; any more than the "moral miracle" of blood-soaked natural selection producing Gandhi, is a real miracle.

It is only when you look out from within the perspective of morality, that it seems like a great wonder that natural selection could produce true friendship.  And it is only when you look out from within the perspective of morality, that it seems like a great blessing that there are humans around to colonize the galaxies and do something interesting with them.  From a purely causal perspective, nothing unlawful has happened.

But from a moral perspective, the wonder is that there are these human brains around that happen to want to help each other—a great wonder indeed, since human brains don't define rightness, any more than natural selection defines rightness.

And that's why I object to the term "h-right".  I am not trying to do what's human.  I am not even trying to do what is reflectively coherent for me.  I am trying to do what's right.

It may be that humans argue about what's right, and Pebblesorters do what's prime.  But this doesn't change what's right, and it doesn't make what's right vary from planet to planet, and it doesn't mean that the things we do are right in mere virtue of our deciding on them—any more than Pebblesorters make a heap prime or not prime by deciding that it's "correct".

The Pebblesorters aren't trying to do what's p-prime any more than humans are trying to do what's h-prime.  The Pebblesorters are trying to do what's prime.  And the humans are arguing about, and occasionally even really trying to do, what's right.

The Pebblesorters are not trying to create heaps of the sort that a Pebblesorter would create (note circularity).  The Pebblesorters don't think that Pebblesorting thoughts have a special and supernatural influence on whether heaps are prime.  The Pebblesorters aren't trying to do anything explicitly related to Pebblesorters—just like PA isn't trying to prove anything explicitly related to proof.  PA just talks about numbers; it took a special and additional effort to encode any notions of proof in PA, to make PA talk about itself.

PA doesn't ask explicitly whether a theorem is provable in PA, before accepting it—indeed PA wouldn't care if it did prove that an encoded theorem was provable in PA.  Pebblesorters don't care what's p-prime, just what's prime.  And I don't give a damn about this "h-rightness" stuff: there's no license to be human, and it doesn't justify anything.

 

Part of The Metaethics Sequence

Next post: "Invisible Frameworks"

Previous post: "You Provably Can't Trust Yourself"

54 comments

I now get what you meant by keeping the different 'levels' or layers separate – that notion, or record-keeping, of what's happening, in your proof, in PA versus PA+1. I loved my math courses precisely because of the computational logic.

Your meta-ethics: We're not right, nor will any of our descendants [?] ever be right. But we know some things that ARE right, just as a matter of inevitable ev-bio competence. And we've figured out a few ways to learn new things that are right. We've explored a bit of right, and even a bit of right+1. But we don't want to go near self-right – if we ASSUME we're accurate in our current understanding of right or right+1, we can (falsely) justify anything.

can people actually follow yudkowsky's posts?

Some do ;) I won't say that I fully got every point of each of the posts of that Sequence (which is harder for me to follow than the ones about reductionism or QM) but I did understand many parts of them, and they are very interesting (and I happen to agree with most of it).

Edit: forgot to add: it definitely does do a lot to solve a problem that I was pondering for a long time but unable to fully solve (even if I had found some parts of the answer), which is how to reject the "morality given by a God" without falling into moral relativism.

"I can only plead that when I look over my flawed mind and see a core of useful reasoning, that I am really right, even though a completely broken mind might mistakenly perceive a core of useful truth."

"humans have received a moral gift, which Pebblesorters lack, in that we started out interested in things like happiness instead of just prime pebble heaps. Now this is not actually a case of someone reaching in from outside with a gift-wrapped box... it is only when you look out from within the perspective of morality, that it seems like a great blessing that there are humans around"

Quick question: do you intend the latter deflationary remarks to apply to your 'epistemic gift' too? That is, would you emphasize that your methods of reasoning are merely considered to be a gift from within your own perspective, and there's not any further sense to the notion of 'good priors' or a 'broken mind' or 'useful reasoning' beyond the brute fact that you happen to use these words to refer to these particular epistemic norms? Or do you think there's an important difference between the kind of (moral vs. epistemic) 'mistakes' made respectively by the Pebblesorters and the anti-Inductors?

With these distinctions in mind, I hope my intent will be clearer, when I say that although I am human and have a human-ish moral framework, I do not think that the fact of acting in a human-ish way licenses anything.

hah. I was wondering what this Lob stuff had to do with morality.

good job on an excellent post.

"But of course I made a very deliberate decision not to speak of "h-right". That sounds like there is a general license to be human."

Okay, this is a good point, and a good post.

I still think, however, you're left with the empirical question of how strongly psychological unity applies to moral dynamics: to what extent (if any) different people are just different optimization processes with nothing to argue about.

Eliezer: Good post, as always, I'll repeat that I think you're closer to me in moral philosophy than anyone else I've talked to, with the probable exception of Richard Rorty, from whom I got many of my current views. (You might want to read Contingency, Irony, Solidarity; it's short, and it talks about a lot of the stuff you deal with here). That said, I disagree with you in two places. Reading your stuff and the other comments has helped me refine what I think; I'll try to state it here as clearly as possible.

1) I think that, as most people use the words, you're a moral relativist. I understand why you think you're not. But the way most people use the word 'morality,' it would only apply to an argument that would persuade the ideal philosopher of perfect emptiness. You don't believe any such arguments exist; neither do I. Thus neither of us think that morality as it's commonly understood is a real phenomenon. Think of the priest in War of the Worlds who tried to talk to the aliens, explaining that since we're both rational beings/children of God, we can persuade them not to kill us because it's wrong. You say (as I understand you) that they would agree that it's wrong, and just not care, because wrong isn't necessarily something they care about. I have no problem with any claim you've made (well, that I've made on your behalf) here; but at this point the way you're using the word 'moral' isn't a way most people would use it. So you should use some other term altogether.

2) I like to maintain a clearer focus on the fact that, if you care about what's right, I care about what's right_1, which is very similar to but not the same as what's right. Mainly because it helps me to remember there are some things I'm just not going to convince other people of (e.g. I don't think I could convince the Pope that God doesn't exist. There's no fact pattern that's wholly inconsistent with the property god_exists, and the Pope has that buried deep enough in its priors that I don't think it's possible to root it out). But (as of reading your comment on yesterday's post) I don't think we disagree on the substance, just on the emphasis.

Thanks for an engaging series of posts; as I said, I think you're the closest or second-closest I've ever come across to someone sharing my meta-ethics.

Ah, but who is deciding whether I possess good priors? I am! By reflecting on them! ... I can only plead that ... I have been blessed with an epistemic gift.

Is it only a coincidence that in these posts you have only described yourself as better than hypothetical creatures? Do actual creatures tend to do better than the examples you've given?

Roko:

Eli: I am not trying to do what's human. I am not even trying to do what is reflectively coherent for me. I am trying to do what's right.

This sounds a bit like:

"I am NOT drinking liquid H2O, I am drinking water, goddamit!"

Just as water is, as a matter of fact, H2O, your notion of right is (at least, from what you've told me it seems to be) something like "that which humans would do if they were reflectively coherent".

Also, Echoing Jadagul: as most people use the words, you're a moral relativist

Saying that "right = human" is to deny the idea of moral progress. If what the human thing to do can change over time, and what is right doesn't change, then they can't be the same thing.

Slavery was once very human. I think many of us (though not the relativists) would reject the claim that because it was human, it was also right. It was always wrong, regardless of how common.

If I understand this correctly, it would make the point clearer to distinguish between the implementation of an optimization process and a passive concept of right. Humans are intelligent agents that, among other things, use a concept of right in their cognitive algorithm. It is hard to separate this concept from the process of applying it in the specific way it's done in humans, but this concept is not about the way its implementation is interpreted. You can't appeal to the implementation of humans when talking about this concept: a question that only includes the physical structure of humans (plus adaptive environment) is not sufficient to lead to an answer that describes the concept of right, because you also need to know what to look for in this implementation, how to interpret the information about this concept present in the implementation, and how to locate the interfaces that allow one to read out that information. Talking about right requires interpretation from the inside, and communicating the concept requires building the interpreter based on this inside view.

The full implementation of a human doesn't contain enough information to ask a question about what is right, without interpretation by a human. Given interpretation by a human, the implementation only helps in arriving at a more precise form of the question, in moving towards the answer, by assembling a more coherent form of it against factual knowledge about the process by which the implementation functions.

Yvain:

I was one of the people who suggested the term h-right before. I'm not great with mathematical logic, and I followed the proof only with difficulty, but I think I understand it and I think my objections remain. I think Eliezer has a brilliant theory of morality and that it accords with all my personal beliefs, but I still don't understand where it stops being relativist.

I agree that some human assumptions like induction and Occam's Razor have to be used partly as their own justification. But an ultimate justification of a belief has to include a reason for choosing it out of a belief-space.

For example, after recursive justification hits bottom, I keep Occam and induction because I suspect they reflect the way the universe really works. I can't prove it without using them. But we already know there are some things that are true but can't be proven. I think one of those things is that reality really does work on inductive and Occamian principles. So I can choose these two beliefs out of belief-space by saying they correspond to reality.

Some other starting assumptions ground out differently. Clarence Darrow once said something like "I hate spinach, and I'm glad I hate it, because if I liked it I'd eat it, and I don't want to eat it because I hate it." He was making a mistake somewhere! If his belief is "spinach is bad", it probably grounds out in some evolutionary reason like insufficient energy for the EEA. But that doesn't justify his current statement "spinach is bad". His real reason for saying "spinach is bad" is that he dislikes it. You can only choose "spinach is bad" out of belief-space based on Clarence Darrow's opinions.

One possible definition of "absolute" vs. "relative": a belief is absolutely true if people pick it out of belief-space based on correspondence to reality; if people pick it out of belief-space based on other considerations, it is true relative to those considerations.

"2+2=4" is absolutely true, because it's true in the system PA, and I pick PA out of belief-space because it does better than, say, self-PA would in corresponding to arithmetic in the real world. "Carrots taste bad" is relatively true, because it's true in the system "Yvain's Opinions" and I pick "Yvain's Opinions" out of belief-space only because I'm Yvain.

When Eliezer says X is "right", he means X satisfies a certain complex calculation. That complex calculation is chosen out of all the possible complex-calculations in complex-calculation space because it's the one that matches what humans believe.

This does, technically, create a theory of morality that doesn't explicitly reference humans. Just like intelligent design theory doesn't explicitly reference God or Christianity. But most people believe that intelligent design should be judged as a Christian theory, because being a Christian is the only reason anyone would ever select it out of belief-space. Likewise, Eliezer's system of morality should be judged as a human morality, because being a human is the only reason anyone would ever select it out of belief-space.

That's why I think Eliezer's system is relative. I admit it's not directly relative, in that Eliezer isn't directly picking "Don't murder" out of belief-space every time he wonders about murder, based only on human opinion. But if I understand correctly, he's referring the question to another layer, and then basing that layer on human opinion.

An umpire whose procedure for making tough calls is "Do whatever benefits the Yankees" isn't very fair. A second umpire whose procedure is "Always follow the rules in Rulebook X" and writes in Rulebook X "Do whatever benefits the Yankees" may be following a rulebook, but he is still just as far from objectivity as the last guy was.

I think the second umpire's call is "correct" relative to Rulebook X, but I don't think the call is absolutely correct.

A way to justify Occam and Induction more explicitly is by appealing to natural selection. Take large groups of anti-inductor anti-occamians and occamian inductors, and put them in a partially-hostile environment. The Inductors will last much longer. Now, possibly the quality of maximizing inclusive fitness is somehow based on induction or Occam's Razor, but in a lawful universe it will usually be the case that the inductor wins.

This is a very clear articulation. Thank you.

Yvain, good anecdote about Darrow. I hate spinach too.

Say half the population takes a pill that makes them really, truly believe that murder is right. The way I understand Eliezer's assertion that his morals aren't relative, he'd say 'no, murder is still wrong', and would probably assert the same even if 100% of the population took the pills. The pill-takers would assert, and absolutely believe, that they were right. Not p-right, but right. I'd love to hear proof that the pill-takers are wrong, and that everyone else is right. Not p-right, but right.

Murder is always wrong to the extent that those abstract complex calculations will still come out the same. Is that the extent of the argument? If so, what does it even mean to say 'even if everyone believes murder is right, it's still wrong'? Where does that data exist? Smacks of dualism to me.

"Who does Oser serve?" "Himself. 'The fleet', he says, but the fleet serves Oser, so it's just a short circuit."

This talk about metaethics is trying to justify building castles in the clouds by declaring the foundation to be supported by the roof. It doesn't deal with the fundamental problem at all - it makes it worse.

This talk about metaethics is trying to justify building castles in the clouds by declaring the foundation to be supported by the roof. It doesn't deal with the fundamental problem at all - it makes it worse.

I don't know about that. Think of this regression:

Q: What's the roof held up by? A: The walls.

Q: So, what are the walls held up by? A: The floor.

Q: So what's the floor held up by? A: The foundation.

Q: So what's the foundation held up by? A: Bedrock.

Q: And what's the bedrock held up by? A: The planet.

Q: And what's the planet held up by? A: At that scale, "up" no longer has sufficient meaning for your question to make sense.

That doesn't mean "what's the planet held up by" is a gotcha - it means that at some point we went from a scale that we're familiar with, to a scale that we're not - and in that new scale, the dependency question that we've been asking recursively no longer makes semantic sense.

[anonymous]:

Or, to be very clear about it, once you get to "the planet", the answer is, "planet and bedrock pull all the other things down by their own planetary gravity, so the gravitational center of mass doesn't need to be held up by anything".

Roko:

Yvain: "that's why I think Eliezer's system is relative. I admit it's not directly relative, in that Eliezer isn't directly picking "Don't murder" out of belief-space every time he wonders about murder, based only on human opinion. But if I understand correctly, he's referring the question to another layer, and then basing that layer on human opinion.

An umpire whose procedure for making tough calls is "Do whatever benefits the Yankees" isn't very fair. A second umpire whose procedure is "Always follow the rules in Rulebook X" and writes in Rulebook X "Do whatever benefits the Yankees" may be following a rulebook, but he is still just as far from objectivity as the last guy was.

I think the second umpire's call is "correct" relative to Rulebook X, but I don't think the call is absolutely correct."

I'd agree with this analysis. Well put.

@Roko

Also, Echoing Jadagul: as most people use the words, you're a moral relativist

Honestly I do not understand how you can continue calling Eliezer a relativist when he has persistently claimed that what is right doesn't depend on who's asking and doesn't depend on what anyone thinks is right.

Is anyone who does not believe in universally compelling arguments a relativist?

Is anyone who does not believe that morality is ontologically primitive a relativist?

Is anyone who does not believe that morality admits a concise description a relativist?

Honestly I do not understand how you can continue calling Eliezer a relativist when he has persistently claimed that what is right doesn't depend on who's asking and doesn't depend on what anyone thinks is right.

Which means he isn't an individual-level relativist. He could still be a group-level relativist.

To your first two questions, probably and yes.

Roko:

@Ben Jones: "Say half the population takes a pill that makes them really, truly believe that murder is right. The way I understand Eliezer's assertion that his morals aren't relative, he'd say 'no, murder is still wrong', and would probably assert the same even if 100% of the population took the pills. The pill-takers would assert, and absolutely believe, that they were right. Not p-right, but right. I'd love to hear proof that the pill-takers are wrong, and that everyone else is right. Not p-right, but right."

This comment underlines the fact that Eliezer's use of language is not the standard one. According to Eliezer's usage, it is not possible to opine that murder is right, in the same way that it is not possible to opine that 22 is prime. Eliezer has defined "right" to be a specific constant set of terminal values. Unfortunately, he hasn't ever specified that constant fully, just like if I said "x is the real number 8.1939... etc". [What comes after the dots?]

Asking whether the pill-takers are "right" is, in Eliezer's terminology, like asking if x = 0, where x is simply defined to be the real number 8.1939.. etc: it's false in an obvious way. Nothing is right, except what Eliezer has in mind when he writes lists of values followed by "...", just like no real number is equal to 8.1939... etc, except, um, 8.1939... etc.

This is, of course, not the way most people use the word "right". They use it as the name of a variable to hold the answer to the question "how shall I live?".

Roko: It certainly is possible to opine that 22 is prime. Watch this:

22 is prime!

See, I did it. If you claim murder is right, then you aren't talking about something other than right, you are just making false statements about right.

Asking whether the pill-takers are "right" is, in Eliezer's terminology, like asking if x = 0, where x is simply defined to be the real number 8.1939.. etc: it's false in an obvious way. Nothing is right, except what Eliezer has in mind when he writes lists of values followed by "..."

That number could be given the name 'right', but right then has all other meaning stripped away from it. More importantly, the choice of number seems to be arbitrary.

Constants like e and pi are definable in terms of certain important properties that we discovered through analysis of problems that are often important in real-life situations. What makes right a special number in the same way that e and pi are special?

Roko:

In fact, I have a good analogy for the naming problem that Eliezer has created: suppose we are physicists and we are trying to work out how fast light travels through free space. We take measurements of distance and time between various locations, and decide to denote the average of these measurements by the symbol "c".

Eliezer decides to start using the symbol "c" to denote the real number 3*10^8.

Ben Jones then asks whether c might actually equal 2*10^8.

Others asked whether c might equal 299,792,458. No! Of course it doesn't, says Eliezer: c is defined to be 300,000,000.


"c" --- "right" "300,000,000" --- "the output of the CEV algorithm"/"the average human moral viewpoint"

Some of these (e.g. Roko's) concerns might be clarified in terms of the distinctions between sense, reference, and reference-fixing descriptions. I take it Eliezer wants to use 'right' as a rigid designator to denote some particular set of terminal values, but others have pointed out that this reference fact is fixed by means of a seemingly 'relative' procedure (namely, whatever terminal values he himself happens to hold, on some appropriate [if somewhat mysterious] idealization). There is also some concern that this doesn't match the plain meaning or sense of the term 'right', as everyone else understands it.

Roko:

Eliezer decides to start using the symbol "c" to denote the real number 3*10^8.

No, he has continuously refused to spell out an explicit description of morality, because it admits no concise description. When Eliezer writes a list of values ending with "etcetera" he's saying (in your analogy) "c is 3*10^8, up to one significant digit".

Richard:

but others have pointed out that this reference fact is fixed by means of a seemingly 'relative' procedure

I think you are mixing meta-levels here. The seemingly relative procedure is used to describe morality in blog posts, not to choose what morality is in the first place.

Larry, no, the mix-up is yours. I didn't say anything about morality; I was talking about the word 'right', and the meta-semantic question of how it is that this word refers to rightness (some particular combination of terminal values) rather than, say, p-rightness.

Richard: It seems to me that asking how it is that the word 'right' came to refer to rightness is like asking why 'green' means green, instead of meaning zebra.

The fact is that there is some concept that we've been calling "right", and though we don't exactly know what we mean by it, we're pretty certain it means something, and in some cases we know it when we see it.

It strikes me as unfair to accuse Eliezer of having his own private meaning of "right" that isn't in accordance with the common one, because he hasn't endorsed a criterion or decision procedure for 'right', he hasn't tried to define it, he hasn't made clearly-wrong claims about it like "murder is right", he really hasn't said much of anything about the object-level practical meaning of 'right'. He has mostly just discussed certain meta-level features of the concept, such as the fact that it isn't all-possible-minds-universal, and the idea that one who explicitly thinks "If I think X is right, then X is right" can think that anything is right.

the idea that one who explicitly thinks "If I think X is right, then X is right" can think that anything is right.
That's relativism, right there - the idea that rightness is not only socially determined, but individually socially determined.

It strikes me as unfair to accuse Eliezer of having his own private meaning of "right" that isn't in accordance with the common one
In what way? He has explicitly forwarded the idea that rightness can only be understood in relation to a particular moral code, with his talk of "p-rightness" and "e-rightness" and who knows what else.

That is incompatible with the common meaning of rightness.

Caledonian: That's relativism, right there - the idea that rightness is not only socially determined, but individually socially determined.

What!? That's just not what I said at all.

"asking how is it that the word 'right' came to refer to rightness is like asking why 'green' means green"

Yeah, that's not exactly what I meant. As I see it there are two stages: there's the question of how the symbols 'right' (or 'green') get attached to the concept that they do, and then there's the more interesting question of how this broad sense of the term determines -- in combination with the actual facts -- what the term actually refers to, i.e. what property the concept denotes. So I should have asked how it is that our sense of the concept 'right', as we hold it in our minds, determines what external property is ultimately denoted by the term. (Compare how the concept 'water' ultimately denotes the property of being H2O.) It is this step of Eliezer's account, I think, which looks to some to be suspiciously relativistic, and in conflict with the sense of the term as they understand it. Maybe he's picking out the right property (hard to tell when he's said so little about it, as you say). But the meta-properties, the concept, the procedure by which what we have in mind picks out a particular thing in the world, that just seems all wrong.

We have already escaped moral-relativity; as evidence, I submit our current discussion. No self-optimizing decision procedure (e.g. for right, p-right, ...) can begin with nothing. Each one is a gift existence grants itself.

This talk about metaethics is trying to justify building castles in the clouds by declaring the foundation to be supported by the roof. It doesn't deal with the fundamental problem at all - it makes it worse.

Caledonian, I don't want to speak for Eliezer. But my contention, at least, is that the fundamental problem is insoluble. I claim, not that this particular castle has a solid foundation, but that there exist no solid foundations, and that anywhere you think you've found solid earth there's actually a cloud somewhere beneath it. The fact that you're reacting so strongly makes me think you're interpreting Eliezer as saying what I believe. Similarly,

Why should we care about a moral code that Eliezer has arbitrarily chosen to call right? What relevance does this have to anything?

There's no particular reason we should care about a moral code Eliezer has chosen. You should care about the moral code you have arbitrarily chosen. I claim, and I think Eliezer would too, that there will be a certain amount of overlap because you're both human (just as you both buy into Occam because you're both human). But we couldn't give, say, a pebblesorter any reason to care about Eliezer's moral code.

Larry D'ana: Is anyone who does not believe in universally compelling arguments a relativist?

Is anyone who does not believe that morality is ontologically primitive a relativist?

Yeah, pretty much.

If there are no universally compelling arguments, then there's no universally compelling moral code. Which means that whatever code compels you has to compel relative to who you are; thus it's a relativist position.

Eliezer tries to get around this by saying that he has this code he can state (to some low degree of precision), and everyone can objectively agree on whether or not some action comports with this code. Or at least that perfect Bayesian superintelligences could all agree. (I'm not entirely sold on that, but we'll stipulate). I claim, though, that this isn't the way most people (including most of us) use the words 'morality' and 'right'; I think that if you want your usage to comport with everyone else's, you would have to say that the pebblesorters have 'a' moral code, and that this moral code is "Stack pebbles in heaps whose sizes are prime numbers."

In other words, in general usage a moral code is a system of rules that compels an agent to action (and has a couple other properties I haven't figured out how to describe without self-reference). A moral absolutist claims that there exists such a system of rules that is rightly binding and compelling to all X, where X is usually some set like "all human beings" or "all self-aware agents." (Read e.g. Kant who claimed that the characteristic of a moral rule is that it is categorically binding on all rational minds). But Eliezer and I claim that there are no universally compelling arguments of any sort. Thus in particular there are no universally compelling injunctions to act, and thus no absolute moral code. Instead, the injunction to act that a particular agent finds compelling varies with the identity of the agent; thus 'morality' is relative to the agent. And thus I'm a moral relativist.

Now, it's possible that you could get away with restricting X to "human beings"; if you then claimed that humans had enough in common that the same moral code was compelling to all of them, you could plausibly reclaim moral objectivism. But I think that claim is clearly false; Eliezer seems to have rejected it (or at least refused to defend it) as well. So we don't get even that degree of objectivity; the details of each person's moral code depend on that person, and thus we have a relative standard. This is what has Caledonian's knickers in such a twist.

Kenny: exactly. That's why we're morally relative.

Honestly I do not understand how you can continue calling Eliezer a relativist when he has persistently claimed that what is right doesn't depend on who's asking and doesn't depend on what anyone thinks is right.

Before I say anything else I want you to know that I am not a Communist.

Marx was right about everything he wrote about, but he didn't know everything, I wouldn't say that Marx had all the answers. When the time is ripe the proletariat will inevitably rise up and create a government that will organize the people, it will put everybody to work according to his abilities and give out the results according to the needs, and that will be the best thing that ever happened to anybody. But don't call me a Communist, because I'm not one.

Oh well. Maybe Eliezer is saying something new and it's hard to understand. So we keep mistaking what he's saying for something old that we do understand.

To me he looks like a platonist. Our individual concepts of "right" are imperfect representations of the real true platonic "right" which exists independently of any or all of us.

I am more of a nominalist. I see our concepts as things that get continually re-created. We are born without any concept of "right" and we develop such concepts as we grow up, with the foundations in our families. The degree to which we develop similar concepts of "right" is a triumph for our societies. There's nothing inevitable about it, but there's a value to moral uniformity that goes beyond the particular beliefs.

So for example about "murder". Americans mostly believe that killing is sometimes proper and necessary. Killing in self defense. Policemen must sometimes kill dangerous criminals. It's vitally necessary to kill the enemy in wartime. Etc. We call it "murder" only when it is not justified, so of course we agree that murder is wrong.

We would be better off if we all agreed about when killing is "right". Is it right to kill adulterous spouses? The people they have sex with? Is it right to kill IRS agents? Blasphemers? Four years ago a man I met in a public park threatened to kill me to keep me from voting for Kerry. Was he right? Whatever the rules are about killing, if we all agreed and we knew where we stood, we'd be better off than when we disagree and don't know who to expect will try to kill us.

And that is why in the new society children will be taken from their parents and raised in common dormitories. Because individual families are too diverse, and they don't all raise their children to understand that "from each according to his abilities, and to each according to his needs" is the most basic and important part of morality.

But don't call me a Communist, I already explained that I wasn't a Communist in my first sentence above.

Eliezer's moral theory is Aristotelian, not Platonic. Plato believed that Forms and The Good existed in a separate realm and not in the real world; any triangle you drew was an approximation of The Triangle. Aristotle believed that Forms were generalizations of things that exist in the real world, and had no independent existence. The Triangle is that which is shared among all drawings of triangles; The Dog is that which is shared among all dogs.

Eliezer's moral theory, it seems to me, is that there is Rightness, but it is generalized from the internal sense of rightness that every human has. People may deviate from The Right, and could take murderpills to make everyone believe something which is Wrong is right, but The Right doesn't change; people would just go further out of correspondence with it.

[anonymous]:

Thing being, I don't even see the necessity for a Concept of Right, which can be generalized from real humans. You can rather dissolve the question. What is Right? That which gets us what we value, insofar as we value it, to the greatest degree possible, with a rational reflection to eliminate values whose implementation shows them to be internally contradictory, accounting for the diversity of others around us.

Even if everyone takes "murderpills", everyone wants to kill but nobody to be killed, so the implementation of the Value of Murder is internally contradictory to the degree that the anarchy, chaos and terror of a continuous murder spree would outweigh the value of the killings themselves for the killers -- particularly given that you can never be assured you're not next!

Right arrives at a sustainable long-term balance.

there is Rightness, but it is generalized from the internal sense of rightness that every human has.

...right now. It is not generalized, on this account, from the internal sense of rightness that every human will have in the future (say, after taking murder pills). Neither is it generalized from the internal sense of rightness that every human had in the past, supposing that was different.

You should care about the moral code you have arbitrarily chosen.
No, I shouldn't. Which seems to be the focal point of this endless 'debate'.

If there are no universally compelling arguments, then there's no universally compelling moral code.
The validity of a moral code is not based on whether it's possible to convince arbitrarily-chosen entities of its validity.

@J Thomas – We are born with some theorems of right (in analogy to PA). We are not blank slates. That is our escape hatch from the abyss of self-right (i.e. moral relativity). We have already been granted the gift of a (small) part of right. Again, it is not h-right, but right – just as it is not h-PA, but simply PA.

We are born with some theorems of right (in analogy to PA).

Kenny, I'd be fascinated to learn more about that. I didn't notice it in my children, but then I wouldn't necessarily notice.

When I was a small child people claimed that babies are born with only a fear of falling and a startle reflex for loud noises. I was pretty sure that was wrong, but it wasn't clear to me what we're born with. It takes time to learn to see. I remember when I invented the inverse square law for vision, and understood why things get smaller when they go farther away. It takes time to notice that parents have their own desires that need to be taken into account.

What is it that we're born with? Do you have a quick link maybe?

"You should care about the moral code you have arbitrarily chosen."

No, I shouldn't. Which seems to be the focal point of this endless 'debate'.

Well, you might choose to care about a moral code you have arbitrarily chosen. And it could be argued that if you don't care about it then you haven't "really" chosen it.

I agree with you that there needn't be any platonic absolute morality that says you ought choose a moral code arbitrarily and care about it, or that if you do happen to choose a moral code arbitrarily that you should then care about it.

Well, you might choose to care about a moral code you have arbitrarily chosen.

1) I don't think it's possible to choose in such a way - what I care about is not directly controllable by my conscious awareness. It is sometimes possible for me to set up circumstances so that my emotional responses are slowly directed in one way instead of another, but it's slow and chancy.

2) I assert that caring about arbitrarily-chosen stances is wrong.

I wouldn't describe my position as 'Platonic', but there is a limited degree of similarity. If there are no objective moral realities which we can attempt, however crudely and imperfectly, to model in our understanding, I assert that caring about moral stances is incorrect. That isn't what caring is for - self-referentially being concerned about our positions and inclinations is pointless if it makes no difference what position we choose.

Caledonian, it's possible to care deeply about choices that were made in a seemingly-arbitrary way. For example, a college graduate who takes a job in one of eight cities where he got job offers, might within the year care deeply about that city's baseball team. But if he had taken a different job it would be a completely different baseball team.

You might care about the result of arbitrary choices. I don't say you necessarily will.

It sounds like you're saying it's wrong to care about morals unless they're somehow provably correct? I'm not sure I get your objection. I want to point out that usually when we have a war, most of the people in each country choose sides based on which country they are in. Less than 50% of Americans chose to oppose the current Iraq fiasco before it happened. Imagine that Russia had invaded Iraq with the same pretexts we used, all of which would have worked as well for Russia as they did for us. Russians had more reason than us to fear Iraqi nukes, they didn't want Iraq supporting Chechen terrorists, they thought Saddam was a bad man, etc. Imagine the hell we would have raised about it.... But I contend that well over a hundred million Americans supported the war for no better reason than that they were born in America and so they supported invasions by the US military.

Whether or not there's some higher or plausible morality that says we should not choose our morals at random, still the fact is that most of us do choose our morals at random.

ata:

But this does not constitute a disagreement between them and humans about what is right, any more than humans, in scattering a heap of 3 pebbles, are disagreeing with the Pebblesorters about which numbers are prime!

That is an excellent compression of the arguments against the idea that humans and Pebblesorters are actually disagreeing about anything. (I don't think I found that unclear at any point, but this sentence has a distinctly and pleasantly intuition-pumpy feel.)

general license to be human.

I don't understand what the phrase "a general license to be human" means in the context of these posts. Could someone please clarify?

I think it's meant to designate the idea that "whatever humans happen to prefer", aka "h-right," is in some way privileged.

The intention (I think) is something like "Just because humans prefer X, that's no reason we should attempt to maximize X." Human preferences are not licensed, in the sense of authorized or privileged.

(I agree with that, as far as it goes. Of course, the post seems to go on to say that what is actually licensed is what's right, and it so happens that (at least some) human preferences are right, so (at least some) human preferences happen to be licensed... but they are licensed because they are right, not because they are human. I get off the train sometime before it reaches that station.)

Caveat: Much of the metaethics sequence I either don't understand or disagree with, so I am far from presenting myself as an expert here. I answer the question as much in the hopes of getting corrected by others as anything else. Still, I haven't come up with another interpretation that makes nearly as much sense to me.

But this does not constitute a disagreement between them and humans about what is right, any more than humans, in scattering a heap of 3 pebbles, are disagreeing with the Pebblesorters about which numbers are prime!

Checking my understanding: The idea is that we can't disagree about "rightness" with pebblesorters, even if we both say "right", because the referent of the word is different, so we're not really talking about the same thing. Whereas with other humans the referent overlaps to a (large?) extent, so we can disagree about it to the extent that the referent overlaps.

(and our own map of that referent is both inaccurate and inconsistent between people, which is why there is disagreement about the overlapping portion)

Without having read further than this in the Sequences, I'm going to guess (assign X% probability?) that this comes back in future posts about AI, and that a large part of the FAI problem is "how to ensure the AI contains or relies on an accurate map of the referent of 'right', when we don't have such a map ourselves."

Checking my understanding: The idea is that we can't disagree about "rightness" with pebblesorters, even if we both say "right", because the referent of the word is different, so we're not really talking about the same thing. Whereas with other humans the referent overlaps to a (large?) extent, so we can disagree about it to the extent that the referent overlaps.

Yep

Does this sound a little less indefensible, if I mention that PA trusts only proofs from the PA axioms, not proofs from every possible set of axioms?

This makes me wonder if something interesting might be said about a system that does trust proofs from every possible set of axioms. Or a system that consists of every possible axiom.

It would be inconsistent, obviously, but what else?

[anonymous]:

And as for that value framework being valuable because it's human—why, it's just the other way around: humans have received a moral gift, which Pebblesorters lack, in that we started out interested in things like happiness instead of just prime pebble heaps.

And as for that value framework being p-valuable because it's Pebblesorter—why, it's just the other way around: Pebblesorters have received a p-moral gift, which humans lack, in that they started out interested in prime pebble heaps instead of just things like happiness.

It is only when you look out from within the perspective of morality, that it seems like a great wonder that natural selection could produce true friendship. And it is only when you look out from within the perspective of morality, that it seems like a great blessing that there are humans around to colonize the galaxies and do something interesting with them. From a purely causal perspective, nothing unlawful has happened.

It is only when you look out from within the perspective of p-morality, that it seems like a great wonder that natural selection could produce Pebblesorters. And it is only when you look out from within the perspective of p-morality, that it seems like a great blessing that there are Pebblesorters around to turn the galaxies into pebbles and sort them into prime-numbered heaps. From a purely causal perspective, nothing unlawful has happened.

But from a moral perspective, the wonder is that there are these human brains around that happen to want to help each other—a great wonder indeed, since human brains don't define rightness, any more than natural selection defines rightness.

But from a p-moral perspective, the p-wonder is that there are these Pebblesorter brains around that happen to want to sort pebbles into prime-numbered heaps—a great p-wonder indeed, since Pebblesorter brains don't define primeness, any more than natural selection defines primeness.


This is now in the running for my favorite post of the Sequences.