Comment author: DanArmak 12 October 2016 02:55:20PM 3 points [-]

Without commenting on whether this presentation matches the original metaethics sequence (with which I disagree), this summary argument seems both unsupported and unfalsifiable.

  1. No evidence is given for the central claim: that humans can converge, and are converging, toward a true morality we would all agree on if only we understood more true facts.
  2. We're told that people in the past disagreed with us about some moral questions, but that we know more, so we changed our minds: we are right and they were wrong. Yet no direct evidence is given that we are more right. The only ways offered to judge who is right in a disagreement seem to be "the one who knows more relevant facts is more right" or "the one who more honestly and deeply considered the question". Neither appears to be an objectively measurable criterion (to say the least).
  3. The claim that ancients, like Roman soldiers, thought slavery was morally fine because they didn't understand how much slaves suffer is frankly preposterous. Roman soldiers (and poor Roman citizens in general) were often enslaved, and some of them were later freed (or escaped from foreign captivity). Many Romans were freedmen or their descendants - some estimate that by the late Empire, almost all Roman citizens had at least some slave ancestors. And yet somehow these people, who both knew what slavery was like and were often in personal danger of it, did not think it immoral, while white Americans in no danger of enslavement campaigned for abolition.
Comment author: hairyfigment 14 October 2016 11:09:12AM 1 point [-]

I'm getting really sick of this claim that Eliezer says all humans would agree on some morality under extrapolation. That claim is how we get garbage like this. At no point do I recall Eliezer saying psychopaths would definitely become moral under extrapolation. He did speculate about them possibly accepting modification. But the paper linked here repeatedly talks about ways to deal with disagreements which persist under extrapolation:

In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; *where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere*; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted. (emphasis added)

Coherence is not a simple question of a majority vote. Coherence will reflect the balance, concentration, and strength of individual volitions. A minor, muddled preference of 60% of humanity might be countered by a strong, unmuddled preference of 10% of humanity. The variables are quantitative, not qualitative.
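One toy way to read that quantitative claim (my own sketch; the function, variable names, and numbers are invented for illustration, not from the CEV paper): weight each bloc's preference by its population share times its preference strength, so a strong minority can outweigh a muddled majority.

```python
# Toy illustration (not from the CEV paper): strength-weighted aggregation
# in which coherence depends on intensity as well as headcount.
def weighted_support(blocs):
    """blocs: list of (population_share, strength, direction) tuples,
    where direction is +1 (for the option) or -1 (against it)."""
    return sum(share * strength * direction
               for share, strength, direction in blocs)

# 60% of humanity weakly favors an option; 10% strongly opposes it.
blocs = [(0.60, 0.1, +1),   # minor, muddled preference
         (0.10, 0.9, -1)]   # strong, unmuddled preference

# The weighted total is negative: the strong minority prevails,
# even though a simple majority vote would go the other way.
print(weighted_support(blocs))
```

This is only a cartoon of "the variables are quantitative, not qualitative"; the actual paper does not specify any aggregation formula.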

(Naturally, Eugine Nier as "seer" downvoted all of my comments.)

The metaethics sequence does say IMNSHO that most humans' extrapolated volitions (maybe 95%) would converge on a cluster of goals which include moral ones. It furthermore suggests that this would apply to the Romans if we chose the 'right' method of extrapolation, though here my understanding gets hazier. In any case, the preferences that we would loosely call 'moral' today, and that also survive some workable extrapolation, are what I seem to mean by "morality".

One point about the ancient world: the Bhagavad Gita, produced by a warrior culture though seemingly not by the warrior caste, tells a story of the hero Arjuna refusing to fight until his friend Krishna convinces him. Arjuna doesn't change his mind simply because of arguments about duty. In the climax, Krishna assumes his true form as a god of death with infinitely many heads and jaws, saying, 'I will eat all of these people regardless of what you do. The only deed you can truly accomplish is to follow your warrior duty or dharma.' This view seems plainly environment-dependent.

Comment author: CCC 14 October 2016 10:30:22AM 0 points [-]

What they did was clearly wrong... but, at the same time, they did not know it, and that has relevance.

Consider: you are given a device with a single button. You push the button and a hamburger appears. This is repeatable; every time you push the button, a hamburger appears. To the best of your knowledge, this is the only effect of pushing the button. Pushing the button therefore does not make you an immoral person; pushing the button several times to produce enough hamburgers to feed the hungry would, in fact, be the action of a moral person.

The above paragraph holds even if the device also causes lightning to strike a different person in China every time you press the button. (Although in that case, creating the device was presumably an immoral act.)

So, back to the babyeaters; some of their actions were immoral, but they themselves were not immoral, due to their ignorance.

Comment author: hairyfigment 14 October 2016 10:37:41AM 2 points [-]

Clearly I should have asked about actions rather than people. But the Babyeaters were not ignorant that they were causing great pain and emotional distress. They may not have known how long it continued, but none of the human characters, IIRC, suggested this information might change their minds; those aliens had a genetic tendency toward non-human preferences, and the (working) society they built strongly reinforced it.

Comment author: MrMind 14 October 2016 09:13:26AM 0 points [-]

That's the basic, some say the only, mystery of MWI: why does the world operate according to subjective probability?
You'll find this question posed in a few places in the Sequences.

In response to comment by MrMind on Quantum Bayesianism
Comment author: hairyfigment 14 October 2016 10:15:08AM 0 points [-]

No, that is not the question I asked. The question I asked was what the god-damned imaginary numbers mean, if they aren't describing reality. Because they don't look like subjective probability.

Comment author: TheAncientGeek 13 October 2016 01:32:08PM *  1 point [-]

I see morality as fundamentally a way of dealing with conflicts between values/goals, so I can't answer questions posed in terms of "our values": I don't know whether that means a set of identical values, a set of non-identical but non-conflicting values, or a set of conflicting values. One implication of that view is that some values/goals are automatically morally irrelevant, since they can be satisfied without potential conflict. Another is that my view approximates "morality is society's rules", but without the dismissive implication: if a society has gone through a process of formulating rules that are effective at reducing conflict, then there is a non-vacuous sense in which that society's morality is its rules. Also, AI and alien morality are perfectly feasible, and possibly even necessary.

Comment author: hairyfigment 13 October 2016 08:02:28PM 0 points [-]

Were the Babyeaters immoral before meeting humans?

If not, what would you like to call the thing we actually care about?

Comment author: ChristianKl 11 October 2016 01:35:37PM 0 points [-]

Why do you think that Newton's proposal of his method of science had something to do with a desire for a secular ruler?

Comment author: hairyfigment 11 October 2016 10:40:34PM 0 points [-]

Why do you think Newton's focus on new observations/experiments came from Cartesian ontology, when Newton doesn't wholly buy that ontology?

I'm saying the popes inadvertently created a separate concept of secular aspirations - often opposed to religious authorities, though not to God if he turns out to exist. This "imperial role" business is arguably a rival form of the idea, though Newton did in fact work for the Crown.

In response to Quantum Bayesianism
Comment author: TheAncientGeek 09 October 2016 08:25:34PM 1 point [-]

Q: Quantum Bayesianism isn't the LessWrong official preferred interpretation of QM because...?

Comment author: hairyfigment 11 October 2016 02:40:59AM *  0 points [-]

Eliezer and E.T. Jaynes strongly urge seeing probabilities as subjective degrees of certainty that follow fixed laws (an extension of logic). If QBism is supposed to be compatible with this view - and yet not a form of MWI - then where do the complex numbers come from? Do they represent the map or the territory?

Comment author: ChristianKl 10 October 2016 01:55:57PM 0 points [-]
Comment author: hairyfigment 11 October 2016 02:03:56AM 0 points [-]

Even there, someone points out that Bacon wasn't big on math. I'll grant you I should give him more credit for a sensible conclusion on heat, and for encouraging experiments.

Comment author: ChristianKl 10 October 2016 09:17:40AM 1 point [-]

thereby creating a clearer distinction between religious and secular.

Given that Newton was a person who cared deeply about religion, that would be a bad example. He spent a lot of time on biblical chronology.

You claimed that science wouldn't have been invented at the time without Newton. It's historically no accident that Leibniz discovered calculus independently of Newton. The interest in numerical reasoning was already there.

To get back to the claim, following the scientific method and explicitly writing it down are two different activities. It takes time to move from the implicit to the explicit.

Comment author: hairyfigment 11 October 2016 01:51:03AM 0 points [-]

But Newton didn't propose a religious method for science, which is my point. Did you think I meant that the popes turned Dante atheist? What they did was give him a desire for a secular ruler and an "almost messianic sense of the imperial role".

That sort of thinking may have given rise to Descartes' science fiction, so to speak - secular aspirations which go beyond even a New Order of the Ages. So there are a few possible prerequisites for a scientific method. As for someone else writing one down, maybe; what we observe is that the best early formulation came from a brilliant freak.

Comment author: username2 09 October 2016 09:00:43PM 0 points [-]

So, this criticism is implying that simply understanding human speech will (at a minimum) let the AI understand moral philosophy, which is not trivial.

I don't disagree with the other stuff you said. But I interpreted the criticism as "an AI told to 'do what humans want, not what they mean'" will have approximately the same effect as if you told a perfectly rational human being to do the same. So in the same way that I can instruct people with some success to "do what I mean", the same will work for AI too. It's just also true that this isn't a solution to FAI any more than it is with humans -- because morality is inconsistent, human beings are inherently unfriendly, etc...

Comment author: hairyfigment 10 October 2016 01:46:54AM 0 points [-]

I think you're eliding the question of motive (which may be more alien for an AI). But I'm glad we agree on the main point.

Comment author: ChristianKl 09 October 2016 10:20:18AM 0 points [-]

The main point is that if you buy the philosophical commitments of Descartes, the hypothetico-deductive method is a straightforward conclusion. Newton might have expressed the method more clearly, but various people moved in that direction once Descartes successfully argued against the old way.

Comment author: hairyfigment 10 October 2016 01:39:19AM 0 points [-]

Possibly, but I wouldn't say the popes started science by being terrible rulers, thereby creating a clearer distinction between religious and secular.
