
Yvain2 comments on No License To Be Human - Less Wrong

Post author: Eliezer_Yudkowsky 20 August 2008 11:18PM


Comment author: Yvain2 21 August 2008 12:22:45PM 14 points

I was one of the people who suggested the term h-right before. I'm not great with mathematical logic, and I followed the proof only with difficulty, but I think I understand it and I think my objections remain. I think Eliezer has a brilliant theory of morality and that it accords with all my personal beliefs, but I still don't understand where it stops being relativist.

I agree that some human assumptions like induction and Occam's Razor have to be used partly as their own justification. But an ultimate justification of a belief has to include a reason for choosing it out of a belief-space.

For example, after recursive justification hits bottom, I keep Occam and induction because I suspect they reflect the way the universe really works. I can't prove it without using them. But we already know there are some things that are true but can't be proven. I think one of those things is that reality really does work on inductive and Occamian principles. So I can choose these two beliefs out of belief-space by saying they correspond to reality.

Some other starting assumptions ground out differently. Clarence Darrow once said something like "I hate spinach, and I'm glad I hate it, because if I liked it I'd eat it, and I don't want to eat it because I hate it." He was making a mistake somewhere! If his belief is "spinach is bad", it probably grounds out in some evolutionary reason, like spinach offering insufficient energy in the EEA (the environment of evolutionary adaptedness). But that doesn't justify his current statement "spinach is bad". His real reason for saying "spinach is bad" is that he dislikes it. You can only choose "spinach is bad" out of belief-space based on Clarence Darrow's opinions.

One possible definition of "absolute" vs. "relative": a belief is absolutely true if people pick it out of belief-space based on correspondence to reality; if people pick it out of belief-space based on other considerations, it is true relative to those considerations.

"2+2=4" is absolutely true, because it's true in the system PA, and I pick PA out of belief-space because it does better than, say, self-PA would in corresponding to arithmetic in the real world. "Carrots taste bad" is relatively true, because it's true in the system "Yvain's Opinions" and I pick "Yvain's Opinions" out of belief-space only because I'm Yvain.

When Eliezer says X is "right", he means X satisfies a certain complex calculation. That complex calculation is chosen out of all the possible complex calculations in complex-calculation space because it's the one that matches what humans believe.

This does, technically, create a theory of morality that doesn't explicitly reference humans. Just like intelligent design theory doesn't explicitly reference God or Christianity. But most people believe that intelligent design should be judged as a Christian theory, because being a Christian is the only reason anyone would ever select it out of belief-space. Likewise, Eliezer's system of morality should be judged as a human morality, because being a human is the only reason anyone would ever select it out of belief-space.

That's why I think Eliezer's system is relative. I admit it's not directly relative, in that Eliezer isn't directly picking "Don't murder" out of belief-space every time he wonders about murder, based only on human opinion. But if I understand correctly, he's referring the question to another layer, and then basing that layer on human opinion.

An umpire whose procedure for making tough calls is "Do whatever benefits the Yankees" isn't very fair. A second umpire whose procedure is "Always follow the rules in Rulebook X" and writes in Rulebook X "Do whatever benefits the Yankees" may be following a rulebook, but he is still just as far from objectivity as the last guy was.

I think the second umpire's call is "correct" relative to Rulebook X, but I don't think the call is absolutely correct.

Comment author: VAuroch 28 November 2013 09:23:07PM 1 point

A way to justify Occam and induction more explicitly is by appealing to natural selection. Take a large group of anti-Occamian anti-inductors and a large group of Occamian inductors, and put them in a partially-hostile environment. The inductors will last much longer. Now, possibly the quality of maximizing inclusive fitness is itself somehow based on induction or Occam's Razor, but in a lawful universe it will usually be the case that the inductors win.
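This argument can be illustrated with a toy simulation (a sketch of my own, not anything from the post: the environment rule, the energy payoffs, and both agent strategies are invented for illustration). In a lawful world where the same observation keeps recurring, an agent that expects the frequent thing outlasts an agent that bets on the rare thing:

```python
import random

def run_agent(predict, steps=1000, energy=50, seed=0):
    """Run one agent in a lawful environment: observation 1 occurs
    with probability 0.9 each step. A correct prediction gains 1
    energy, a wrong one loses 2; the agent dies at zero energy."""
    rng = random.Random(seed)
    history = []
    for t in range(steps):
        obs = 1 if rng.random() < 0.9 else 0
        guess = predict(history)
        energy += 1 if guess == obs else -2
        history.append(obs)
        if energy <= 0:
            return t  # step at which the agent died
    return steps  # survived the whole run

def inductor(history):
    # Inductive bet: expect the more frequent past observation to recur.
    if not history:
        return 1
    return 1 if sum(history) * 2 >= len(history) else 0

def anti_inductor(history):
    # Anti-inductive bet: the common thing is "due" to stop happening.
    return 1 - inductor(history)

print(run_agent(inductor))       # survives far longer...
print(run_agent(anti_inductor))  # ...than the anti-inductor
```

The payoff numbers are arbitrary; the point is only that in any environment with stable regularities, the inductive strategy has positive expected energy per step and the anti-inductive one negative, so selection filters out the anti-inductors regardless of the details.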