
TimS comments on Less Wrong views on morality? - Less Wrong Discussion

1 Post author: hankx7787 05 July 2012 05:04PM




Comment author: TimS 05 July 2012 05:15:39PM 0 points

I suspect the OP is asking whether you are a moral realist or anti-realist.

Comment author: cousin_it 05 July 2012 05:16:53PM *  6 points

Okay, is a paperclip maximizer a moral realist?

Comment author: Jack 05 July 2012 05:38:00PM *  10 points

I see no reason to think a paperclip maximizer would need to have any particular meta-ethics. There are possible paperclip maximizers that are moral realists and ones that aren't. As a rule of thumb, an agent's normative ethics (that is, what it cares about, be it human flourishing or paperclips) does not logically constrain its meta-ethical views.

Comment author: cousin_it 05 July 2012 06:19:10PM *  2 points

That's a nice and unexpected answer, so I'll continue asking questions I have no clue about :-)

If metaethics doesn't influence paperclip maximization, then why do I need metaethics? Can we point out the precise difference between humans and paperclippers that gives humans the need for metaethics? Is it the fact that we're not logically omniscient about our own minds, or is it something deeper?

Comment author: Jack 05 July 2012 06:59:50PM 3 points

Perhaps I misunderstood. There are definitely possible scenarios in which metaethics could matter to a paperclip maximizer. It's just that answering "what meta-ethics would the best paperclip maximizer have?" isn't any easier than answering "what is the ideal metaethics?". Varying an agent's goal structure doesn't change the question.

That said, if you think humans are just like paperclip maximizers except that they're trying to maximize something else, then you're already 8/10ths of the way to moral anti-realism. (Come! Take those last two steps; the water is fine!)

Of course it's also the case that meta-ethics probably matters more to humans than paperclip maximizers: In particular, metaethics matters for humans because of individual moral uncertainty, group and individual moral change, differences between individual moralities, and the overall complexity of our values. There are probably similar possible issues for paperclip maximizers-- like how they should resolve uncertainty over what counts as a paperclip, or deal with agents that are ignorant of the ultimate value of paperclips-- and thinking about them pumps my anti-realist intuitions.

Comment author: Manfred 05 July 2012 09:47:20PM 1 point

Is it the fact that we're not logically omniscient about our own minds, or is it something deeper?

Well, there's certainly that. Also, human algorithms for decision-making can feel different from simply looking up a utility - the algorithm can be something more like a "treasure map" for locating morality, looking out at the world in a way that can feel as if morality were a light shining from outside.

Comment author: TimS 05 July 2012 07:18:18PM 0 points

Consider dealings with agents whose morals conflict with your own. Obviously, major value conflicts preclude co-existence. Let's assume it is a minor conflict - Bob believes that consuming cow's milk and beef at the same meal is immoral.

It is possible to develop instrumental or terminal values to resolve how much you tolerate Bob's different value - without reference to any meta-ethical theory. But I think that meta-ethical considerations play a large role in how tolerance of value conflict is resolved - for some people, at least.

Comment author: Vladimir_Nesov 05 July 2012 09:18:36PM *  1 point

Obviously, major value conflicts preclude co-existence.

Not obvious. (How does this "preclusion" work? Is it the best decision available to both agents?)

Comment author: TimS 05 July 2012 11:26:11PM 0 points

Well, if I don't include that sentence, someone nitpicks by saying:

How does one tolerate Hitler McHitler the murdering child-molester?

I was trying to preempt that by making it clear that McH gets imprisoned or killed, even by moral anti-realists (unless they are exceptionally stupid).

Comment author: hankx7787 05 July 2012 05:25:22PM *  -2 points

I would certainly say a paperclip maximizer's morality falls in the former camp (objective, able to be scientifically investigated, real), although I'm not intimately familiar with the realist/anti-realist terminology.

Comment author: TimS 05 July 2012 05:49:10PM 1 point

Hank, why would Clippy believe that maximizing paperclips is based on something external to its own mind? Clippy could just as easily be programmed to desire staples, and Clippy is probably intelligent enough to know that.

That said, I think Jack's general point about the relationship between ethics and meta-ethics is probably right.

Comment author: hankx7787 05 July 2012 06:01:51PM *  -1 points

Presumably Clippy has a hard-coded utility function sitting in his source code somewhere. It's a real set of 0s and 1s sitting on a disk somewhere, and we could open the source file and investigate the code.

Clippy's value system is a specific, objective, pre-programmed utility function that's inherent in his design and independent of, and prior to, any knowledge or other cognitive content Clippy eventually gains or invents.
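To make that concrete, here is a minimal sketch (hypothetical names, not anyone's actual code) of what such a hard-coded utility function could look like - a definite artifact sitting in a source file that anyone could open and inspect:

```python
# Hypothetical sketch of a hard-coded utility function.
# The point: it's a concrete piece of code, inspectable like any other data.

def clippy_utility(world_state: dict) -> int:
    """Utility = number of paperclips in the world, and nothing else."""
    return world_state.get("paperclips", 0)

# Change one constant and you have a different design ("Stapley"):
def stapley_utility(world_state: dict) -> int:
    """Same architecture, different terminal value."""
    return world_state.get("staples", 0)

world = {"paperclips": 3, "staples": 100}
print(clippy_utility(world))   # 3
print(stapley_utility(world))  # 100
```

Nothing in the function refers to Clippy's beliefs or later-acquired knowledge; the valuation is fixed in the design itself.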

And yes, it could have been easily changed (until such a time as Clippy is all grown up and would prevent such change) to make him desire staples. But then that's a different design and we'd probably better call him Stapley at that point.

Comment author: TimS 05 July 2012 07:04:36PM 1 point

Ideally, one would like objective facts to be universally compelling. If Clippy shows its source code to me, or to another AGI, neither of us would update in favor of believing that paper-clip maximizing is an appropriate terminal value.

Comment author: hankx7787 05 July 2012 11:44:02PM *  -2 points

Ah, no, I don't mean "objective morality" in the sense of something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of Good or any such nonsense. I just mean what I said above, something in reality that's mind-independent and can be investigated scientifically - a definite "is" from which we can make true "ought" statements relative to that "is".

See drethelin's comment below.

Comment author: TimS 05 July 2012 11:50:53PM *  2 points

Hank, I definitely don't think there's any activity that (1) can reasonably be labeled "scientific investigation" and (2) can solve the is-ought divide.

Comment author: hankx7787 06 July 2012 12:12:43AM 0 points

I didn't think you would :) I'm curious about the consensus on LW, though. But incidentally, what do you think of Thou Art Godshatter?

Comment author: TimS 06 July 2012 01:16:43AM 0 points

First, that essay is aimed primarily at those who think dualism is required in order to talk about morality at all - obviously that's not the discussion we are having.

Second, the issue is not whether there are (1) universal (2) morally relevant (3) human preferences that (4) have been created by evolution. The answer to that question is yes (e.g. hunger, sexual desire). But that alone does not show that there is a universal way for humans to resolve moral dilemmas.

If we study and quantify the godshatter to the point that we can precisely describe "human nature," we have no advance guarantee that appeals to human nature will resolve every moral dilemma. And if reference to human nature doesn't, then evolved preferences don't establish moral realism.

Comment author: buybuydandavis 06 July 2012 12:48:09AM 2 points

something in reality that's mind-independent and can be investigated scientifically

Clippy's code is his mind.

Comment author: Jack 06 July 2012 12:26:53AM 2 points

No. The physical instantiation of a utility function is not an argument for moral realism. On the complete contrary, defining moral actions as "whatever an agent's utility function says" is straightforward, definitional, no-bones-about-it moral subjectivism.

Put it this way: the paperclip maximizer is not going to approve of your behavior.