byrnema comments on Strong moral realism, meta-ethics and pseudo-questions. - Less Wrong

18 [deleted] 31 January 2010 08:20PM

Comment author: Psy-Kosh 01 February 2010 03:31:30AM 11 points

How about something like this: there's a certain set of semi-abstract criteria that we call 'morality'. And we happen to be the sorts of beings that (for various reasons) care about this morality stuff as opposed to caring about something else. Should we care about morality? Well, what is meant by "should"? It sure seems like that's a term we use simply to point at the same morality criteria/computation. In other words, "should we care about morality?" seems to translate to "is it moral to care about morality?" or "apply the morality function to 'care about morality' and check the output."
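A minimal sketch of that translation in Python, if it helps (the function body and the action strings are purely illustrative placeholders; the real morality computation is unknown and, as noted below, hasn't finished reflecting on itself):

```python
# Toy model: "should we do X?" is read as "apply the morality function
# to X and check the output". The function body is a stand-in --
# only the shape of the question matters here.

def morality(action: str) -> bool:
    """Placeholder for the semi-abstract morality criteria/computation."""
    return action in {"care about morality", "keep promises"}

# "Should we care about morality?" then unpacks to:
print(morality("care about morality"))  # True: it is moral to care about morality
```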

It would also seem that the answer is yes: it is moral to care about morality.

Some other creatures somewhere might care about something other than morality. That's not a disagreement about any facts or theory or anything; it's simply that we care about morality and they may care about something like "maximize paperclip production" or whatever.

But, of course, morality is better than paper-clip-ality. (And, of course, when we say "better", we mean "in terms of those criteria we care about"... i.e., morality again.)

It's not quite circular. We and the paperclipper creatures wouldn't really disagree about anything. They'd say "turning all the matter in the solar system into paperclips is paperclipish", and we'd agree. We'd say "it's more moral not to do so", and they'd agree.

The catch is that they don't give a dingdong about morality, and we don't give a dingdong about paperclipishness. And indeed that does make us better. And if they scanned our minds to see what we mean by "better", they'd agree. But then, the criterion we were referring to by the term "better" is simply not something the paperclippers care about.
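To make the "no factual disagreement" point concrete, here's a hedged sketch; the two functions and their made-up scores are illustrative assumptions, nothing canonical:

```python
# Two evaluative criteria applied to the same action. Both agents can
# compute both functions and get identical outputs -- there's no
# factual disagreement. They differ only in which output they act on.

def moral_goodness(action: str) -> float:
    """Stand-in for the criteria humans point at with 'better'."""
    return 0.0 if action == "turn the solar system into paperclips" else 1.0

def paperclipness(action: str) -> float:
    """Stand-in for the criteria the paperclippers point at."""
    return 1.0 if action == "turn the solar system into paperclips" else 0.0

action = "turn the solar system into paperclips"
print(moral_goodness(action))  # 0.0 -- and the paperclippers agree it scores 0.0 here
print(paperclipness(action))   # 1.0 -- and we agree it scores 1.0 here
# Humans choose by maximizing moral_goodness; clippers by maximizing paperclipness.
```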

"we happen to care about it" is not the justification. It's moral is the justification. It's just that our criteria for valid moral justification is, well... morality. Which is as it should be. etc etc.

Morality seems to be an objective criterion. Actions can be judged good or bad in terms of it. We simply happen to care about morality instead of something else. And this is indeed a good thing.

Comment author: byrnema 01 February 2010 04:02:53AM 9 points

I don't understand two sentences in a row. Not here, not in the meta-ethics sequence, not anywhere you guys talk about morality.

I don't understand why I seem to be cognitively fine on other topics on Less Wrong, but then all of a sudden am Flowers for Algernon here.

I'm not going to comment anymore on this topic; it just so happens meta-morality or meta-ethics isn't something I worry about anyway. But I would like to part with the admonition that I don't see any reason why LW should be separating so many words from their original meanings -- "good", "better", "should", etc. It doesn't seem to be clarifying things even for you guys.

I think that when something is understood -- really understood -- you can write it down in words. If you can't describe an understanding, you don't own it.

Comment author: Psy-Kosh 01 February 2010 04:17:35AM 2 points

Huh? I'm asserting that most people, when they use words like "morality", "should" (in a moral context), "better" (ditto), etc., are pointing at the same thing. That is, we think this sort of thing partly captures what people actually mean by the terms. Now, we don't have full self-knowledge, and our morality algorithm hasn't finished reflecting (that is, hasn't finished reconsidering itself, etc.), so we have uncertainty about what sorts of things are or are not moral... But that's a separate issue.

As far as the rest... I'm pretty sure I understand the basic idea. Anything I can do to help clarify it?

How about this: "morality is objective, and we simply happen to be the sorts of beings that care about morality as opposed to, say, evil psycho alien bots that care about maximizing paperclips instead of morality"

Does that help at all?