
TheAncientGeek comments on Why didn't people (apparently?) understand the metaethics sequence? - Less Wrong Discussion

12 Post author: ChrisHallquist 29 October 2013 11:04PM


Comment author: TheAncientGeek 04 November 2013 05:02:47PM *  1 point [-]

I find moral realism meaningful for each individual (you can evaluate choices according to my values applied with infinite information and infinite resources to think),

I can see how that could be implemented. However, I don't see how it would count as morality. It amounts to Anything Goes, or Do What Thou Wilt. I don't see how a world in which that kind of "moral realism" holds would differ from one where moral subjectivism holds, or nihilism for that matter.

but I don't find it meaningful when applied to groups of people, all with their own values.

Where "meaningful" means implementable? Moral realism is not many things, and one of the things it is not is the claim that everyone gets to keep all their values and behaviour unaltered.

Comment author: [deleted] 10 November 2013 02:42:01PM *  0 points [-]

Not "anything goes, do what you will", so much as "all X go, X is such that we want X before we do it, we value doing X while we are doing it, and we retrospectively approve of X after doing it".

We humans have future-focused, hypothetical-focused, present-focused, and past-focused motivations that don't always agree. CEV (and, to a great extent, moral rationality as a broader field) is about finding moral reasoning strategies and taking actions such that all those motivational systems will agree that we Did a Good Job.

That said, being able to demonstrate that the set of Coherently Extrapolated Volitions exists is not a construction showing how to find members of that set.

Comment author: TheAncientGeek 11 November 2013 06:00:06PM 3 points [-]

Not "anything goes, do what you will", so much as "all X go, X is such that we want X before we do it, we value doing X while we are doing it, and we retrospectively approve of X after doing it".

As with a number of previous responses, that is ambiguous between the individual and the collective. If I could get some utility by killing you, then should I kill you? If the "we" above is interpreted individually, I should: if it is interpreted collectively, I shouldn't.

Comment author: [deleted] 12 November 2013 10:14:07AM 1 point [-]

Yes, that is generally considered the core open problem of ethics, once you get past things like "how do we define value" and so on. How do I weigh one person's utility against another person's? Unless it's been solved and nobody told me, that's a Big Question.

Comment author: TheAncientGeek 12 November 2013 01:43:15PM *  2 points [-]

So... what's the point of CEV, then?

Comment author: [deleted] 12 November 2013 07:28:13PM *  1 point [-]

It's a hell of a lot better than nothing, and it's entirely possible to solve those individual-weighting problems, possibly by looking at the social graph and at how humans affect each other. There ought to be some treatment of the issue that yields a reasonable collective outcome without totally suppressing or overriding individual volitions.

Certainly, the first thing that comes to mind is that some human interactions are positive sum, some negative sum, some zero-sum. If you configure collective volition to always prefer mutually positive-sum outcomes over zero-sum over negative, then it's possible to start looking for (or creating) situations where sinister choices don't have to be made.

Comment author: TheAncientGeek 12 November 2013 09:28:55PM 0 points [-]

Who said the alternative is nothing? There's any number of theories of morality, and a further number of theories of Friendly AI.

Comment author: buybuydandavis 05 November 2013 12:21:18AM 0 points [-]

However, I don't see how that would count as morality.

See my previous comment on "Real Magic": http://lesswrong.com/lw/tv/excluding_the_supernatural/79ng

If you choose not to count the actual moralities that people have as morality, that's up to you.