
buybuydandavis comments on Rationality Reading Group: Part X: Yudkowsky's Coming of Age - Less Wrong Discussion

5 Post author: Gram_Stone 06 April 2016 11:05PM



Comment author: buybuydandavis 12 April 2016 02:56:14AM 1 point

That Tiny Note of Discord

Maybe some people would prefer an AI do particular things, such as not kill them, even if life is meaningless.

I see two transitions here.

First, instead of talking about what "is" right, it's now about what some people prefer. We're not talking about a disembodied property of rightness, nor about a rightness that people as a type prefer; we're talking about what some people actually prefer. We're thinking about a subset of an actual population of beings and what they do, and we're not assuming that they're all identical in what they do.

The move to trace concepts back to actual concretes is a winner. Values disconnected from Valuers is a loser.

Second, even if life is meaningless by some conception of meaning, life still goes on, and people still have preferences. The problem of fulfilling their preferences remains, even if we decide that there "is no meaning" to life. In fact, the problem is there even if we decide that there is meaning to life, because then the question of how satisfying my preferences relates to this ethereal "meaning" naturally arises.

On the question of meaning, EY was doing a classic "we can't have X without Y, therefore we assume Y," where Y is a meaning to life. But notice that this does not establish that we have Y, or that we even particularly need Y for anything we want, since it doesn't establish that we actually need X either. You may find that neither X nor Y amounts to a coherent concept, and that pesky problem of satisfying preferences remains.

Between Stirner and Korzybski, I think you have a cure for most of the conceptual confusion around morality, and you won't find yourself making early EY's mistakes.