Values (utilities, goals, etc.) are arational. Rationality, on LW or elsewhere, has nothing to say about the "correctness" of terminal values. (Epistemic rationality - the study of how to discover objective truth - is valuable for most actual values, which reference the objective, real world; but it is still only a tool, not necessarily valued for itself.)
Many LW posters and readers share some values, including valuing human life, so we find it productive to discuss them here. But no one can or will tell you that you should or ought to hold that value, or any other - except as an instrumental sub-goal of some value you already have.
Your expression "inherent values" is at best confusing. A value cannot be an attribute of the valued thing alone; it is always an attribute of the pair (valued thing, valuing agent). It doesn't make sense to say a value is "inherent" in just one of those two parts.
Now, if you ask why many people here share this value, the answers are going to be of two kinds. First, why people in general have a high likelihood of holding this value. And second, whether this site tends to filter or select people based on their holding this value, and if so how and why it does that. These are important, deep, interesting questions that may allow for many complex answers, which I'm not going to try to summarize here. (A brief version, however, is that people care more about other people than about paperclips, because other people supply or influence almost all that a person tends to need or want in life, while paperclips give the average person little joy. I doubt that's what you're asking about.)
We're humans, so we maximize human utility. If squirrels were building AIs, their AIs ought to maximize what's best for squirrels.
There's nothing inherently better about people vs paperclips vs squirrels. But since humans are making the AI, we might as well make it prefer people.
That's one element of what started my line of thought. I was imagining situations where I would consider exchanging human lives for non-human things. How many people's lives would be a fair exchange for a pod of bottlenose dolphins? A West Virginia mountaintop? An entire species of snail?
I think what I'm getting at is that there's a difference between human preferences and human preference for other humans. And by "human preferences," I mean my own.