Okay. By saying "If they have failed to grasp that morality is obligatory, have they understood it at all? They might continue caring more about eggnog, of course. That is beside the point... morality means what you should care about, not what you happen to do."
it seems you have not understood the idea. Were there any parts of the post that seemed unclear that you think I might make clearer?
Because the whole point is that to say something is moral = you should do it = it is valued according to the morality equation.
For an Elf to agree something is moral is also to agree that they should do it. When I say they agree it's moral and don't care, that also means they agree they should do it and don't care.
Something being Christmas Spirit-ey = you spiritould do it. Humans might agree that something is Christmas Spirit-ey, and agree that they spiritould do it; they just don't care about what they spiritould do, only about what they should do.
"Moral" is to "Christmas Spirit-ey" what "should" is to (a made-up word like) "spiritould."
"Obligatory" is just a kind of "should." Elves agree that some things are obligatory, and don't care; they care about what's "ochristmastory."
Likewise, to say that today's morality equation is the "best" is to say that today's morality equation is the equation which is most like today's morality equation. Tautology.
Best = most good, and good = valued by the morality equation.
I have no idea what you are talking about. Optimization isn't that vague a word, and I tried to give examples of what I meant by it: the ability to solve problems and design technologies. Dogs and cats can't design technology. Blue and green can't design technology. Call it what you want, but to me that's what intelligence is.
And that's all that really matters about intelligence: its ability to do that. If you gave me a computer program that could solve arbitrary optimization problems, who cares if it can't speak language? Who cares if it isn't an agent? It would be enormously powerful and useful.
Again, this claim doesn't follow from your premise at all. AIs will be programmed to understand language... therefore they won't have goals? What?
Humans definitely have goals. We have messy goals: nothing explicit like maximizing paperclips, but a hodgepodge of goals that evolution selected for, like finding food, getting sex, getting social status, taking care of children, etc. Humans are also more reinforcement learners than pure goal maximizers, but it's the same principle.
What I am saying is that being enormously powerful and useful does not determine the meaning of a word. Yes, something that optimizes can be enormously useful. That doesn't make it intelligent, just like it doesn't make it blue or green. And for the same reason: neither "intelligent" nor "blue" means "optimizing." And your case of evolution proves that; evolution is not intelligent, even though it was enormously useful.
"This claim doesn't follow from your premise at all." Not as a logical deduction, but in the sense that if you pay attention to what I was talking about, you can see that it would be true. For example, precisely because they have general knowledge, human beings can pursue practically any goal, whenever something or someone happens to persuade them that "this is good." AIs will have general knowledge, and therefore they will be open to pursuing almost any goal, in the same way and for the same reasons.