Elia_G comments on Serious Stories - Less Wrong
That may be the only reason we evolved happiness or pleasure, but we don't have to care what evolution optimized for when designing a utopia. We're allowed to value happiness for its own sake. See Adaptation-Executers, not Fitness-Maximizers.
Worthwhile goals are finite, so it's true that we might run out of them someday and be bored from then on. But that doesn't frighten me too much, because:
1. We're not going to run out of goals as soon as we create an AI that can achieve them for us; we can always tell it to let us solve some things on our own, if that's more fun.
2. The space of worthwhile goals is still ridiculously big. Living a life in which I accomplish literally everything I want to accomplish is good enough for me, even if that life can't be literally infinite.* Plus, I'm somewhat open to the idea of deleting memories/experiences in order to experience the same things again.
3. There are other fun things to do that don't involve achieving goals, and that aren't used up when you do them.
*Actually, I am a little worried about a situation where the stronger and more competent I get, the quicker I run out of life to live... but I'm sure we'll work that out somehow.
I guess technically the real goal is to get as close to perfection as possible. We pretend the goal is "perfection" for ease of communication, and because (as imperfect humans) we can sometimes trick ourselves into achieving more by setting our goals higher than what's actually possible.