Manfred comments on Life is Good, More Life is Better - Less Wrong Discussion
I suppose you could try the abstract route. What sort of properties would cause a utility-maximizing agent to be okay with dying? What sort of utility function could lead an agent to choose, say, $500 and a 100-year lifespan over immortality? What sort of agent could extract an infinite amount of utility from an infinite life? What sort of agent would get only a finite amount of utility from a finite life?
These problems are a bit tricky.
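One concrete way the "$500 over immortality" preference can arise (a sketch of my own, not something the post spells out): an agent that exponentially discounts future utility assigns only a finite total to even an infinite life, so a modest bonus now can outweigh the entire discounted tail. All the numbers below (discount rate, utility units, the value of the $500) are illustrative assumptions.

```python
def total_utility(u_per_year, years, discount=0.97):
    """Sum of discounted yearly utility: u * (1 - d^years) / (1 - d)."""
    if years == float("inf"):
        # Geometric series converges: infinite life yields finite utility.
        return u_per_year / (1 - discount)
    return u_per_year * (1 - discount**years) / (1 - discount)

u = 1.0      # utility per year of life (hypothetical units)
bonus = 5.0  # assumed up-front utility of the $500

finite_deal = total_utility(u, 100) + bonus      # $500 and 100 years
immortality = total_utility(u, float("inf"))     # infinite life, no bonus

print(finite_deal > immortality)  # with these numbers, the finite deal wins
```

With a 3% discount rate, immortality is worth at most 1/0.03 ≈ 33.3 units, and 100 years already captures about 31.7 of them, so a bonus of 5 tips the choice. An agent with undiscounted (unbounded) utility in lifespan would never take that trade.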
And then, of course, there's the subjective part: which agent are you most like?