RobinHanson comments on Reflections on Pre-Rationality - Less Wrong
If you owned a slave and could do so cheaply, you'd want to mold it to share exactly your preferences. But should you treat your future selves as your slaves?
Upon further reflection, I think altruism towards one's future selves can't justify having different preferences, because there should be a set of compromise preferences such that both your current self and your future selves are better off if you bind yourself (both current and future) to that set.
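The compromise argument above can be made concrete with a toy model. This is a hypothetical illustration (the two-period payoffs are invented for this sketch, not drawn from the discussion): two selves each control one decision, and binding both to shared compromise preferences leaves each strictly better off than each acting on its own preferences.

```python
# Toy model of the "compromise preferences" argument: the current self
# decides in period 1, the future self in period 2. Each decision pays
# (chooser_payoff, other_payoff). Acting on divergent preferences, each
# self picks "selfish"; a binding compromise has both pick "coop".
ACTIONS = {
    "selfish": (3, 0),  # best for the deciding self, nothing for the other
    "coop":    (2, 2),  # slightly worse for the decider, good for both
}

def totals(period1_action, period2_action):
    """Return (current_self_total, future_self_total) across both periods."""
    c1, f1 = ACTIONS[period1_action]   # current self decides in period 1
    f2, c2 = ACTIONS[period2_action]   # future self decides in period 2
    return c1 + c2, f1 + f2

unbound = totals("selfish", "selfish")  # each self follows its own preferences
bound = totals("coop", "coop")          # both bound to the compromise

print("unbound:", unbound, "bound:", bound)
# Both selves prefer the binding compromise: a Pareto improvement.
assert bound[0] > unbound[0] and bound[1] > unbound[1]
```

With these numbers, unbound play yields (3, 3) and the compromise yields (4, 4), so neither self has grounds to object to the binding, which is the sense in which altruism toward future selves favors a compromise over simply imposing one's current preferences.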
The logical structure of this argument is flawed. Here's another argument that shares the same structure, but is clearly wrong:
Here's another version that makes more sense:
One answer here might be that changing your friend's preferences is wrong because it hurts him according to his current preferences, while doing the same to your future selves isn't wrong because they don't exist yet. But I think Robin's moral philosophy says that we should respect the preferences of nonexistent people, so this answer seems inconsistent with his position.
This seems like the well-worn discussion of whether rational agents should ever be expected to change their preferences. Here's Omohundro on the topic:
"Their utility function will be precious to these systems. It encapsulates their values and any changes to it would be disastrous to them. If a malicious external agent were able to make modifications, their future selves would forevermore act in ways contrary to their current values. This could be a fate worse than death! Imagine a book loving agent whose utility function was changed by an arsonist to cause the agent to enjoy burning books. Its future self not only wouldn’t work to collect and preserve books, but would actively go about destroying them. This kind of outcome has such a negative utility that systems will go to great lengths to protect their utility functions."
He goes on to discuss the issue in detail and lists some exceptional cases.