Kaj_Sotala comments on An attempt to dissolve subjective expectation and personal identity - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Hmm, looks like I expressed myself badly, as several people seem to have this confusion. I wasn't saying that long-term optimization problems in general would require a sense of identity, just that the specific optimization program that's implemented in our current mental architecture seems to require it.
(Yes, a utilitarian could in principle decide that they want to minimize the amount of suffering in the world and then calculate how best to achieve that without referring to a sense of identity at all... but they'll have a hard time actually taking action based on that calculation, unless they can somehow also motivate their more emotional predictive systems - which are based on a sense of personal identity - to take an interest in pursuing those goals.)