
Clarity comments on We really need a "cryonics sales pitch" article. - Less Wrong Discussion

Post author: CronoDAS 03 August 2015 10:42PM




Comment author: Dagon 07 August 2015 09:16:04PM 0 points

To what extent should an agent's utility definition extend beyond their own person?

I'm not sure how to evaluate the "should" in the question, but most people I know (including myself) *do* include events they'll never directly perceive in their decisions.

Personally, I recognize that some of my current happiness and motivation is based on imagining potential future events that I think are exceedingly unlikely for me to actually experience. I make decisions based on likely impact on others outside of my perception-cone, such as strangers I'll never meet or interact with, and who may well be figments of the mass-media's imagination.

Whether these un-meetable person-placeholders in my imagined decision-consequence timeline are contemporaneous but physically distant, or removed in time, seems kind of irrelevant.

Comment author: Clarity 09 August 2015 01:52:05AM 0 points

I wonder what this philosophical stance is called?