
Clarity comments on We really need a "cryonics sales pitch" article. - Less Wrong Discussion

10 Post author: CronoDAS 03 August 2015 10:42PM



Comment author: Clarity 07 August 2015 01:21:40PM 0 points [-]

How about arguments from analogy that are broadly evocative?

The underlying question seems to be:

To what extent should an agent's utility definition extend beyond their own person?

A ready example would be effective altruism: should an effective altruist care about bequeathing their fortune after death, given that they won't be around to process the outcome? Intuitively, subcultural conditioning would lead most people to say yes, but what if I turned it around? For example, a suicidal woman may strongly advocate for a right to suicide. Would she be maximising her utility by publishing a note calling on like-minded suicidal people to kill anti-suicide policy-makers and politicians before killing themselves, in order to pressure those politicians to change their stance and raise awareness for dying with dignity? The post-death value-maximising approach should be consistent across both the EA example and the suicide example, I should think.

Comment author: Dagon 07 August 2015 09:16:04PM 0 points [-]

To what extent should an agent's utility definition extend beyond their own person?

I'm not sure how to evaluate "should" in the question, but most people I know (including myself) "do" include events they'll never directly perceive in their decisions.

Personally, I recognize that some of my current happiness and motivation is based on imagining potential future events that I think are exceedingly unlikely for me to actually experience. I make decisions based on likely impact on others outside of my perception-cone, such as strangers I'll never meet or interact with, and who may well be figments of the mass-media's imagination.

Whether these un-meetable person-placeholders in my imagined decision-consequence timeline are contemporaneous but physically removed, or distantly removed in time, seems kind of irrelevant.

Comment author: Clarity 09 August 2015 01:52:05AM 0 points [-]

I wonder what this philosophical stance is called?