Dr_Manhattan comments on Welcome to Less Wrong! (5th thread, March 2013) - Less Wrong

27 Post author: orthonormal 01 April 2013 04:19PM




Comment author: Dr_Manhattan 17 September 2013 09:05:45PM 3 points [-]

Someone who is currently altruistic towards humanity should

Wei, the question here is "would" rather than "should", no? It's quite possible that the altruism I endorse as a part of me is related to my brain's empathy module, much of which might break down if I can no longer relate to other humans. There are of course good fictional examples of this, e.g. Ted Chiang's "Understand" - http://www.infinityplus.co.uk/stories/under.htm - and, ahem, Watchmen's Dr. Manhattan.

Comment author: Eliezer_Yudkowsky 17 September 2013 10:24:39PM 2 points [-]

Logical fallacy: Generalization from fictional evidence.

A high-fidelity upload who was previously altruistic toward humanity would still be altruistic during the first minute after awakening; their environment would not cause this to change unless the same sensory experiences would have caused their previous self to change.

If you start doing code modification, of course, some but not all bets are off.

Comment author: Dr_Manhattan 18 September 2013 02:42:08AM 3 points [-]

Well, I did put a disclaimer by using the standard terminology :) Fiction is good for suggesting possibilities; you cannot derive evidence from it, of course.

I agree on the first-minute point, but I don't see why it's relevant: by the 999999th minute, value drift will have taken over (if altruism is strongly related to empathy). I guess that upon waking up I'd make value preservation my first order of business, but since an upload is still evolution's spaghetti code, it might be a race against time.

Comment author: MugaSofer 20 September 2013 06:42:03PM 0 points [-]

Perhaps the idea is that the sensory experience of no longer falling into the category of "human" would cause the brain to behave in unexpected ways?

I don't find that especially likely, mind, although I suppose long-term there might arise a self-serving "em supremacy" meme.

Comment author: Bugmaster 18 September 2013 03:46:30AM -1 points [-]

their environment would not cause this to change unless the same sensory experiences would have caused their previous self to change.

I don't see why this is necessarily true, unless you treat "altruism toward humanity" as a terminal goal.

When I was a very young child, I greatly valued my brightly colored alphabet blocks; but today, I pretty much ignore them. My mind has developed to the point where I can fully visualize all the interesting permutations of the blocks in my head, should I need to do so for some reason.

Comment author: somervta 18 September 2013 08:30:35AM 1 point [-]

I don't see why this is necessarily true, unless you treat "altruism toward humanity" as a terminal goal.

Well, yes. I think that's the point. I certainly don't value other humans only for the way that they interest me - if that were so, I probably wouldn't care about most of them at all. Humanity is a terminal value to me - or, more generally, the existence and experiences of happy, engaged, thinking sentient beings. Humans qualify, regardless of whether or not uploads exist (and uploads, of course, also qualify).

Comment author: Bugmaster 19 September 2013 10:37:30PM 0 points [-]

How do you know that "the existence and experiences of happy, engaged, thinking sentient beings" is indeed one of your terminal values, and not an instrumental value?

Comment author: Bugmaster 17 September 2013 09:31:49PM 1 point [-]

+1 for linking to Understand; I remembered reading the story long ago, but I forgot the link. Thanks for reminding me!