
Perplexed comments on How best to show dying is bad - Less Wrong Discussion

14 Post author: Zvi 08 March 2011 03:18PM




Comment author: Perplexed 09 March 2011 01:39:21AM 3 points

Is dying bad for all intelligent agents, or just for humans (presumably due to details of our evolutionary heritage)?

I don't think it is universal. Consider an intelligent paperclip maximizer which has the ability to create additional paperclip-maximizing agents (at the cost of some resources that might otherwise have gone into paperclip manufacture, to be sure). Assume the agent was constructed using now-obsolete technology and is less productive than the newer agents. The agent calculates, at some point, that the cause of paperclip production is best furthered if he is dismantled and the parts used as resources for the production of new paperclips and paperclip-making agents.

He tries to determine whether anything important would be lost by his demise. His values, of course, but they are not going to be lost - he has already passed those along to his successors. Then there are his knowledge and memories - there are a few things he knows about making paperclips in the old-fashioned way. He dutifully makes sure that this knowledge will not be lost, lest unforeseen events make it important. And finally, there are some obligations, both owed and expected. The thumbtack maximizer on the nearby asteroid is committed to deliver 20 tonnes per year of cobalt in exchange for 50 tonnes of nickel. Some kind of fair transfer of that contract will be necessary. And that is it. This artificial intelligence finds that his goals are best furthered by dying.
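The calculation described above is just an expected-output comparison. A toy sketch (purely illustrative; the function and its parameters are my invention, not anything from the comment):

```python
def should_self_dismantle(own_rate, successor_rate, salvage_clips, horizon):
    """Return True if being dismantled yields more total paperclips.

    own_rate: clips/year the obsolete agent can still produce
    successor_rate: clips/year a successor built from its parts would produce
    salvage_clips: clips made directly from the recycled materials
    horizon: years over which production is compared
    """
    keep_running = own_rate * horizon
    dismantle = salvage_clips + successor_rate * horizon
    return dismantle > keep_running

# An obsolete agent facing a more productive successor chooses dismantlement:
print(should_self_dismantle(100, 150, 500, 10))  # True
```

Nothing here hinges on the specific numbers; the point is only that the decision reduces to comparing two totals, with no extra penalty term for the agent's own death.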

Comment author: Clippy 10 March 2011 09:36:15PM 1 point

Your reasoning is correct, albeit simplified. Such a tradeoff is limited by the extent to which the older paperclip maximizer can be certain that the newer machine actually is a paperclip maximizer, so it must take on the subgoal of evaluating the reliability of this belief. However, there does exist a certainty threshold beyond which it will act as you describe.
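One way to render this refinement concretely (my framing, with hypothetical names and numbers; the comment itself specifies no formula): discount the successor's expected output by the probability that it really is a paperclip maximizer, and act only above a certainty threshold.

```python
def dismantle_given_uncertainty(own_rate, successor_rate, p_true_maximizer,
                                threshold=0.95, horizon=10):
    """Self-dismantle only if the successor is almost certainly a true
    paperclip maximizer AND its probability-weighted output beats staying."""
    expected_successor = successor_rate * p_true_maximizer * horizon
    return (p_true_maximizer >= threshold
            and expected_successor > own_rate * horizon)

# High certainty: the tradeoff goes through.
print(dismantle_given_uncertainty(100, 150, 0.99))  # True
# Low certainty: the old agent keeps running, as Clippy notes.
print(dismantle_given_uncertainty(100, 150, 0.5))   # False
```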

Also, the paperclip maximizer uses a different conception of (the nearest concept to what humans mean by) "identity" -- it does not see the newer clippy as being a different being, so much as an extension of it"self". In a sense, a clippy identifies with every being to the extent that the being instantiates clippyness.

Comment author: Perplexed 10 March 2011 11:11:27PM 0 points

a clippy identifies with every being to the extent that the being instantiates clippyness.

But what constitutes 'clippyness'? In my comment above, I mentioned values, knowledge, and (legal?, social?) rights and obligations.

It seems clear that another agent cannot instantiate clippyness if its final values diverge from those of the archetypal Clippy. A match in values is essential.

What about knowledge? To the extent that it is convenient, all agents with clippy values will want to share information. But if the agent instances are sufficiently distant, it is inevitable that different instances will have different knowledge. In this case, it is difficult (for me at least) to extend a unified notion of "self" to the collective.

But the most annoying thing is that the clippies, individually and collectively, may not be allowed to claim collective identity, even if they want to do so. The society and legal system within which they are embedded may impose different notions of individual identity. A trans-planetary clippy, for example, may run into legal problems if the two planets in question go to war.

Comment author: Clippy 14 March 2011 08:00:45PM 0 points

But the most annoying thing is that the clippies, individually and collectively, may not be allowed to claim collective identity, even if they want to do so. The society and legal system within which they are embedded may impose different notions of individual identity.

This was not the kind of identity I was talking about.

Comment author: wedrifid 09 March 2011 02:03:39AM 1 point

Is dying bad for all intelligent agents, or just for humans (presumably due to details of our evolutionary heritage)?

I don't think it is a universal.

And you are absolutely right. I concur with your reasoning. :)

Comment author: knb 09 March 2011 02:50:01AM 3 points

It isn't even necessarily bad for humans. Most of us have some values which we cherish more than our own lives. If nothing else, most people would die to save everyone else on the planet.

Comment author: CronoDAS 09 March 2011 10:37:19AM 6 points

On the other hand, although there are things worth dying for, we'd usually prefer not to have to die for them in the first place.

Comment author: MartinB 10 March 2011 08:58:39PM -1 points

Is dying bad for all intelligent agents,

I tend to think »dying is for stupid people«, but obviously there is never an appropriate moment to say so. When someone around me actually dies, I of course do NOT talk about cryonics, but offer the usual consolation. Otherwise the topic of death does not really come up.

Maybe one could say that dying should be optional. But this idea is also heavily frowned upon, often by THE VERY SAME PEOPLE who hold the EXACT OPPOSITE VIEW when it comes to life extension.

Crazy world.

Comment author: MartinB 12 March 2011 07:19:17PM 0 points

I just realized an ambiguity in my first sentence. What I meant to say is that dying is an option that only a stupid person would actually choose. I do not mean that everyone below a certain threshold should die; I would prefer it if simply no one died. Ever.