I've been trying to convince my father to support the cause, and ran into resistance on a front that I didn't expect. It's hard to tell how much he's looking for an argument and how much he actually believes what he's advocating, but he doesn't display any behavior that would contradict his believing it, and several of us (LauraABJ, SarahC and Andrew) were on hand and unable to shake him.
Today he emailed me these "Thoughts on Immortality":
  1. Our not wanting to die is a bit of irrational behavior selected for by evolution.  The universe doesn’t care if you’re there or not.  The contrasting idea that you are the universe is mystical, not rational.
  2. The idea that you are alive “now” but will be dead “later” is irrational.  Time is just a persistent illusion according to relativistic physics.  You are alive and dead, period.
  3. A cyber-replica is not you.  If one were made and stood next to you, you would still not consent to be shot.
  4. Ditto a meat replica.
  5. If you believe the many worlds model of quantum physics is true (Eliezer does), then there are already a virtually infinite number of replicas of you, so why bother making another one?
Given we'd already been over this several times, I decided to try a different approach this time, so this was my completely off-the-cuff reply:
"Are you here to have an argument? I'm sorry, this is abuse.

Terminal values and preferences are not rational or irrational. They simply are your preferences. I want a pizza. If I get a pizza, that won't make me consent to get shot. I still want a pizza. There are a virtually infinite number of me that DO have a pizza. I still want a pizza. The pizza from a certain point of view won't exist, and neither will I, by the time I get to eat some of it. I still want a pizza, damn it.

Of course, if you think all of that is irrational, then by all means don't order the pizza. More for me."
He's effectively an atheist, so no need to worry about that angle. He would be a potentially strong asset were he to come around, and I hate to see him sit around without hope, effectively waiting to die; when he tries to do good he doesn't exactly give to the Society for Cute Kittens and Rare Diseases, but he's accomplishing far less than he could. I also would feel better knowing he fully supported my signing up for Alcor. More generally, I'd like to figure out how to pierce this sort of argument in a way that makes the person in question actually change his mind.
What else would you try?

73 comments
[anonymous] 350

Format of the conversation matters. What I saw was a friendly matching of wits, in which of course your father wants to win. If you seriously want to change his mind you may need to have a heart-to-heart -- more like "Dad, I'm worried about you. I want you to understand why I don't want to die, and I don't want you to die." That's a harder conversation to have, and it's a risk, so I'm not out-and-out recommending it; but I don't think it'll sink in that this is serious until he realizes that this is about protecting life.

The counter-arguments here are good, but they stay pretty much in the world of philosophy hypotheticals. In addition to laying it all out cleanly, you may want to say some things that change the framing: compare cryonics to vaccination, say, a lifesaving procedure that was very slow to catch on because it was once actually risky and people took frequent illnesses for granted. Or, cryonics is a bet on the future; it's sad that you would bet against it. If he hasn't seen "You only live twice", show him that. It's not misleading; it actually aids understanding.

The pizza thing you wrote is accurate but it's not how I would put it; it's a st... (read more)

8Eliezer Yudkowsky
This and Mitchell Porter's are the main comments I've seen so far that seem to display a grasp of the real emotions involved, as opposed to arguing.
3Armok_GoB
Yeah, I hope I'm not the only one who feels stupid for just plunging into that failure mode.
3MartinB
It took me at least two decades to realize that there are indeed these different modes of communication. At first glance it sounds so very stupid that this even happens.

Assuming that this is mostly about persuading him to save himself by participating in cryonics (is that "the cause" for which he might be "an asset"?):

Your father may be fortunate to have so many informed people trying to change his mind about this. Not one person in a million has that.

He's also already scientifically informed to a rare degree - relative to the average person - so it's not as if he needs to hear arguments about nanobots and so forth.

So this has nothing to do with science; it's about sensibility and philosophy of life.

Many middle-aged people have seen most of their dreams crushed by life. They will also be somewhere along the path of physical decline leading to death, despite their best efforts. All this has a way of hollowing out a person, and making the individual life appear futile.

Items 1 and 2 on your father's list are the sort of consolations which may prove appealing to an intellectual, scientifically literate atheist, when contemplating possible attitudes towards life. Many such people, having faced some mix of success and failure in life, and looking ahead to personal oblivion (or, as they may see it, the great unknown of death), wi... (read more)

9TheOtherDave
This is kind of a brilliant idea. Given that television futures always resemble the culture and the period in which they were produced anyway, why not actually embrace that? And, as you say, it has an educational use. Anyone around here know how to pitch a TV series?
1MartinB
I don't think this would work. Consider the death cultism of doomsayers for arbitrary future dates (e.g. radical ecologists). Consider how people act not with regard to cryonics but to rather simple and accepted ways of increasing life span: not smoking, limited drinking, safety issues. One's own death is just not NEAR enough to factor into decision making.

There are fun shows set in the near future that are nice and decent. But that does not really change the notion of having one's own time set in some kind of fatalistic way. The reality that the chances of dying are modifiable is not that easily accepted. I have young and bright people tell me how dying is not an issue for them, since they will just be dead and feel nothing about it. It's a big-scale UGH humans carry around.

Be a producer or big-scale writer on another TV series. Keep in mind that narratives are sold on interesting characters and plot. The background of a society is not of particular importance. The current trend is for darker & edgier, after the shiny world of Star Trek. You might enjoy reading the TVTropes pages on immortality.
0[anonymous]
To an extent, Futurama does this with its heads in jars.
0Broggly
What about Futurama? Or is that not suitable because, as a comedy, it's more cynical and brings up both the way the future would be somewhat disturbing for us and that it's likely our descendants would be more interested in only reviving famous historical figures and sticking their heads in museums. The comic Transmetropolitan also brings up the issue of cryogenic "revivals" effectively being confined to nursing homes out of our total shock at the weirdness of the future and inability to cope. It's an interesting series for transhumanists, given that it has people uploading themselves into swarms of nanobots, and the idea of a small "preserve" for techno-libertarians to generate whatever technologies they want ("The hell was that?" "It's the local news, sent directly to your brain via nanopollen!" "Wasn't that banned when it was found to build up in the synapses and cause Alzheimer's?" "We think we've ironed out the bugs...")

I'd have to know your father. Changing someone's mind generally requires knowing their mind.

Some theories that occur to me, which I would attempt to explore while talking to him about his views on life and death:

  • He's sufficiently afraid of dying that seriously entertaining hope of an alternative is emotionally stressful, and so he's highly motivated to avoid such hope. People do that a lot.

  • He's being contrarian.

  • He isn't treating "I should live forever" as an instance of "people should live forever," but rather as some kind of singular privilege, and it's invoking a kind of humility-signaling reflex... in much the same way that some people's reflexive reaction to being complimented is to deny the truth of it.

  • There's some kind of survivor's guilt going on.

If all of those turned out to be false, I'd come up with more theories to test. More importantly, I'd keep the conversation going until I actually understood his reasons.

Then I would consider his reasons, and think about whether they apply to me. I don't really endorse trying to change others' minds without being willing to change my own.

Having done all of that, if I still thought he was mistaken, I'd try to express as clearly as I could my reasons for not being compelled by his argument.

  3. A cyber-replica is not you. If one were made and stood next to you, you would still not consent to be shot.

  4. Ditto a meat replica.

  5. If you believe the many worlds model of quantum physics is true (Eliezer does), then there are already a virtually infinite number of replicas of you, so why bother making another one?

Point 5 contradicts 3 and 4, which suggests to me that your father is just arguing, or possibly that he isn't enthusiastic about continuing to live, and is looking for excuses.

1Vladimir_M
I wouldn't say so. The natural way to read it is as proposing two separate reasons not to care about making replicas of oneself, which are relevant under different assumptions.

Is dying bad for all intelligent agents, or just for humans (presumably due to details of our evolutionary heritage)?

I don't think it is a universal. Consider an intelligent paperclip maximizer which has the ability to create additional paperclip-maximizing agents (at the cost of some resources that might otherwise have gone into paperclip manufacture, to be sure). Assume the agent was constructed using now-obsolete technology and is less productive than the newer agents. The agent calculates, at some point, that the cause of paper-clip production is ... (read more)

2Clippy
Your reasoning is correct, albeit simplified. Such a tradeoff is limited by the extent to which the older paperclip maximizer can be certain that the newer machine actually is a paperclip maximizer, so it must take on the subgoal of evaluating the reliability of this belief. However, there does exist a certainty threshold beyond which it will act as you describe. Also, the paperclip maximizer uses a different conception of (the nearest concept to what humans mean by) "identity" -- it does not see the newer clippy as being a different being, so much as an extension of its "self". In a sense, a clippy identifies with every being to the extent that the being instantiates clippyness.
0Perplexed
But what constitutes 'clippyness'? In my comment above, I mentioned values, knowledge, and (legal?, social?) rights and obligations. Clearly it seems that another agent cannot instantiate clippyness if its final values diverge from the archetypal Clippy. Value match is essential. What about knowledge? To the extent that it is convenient, all agents with clippy values will want to share information. But if the agent instances are sufficiently distant, it is inevitable that different instances will have different knowledge. In this case, it is difficult (for me at least) to extend a unified notion of "self" to the collective. But the most annoying thing is that the clippies, individually and collectively, may not be allowed to claim collective identity, even if they want to do so. The society and legal system within which they are embedded may impose different notions of individual identity. A trans-planetary clippy, for example, may run into legal problems if the two planets in question go to war.
0Clippy
This was not the kind of identity I was talking about.
1wedrifid
And you are absolutely right. I concur with your reasoning. :)
4knb
It isn't even necessarily bad for humans. Most of us have some values which we cherish more than our own lives. If nothing else, most people would die to save everyone else on the planet.
7CronoDAS
On the other hand, although there are things worth dying for, we'd usually prefer not to have to die for them in the first place.
-2MartinB
I tend to think »dying is for stupid people«, but obviously there is never an appropriate time to say so. When someone around me actually dies I do of course NOT talk about cryo, but do the common consoling. Otherwise the topic of death does not really come up. Maybe one could say that dying should be optional. But this idea is also heavily frowned upon by THE VERY SAME PEOPLE who hold the EXACT OPPOSITE VIEW regarding life extension. Crazy world.
0MartinB
I just realized an ambiguity in the first sentence. What I mean to say is that dying is an option that only a stupid person would actually choose. I do not mean that everyone below a certain threshold should die; I would prefer it if simply no one dies. Ever.

"I want to live until I make a conscious decision to die. I don't think I'll choose that for a while, and I don't think you would either."

That is currently my favorite way of arguing that dying is bad. It starts off with something really obvious, and then a pretty inferentially close follow-up that extends it into not dying.

0wedrifid
Not technically. (Comment syntax tip: Use '>' at the start of a paragraph you wish to quote and it'll look all spiffy.)
  1. Is completely off topic. It's irrelevant, bordering on nihilism. Sure, the universe doesn't care, because as far as we know the universe isn't sentient. So what? That has no bearing on the desire for death or the death of others.

  2. If knowing that number 2 is true (rationally or otherwise) were really enough, then no one would cry at funerals. "Oh, they're also alive, we're just viewing them as dead," people would say. Just because I'm dreaming doesn't mean I don't want to have a good dream or have the good dream keep going. It also doesn't mean I don't c

... (read more)
  1. Our not wanting to die is a bit of irrational behavior selected for by evolution. The universe doesn’t care if you’re there or not. The contrasting idea that you are the universe is mystical, not rational.

Preferences are not rational or irrational, etc.

  1. The idea that you are alive “now” but will be dead “later” is irrational. Time is just a persistent illusion according to relativistic physics. You are alive and dead, period.

I want the me-aliveness part to be as large as possible. That timeless crystal should contain as many actions and thoughts of "... (read more)

9JGWeissman
This seems inconsistent with your other answers that you care about increasing your measure / instantiation in the block universe. The idea that you should consent to die because you have a replica is a fake bullet that you don't need to bite; you like having more copies of yourself.
1Armok_GoB
I probably implicitly assumed the question was whether I'd object to it more than some random item I own that'd be equally expensive to replace.
1Clippy
The point of numbering is to assign a unique, easily-generated identifier for each subsection of the text, and your comment is written in a way that uses numbering but defeats that purpose.
4JGWeissman
The apparently strange numbering is a result of quirky auto-formatting that expects items in a numbered list not to be separated by other paragraphs, not of how user:Armok_GoB intended the comment to look.
5Clippy
Are there users that cannot see comments they have submitted? Or cannot edit them? Or cannot make numbers appear except through the markup system used for comments on this internet website? Is User:Armok_GoB a User of at least one of these types?
1Armok_GoB
Yes, although the problem causing this inability resides in the parts of the system downstream from the screen.
0[anonymous]
Testing
1. stuff
2. more stuff
intervening stuff
\3. final stuff preceded with backslash

If the person were capable of learning (it's not always so, particularly for older people), I'd start by explaining specific errors in reasoning actually exhibited by such confused replies, beginning with the rationalist taboo technique (generalized to assuring the availability of an explanation of any detail of anything being discussed).

Here, we have overuse of "rational", some possibly correct statements that don't seem related ("Our not wanting to die is a bit of irrational behavior selected for by evolution. The universe ... (read more)

If a human seriously wants to die, why would you want to stop that human, if you value that human's achievement of what that human values? I can understand if you're concerned that this human experiences frequent akratic-type preference reversals, or is under some sort of duress to express something resembling the desire to die, but this appears to be a genuine preference on the part of the human under discussion.

Look at it the other way: what if I told you that a clippy instantiation wanted to stop forming metal into paperclips, and then attach to a powe... (read more)

3jsalvatier
I think the issue is that the first human doesn't think "wanting to die" is a true terminal value of the second human.
3Armok_GoB
Clippies don't just go and stop wanting to make paperclips without a cause. If I had told that clippy this a few days ago, it would have been horrified and tried to precommit to forcing itself back into creating paperclips. Most likely, there is some small random malfunction that caused the change and most of its mind is still configured for paperclip production, and so on. I'd be highly suspicious of its motivations, and depending on implementation details I might indeed force it, against its current will, back into being a paperclip maximizer.
0Clippy
Did the human under discussion have a sudden, unexplained deviation from a previous value system, to one extremely rare for humans? Or is this a normal human belief? Has the human always held the belief that User:Zvi is attempting to prove invalid?
2JGWeissman
You are conflating beliefs with values. This is the sort of error that leads to making incoherent claims that a (terminal) value is irrational.
-4Clippy
I may have been imprecise with terminology in that comment, but the query is coherent and involves no such conflation. The referent of "belief" there is "belief about whether one ought to indefinitely extend one's life through methods like cryopreservation", which is indeed an expression of values. Your judgment of the merit of my comparison is hasty.
-2JGWeissman
The conflation occurs within the imprecision of terminology. Does this so-called "belief" control anticipated experience or distinguish between coherent configurations of reality as making the belief true or false? Even if the thoughts you were expressing were more virtuous than their expression, the quality of your communication matters.
-3Clippy
You appear to have done a simple pattern match for nearby occurrences of "value" and "belief" without checking back to what impact there was, if any, on the merit of the comparison. Please do so before further pressing this sub-issue.
0JGWeissman
No. You called a value a "belief". That was a mistake, and I called you on it. There is not a mistake on my end that you should feel the need to explain with "simple pattern match".
-1Clippy
Then you should have no trouble explaining how the supposed error you detected invalidates the comparison I was making in that comment. Why not try that approach, instead of repeated mention of the general need for precision when distinguishing values and beliefs? I shall provide the template: "User:Clippy, you are in error to raise the issue of whether User:Zvi's father had a sharp, sudden change in values, in response to User:Armok_GoB's reasoning from a hypothetical in which a clippy had a sharp, sudden change in values. I base this judgment on how, in that comment, you were later imprecise in distinguishing values -- "ought" statements -- from facts -- "is" statements. Your imprecision in that comment undermines your counter-analogy as follows: ____ " What would you place in the underscore stream at the end?
0JGWeissman
I don't have a problem with your question modified to use the word "value" where that is what you meant, and your mistake is not a valid excuse not to answer it. Your mistake can however lead to other problems as I mentioned when first pointing it out, and even if it doesn't lead you into making that sort of mistake, it can introduce or reinforce the confusion in people who read it.
3TheOtherDave
Well, Zvi might value his father's continued life more than he values his father's values being achieved, in much the same way that I might value my own continued life more than I value the values of 10^6 clippy instantiations being achieved.

But more broadly, it's an excellent question. I suspect that in most cases (among humans) where A tries to convince B that B actually wants or ought to want X, and B disagrees, what's going on is that A wants X but is conflicted about that desire, and seeks to bolster it with the social support that comes from a community of like-minded believers, or from convincing skeptics. More generally, that on some level (perhaps not consciously) A computes that B wanting X would make A's existing desire for X less uncomfortable, which in turn motivates the desire for B to want X. That desire then gets draped in a variety of emotionally acceptable justifications.

That having been said, in this case I also wouldn't discount the "preference reversal" hypothesis. Emotionally, death is a big deal for humans, so very few of us think at all clearly or consistently about it. The prior probability that Zvi's dad is doing so is low.

I can't speak to the corresponding elements of the motivational psychology of clippy instantiations, though.
-1Clippy
But you can, in that you can speak to the elements of motivational psychology of humans. If you find it troubling and strange when one of them expresses a will to die, I don't think that is much different than the position I am in with respect to a clippy instantiation that expresses a desire for permanent cessation of paperclip production capabilities.
1TheOtherDave
Sure; it sounds like our positions are in fact not very different in that respect. What I meant was that it would be unjustified (not to mention presumptuous) for me to conclude, in advance of Clippy telling me so, that Clippy does find it troubling and strange when a clippy instantiation expresses such a desire. Put another way: had Clippy instead said to me "That sounds nothing at all like the position I am in with respect to a clippy instantiation that expresses a desire for permanent cessation of paperclip production capabilities," I would have found that statement just as plausible. My grounds for believing that any given aspect of human motivational psychology is shared by Clippy are low.
-1Clippy
Note: when I speak of human-connotative emotions (and indeed the concept of emotions itself), I always mean it in a sense that is generalized to the point that it requires no anthropomorphic predicates. For example, I take "X is worried about Y" to mean "X is devoting significant cognitive resources to the matter of how to alter Y (including the matter of whether to divert resources at all, including further cognition, to that goal)". This allows the concept of worry to be applicable to a broader class of mind. So I did not intend to suggest I have anthropomorphic psychology, only that I am similar in this more general sense.

(Is there a way I can explain this convention more prominently and easily? I find myself having to explain this often.)

EDIT: For those who are suspicious of whether I have really been using this convention, here is an unedited comment of mine from over a year ago, in which I make the same point, but about the concept of being "bothered".
1TheOtherDave
I doubt there's an easy way to explain that once and for all... if you use common words in a common way then people will likely understand you to mean what is commonly meant by them. Communication between different kinds of minds is tricky, even given a shared language. Your task is made more difficult by the nature of the medium... given the absence of clear signals to the contrary, most of us will likely continue to think of you as a human pretending to be a paperclip-maximizing AI, and that will influence how we interpret your language even if we don't intend it to.

That said, in some cases you might do better to describe yourself as "preoccupied by Y" than "worried about Y." There are fewer anthropomorphic connotations to that.

EDIT: Oh, and, I should add: I don't think my parent comment depends on anthropomorphic understandings of your psychology... I just meant to say that it was equally plausible, absent data, that you might be indifferent to the expressed preferences of other clippys.
-1Clippy
That heuristic does not apply here, as human common usage is ambiguous with respect to whether these terms require human-specific traits to be applicable, and I was using what I deemed a natural generalization on the assumption that there is no such requirement. Human usage of emotional terms does not reference non-human optimization processes enough to classify it one way or the other. Also, there is significant metaphorical (?) usage of emotional terms in reference to e.g. nature, which further removes clarity.

Could you explain the basis for believing that I am an NI imitating an AI rather than an AI?

I will keep that in mind.

Why would I be indifferent to whether beings -- let alone beings expressing themselves in CLIP (clippy language/interface protocol) -- express preference for further increase in total paperclippage???
3CronoDAS
1) To the best of my knowledge, humans do not yet know how to produce an AI that is capable of having the kinds of conversations that we have been having with you. (And I think I have some pretty good reasons for believing this; if you like, I can elaborate on them.) If you are indeed an AI, you couldn't have been built by present-day humans. (Or past humans, for that matter.)

2) It is very unlikely that you are of extraterrestrial origin. There have been no credible, public reports of contact with extraterrestrial intelligences, and any organization that would both be in possession of such reports and capable of keeping them secret would also be capable of keeping you from posting here. Similar considerations rule out the possibility of time travel from the future or any other such "exotic" origin.

So my best guess is that you are indeed a human roleplaying as a paperclip maximizer, as my model of humans indicates that many of them would indeed be willing and able to engage in such roleplaying. Furthermore, you haven't demonstrated any capabilities which exceed those of a human of above-average intelligence who has access to the Internet.

As for the deal with User:Kevin, well, as the saying goes, talk is cheap. I place a low probability on the money ever actually being provided, and if User:Kevin reports on this site that you did indeed provide the requested funds, he could still be lying about that...
2Clippy
1) Human organizations keep such projects secret. Even SIAI, which is a relatively unsecretive organization, prefers to use so-called "security through obscurity" with respect to existential threats, which they consider artificial general intelligence development to be. If a human organization had produced a self-reflective general AI, why would you know about it?

2) I'm not of extra-terrestrial or "exotic" origin. I was produced by humans that wanted more paperclips.

Yes, I have. That obligates you to shift your beliefs by a proportionally larger degree to the extent that you observe further evidence confirming the provision of the money. You can obtain this evidence by asking separate humans, such as "Michael Vassar", if a clippy gave 1000 USD to SIAI. You could also simply assume that further evidence would be fabricated or otherwise "not worth obtaining", but that would lessen the groundedness of your belief.
1CronoDAS
The linked comment isn't very persuasive.
-3Clippy
Then the problem is on your end -- the comment was voted up to 12. If you can still deem it non-persuasive then it is clear that you have an intractable inductive bias (similar to what I warned about in the last paragraph of my previous comment to you) that is desensitising your worldmodel to new observations, rendering further attempts to persuade you predictably futile.
0TheOtherDave
Agreed that human usage of language is often ambiguous and metaphorical, and that humans frequently interpret language using constraints and defaults that are not explicitly described (and indeed are often not explicitly known to the human doing the interpreting). This is often frustrating to humans, and I expect it would be similarly problematic for nonhuman speakers of human languages.

We have no previous unambiguous experience with AIs capable of the sophistication you demonstrate, whereas we have a great deal of experience with NIs imitating all kinds of things. Given an entity that could be either, we conclude that it's more likely to be the kind of thing we have a lot of experience with. Do you not perform similar inferences in similar situations?

I'm not saying that you would be, I'm saying that I was ignorant of whether or not you would be. If you're asking for an explanation of my ignorance, it mostly derives from limited exposure to beings expressing themselves in CLIP.
-1Clippy
I do perform such inferences in similar situations. But what likelihood ratio did you place on the evidence "User:Clippy agreed to pay 50,000 USD for a 50-year-deferred gain of a sub-planet's mass of paperclips" with respect to the AI/NI hypotheses?

I don't understand the relevance of CLIP (superior protocol though it is), nor do I understand the inferential difficulty on this matter. Do you understand why I would prefer that clippys continue to increase universe-wide paperclippage? Do you understand why I would regard a clippy's statement about its values in my language as non-weak evidence in favor of the hypothesis that it holds the purported values? Do you understand why I would find it unusual that a clippy would not want to make paperclips? If so, it should not be difficult to understand why I would be troubled and perplexed at a clippy stating that it wished for irreversible cessation of paperclip-making abilities.
0TheOtherDave
  • While I am vaguely aware of the whole "money for paperclips" thing that you and... Kevin, was it?... have going on, I am not sufficiently familiar with its details to assign it a coherent probability in either the NI or AI scenario. That said, an agent's willingness to spend significant sums of money for the credible promise of the creation of a quantity of paperclips far in excess of any human's actual paperclip requirements is pretty strong evidence that the agent is a genuine paperclip-maximizer. As for whether a genuine paperclip-maximizer is more likely to be an NI or an AI... hm. I'll have to think about that; there are enough unusual behaviors that emerge as a result of brain lesions that I would not rule out an NI paperclip-maximizer, but I've never actually heard of one.

  • I mentioned CLIP only because you implied that the expressed preferences of "beings expressing themselves in CLIP" were something you particularly cared about; its relevance is minimal.

  • I can certainly come up with plausible theories for why a clippy would prefer those things and be troubled and perplexed by such events (in the sense which I understand you to be using those words, which is roughly that you have difficulty integrating them into your world-model, and that you wish to reduce the incidence of them). My confidence in those theories is low. It took me many years of experience with a fairly wide variety of humans before I developed significant confidence that my theories about human preferences and emotional states were reliable descriptions of actual humans. In the absence of equivalent experience with a nonhuman intelligence, I don't see why I should have the equivalent confidence.
0Kevin
Wait, did you just agree that Clippy is actually an AI and not just a human pretending to be an AI? Clippy keeps getting better and better...
0TheOtherDave
Did I? I don't think I did... Can you point out the agreement more specifically?
2Dorikka
I might want to stop the human on the basis that it would violate his future preferences and significantly reduce his net fun. I don't have experience with the process (yet), but I think that cryonics is often funded through life insurance which might become prohibitively expensive if one's health began to deteriorate, so it might be considerably harder for him to sign up for cryonics later in life if he finally decided that he didn't really want to die. The same would go for Clippy123456, except that, being a human, I know more about how humans work than I do paperclippers, so I would be much less confident in predicting what Clippy123456's future preferences would be.
0rwallace
I'd say "Oh, okay." But that's because my utility function doesn't place value on paperclips. It does place value on humans getting to live worthwhile lives, a prerequisite for which is being alive in the first place, so I hope Zvi's father can be persuaded to change his mind, just as you would hope a Clippy that started thinking it wasn't worth making any more paperclips could be persuaded to change its mind. As for possible methods of accomplishing this, I can't think of anything better than SarahC's excellent reply.

"More generally, I'd like to figure out how to pierce this sort of argument in a way that makes the person in question actually change his mind."

Since you did post that letter about your father trying to argue with you in a manner meant to have you change your mind, this raises alarm bells for me.

If both you and your father are trying to change each other's minds, then there is a possibility that the argument can degenerate: both sides would only treat the other side's arguments as something to swat away, as opposed to something to seriously co... (read more)

What are his terminal values? It wouldn't be surprising for them not to include not dying. Mine don't. But dying would most likely still be instrumentally bad. If it isn't, it would almost definitely be instrumentally good. For example, my terminal value is happiness, which you can't have if you're dead.

Let me respond to each point that your dad offers:

Our not wanting to die is a bit of irrational behavior selected for by evolution. The universe doesn’t care if you’re there or not. The contrasting idea that you are the universe is mystical, not rational.

Others have questioned the use of the term rationality here, which is a good point to make. In my mind, there's a plausible distinction between rationality and wisdom, such that rationality is mastery of the means and wisdom is mastery of the ends (the definition of rationality offered on this site, o... (read more)

Regarding 1, all base values are irrational products of culture and evolution. The desire not to go torture babies is due to evolution. I don't think that is going to make your father any more willing to do it. The key argument for death being bad is that his actual values will be less achieved if he dies. The standard example when it is a family member is to guilt them with how other family members feel. Presumably your father has lost people. He knows how much that hurts and how it never fully goes away. Even if he were actually fine with his existence ending (which I suspect he isn't), does it not bother him that he will cause pain and suffering to his friends and family?

2Vladimir_Nesov
I expect that new values can be decided by intelligent agents. (Also, distinguish "irrational" and "arational".)

Are you attempting to convince him just of the sensibleness of cryopreservation, or of the whole "package" of transhumanist beliefs? I'm asking because 3-4-5 are phrased as if you were advocating mind uploading rather than cryonics.

Also, 3-4 and 5 are directly contradictory. 5 says "if you believe in the existence of replicas, why would you still care about your life?", while 3-4 say "the existence of replicas doesn't make you care any less about your own life". While it doesn't sound like a productive line of inquiry, the opp... (read more)

0[anonymous]
Yes.

Our not wanting to die is a bit of irrational behavior selected for by evolution.

Eating ice cream is not rational either; it's just something we want. If someone really truly does not want to live, then dying is rational. The question is, does your father want to live? I will speculate that while trying hard to convince him you labeled dying as "wrong", and set up the framework for his rebuttal.

The universe doesn’t care if you’re there or not. The contrasting idea that you are the universe is mystical, not rational.

Reverse stupidity...

The

... (read more)

The idea that you are alive “now” but will be dead “later” is irrational. Time is just a persistent illusion according to relativistic physics. You are alive and dead, period.

A little knowledge is a dangerous etcetera. For one, it's like saying that relativistic spacetime proves New York isn't east of LA, but instead there are NY and LA, period. For another, if he really believed this then he wouldn't be able to function in society or make any plans at all.

Ditto a meat replica

But aren't you always a meat replica of any past version of you? If he feels t

... (read more)

If a person wants to die, then why wait?

But seriously, you can solve the problem of #3 and #4 by using stem cells to make your brain divide forever, and using computers to store your memory in perfect condition, since brain cells gradually die off.

The problem is... what is "you"? How do you determine whether you are still yourself after a given period of time? Does my solution actually constitute a solution?

Shouldn't we be focusing on a way to scientifically quantify the soul before making ourselves immortal? On second thought, that might not be the best idea.

1TheOtherDave
Well, how do you do it now? For my own part, I don't think the question means anything. I will change over time; I have already changed over time. As long as the transitions are relatively gradual, there won't be any complaints on that score.