Alicorn comments on Normal Cryonics - Less Wrong

Post author: Eliezer_Yudkowsky | 19 January 2010 07:08PM | 58 points




Comment author: byrnema 26 January 2010 05:24:45AM *  0 points [-]

I hope you don't mind the clarification, but I think you've underestimated the extent to which I negatively value a scenario in which my daughter comes to mental anguish that I cannot experience with her. (For example, I'm not too concerned about a satisfactory family unit, as long as my daughter is psychologically healthy.)

Compare this to death, which is terrible for reasons other than "death" itself: terrible because I will miss her, because of all the relationships disconnected, and because her potential for living this life won't be fulfilled -- none of which cryonics will give back.

It seems like the stream of consciousness of a person is greatly valued here on Less Wrong, for its own sake independent of relationships. Could you/someone write something to help me relate to that?

Comment author: Alicorn 26 January 2010 05:32:53AM *  3 points [-]

I hope you don't mind the clarification, but I think you've underestimated the extent to which I negatively value a scenario in which my daughter comes to mental anguish that I cannot experience with her. (For example, I'm not too concerned about a satisfactory family unit, as long as my daughter is psychologically healthy.)

I realize this is probably weird coming from me, considering my own cryonics hangup, but we're already assuming they won't revive anyone they can't render passably physically healthy - I think they'd make some effort to take the same precautions regarding psychological health. My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory; generic needs for care and affection in a small child are so obvious I would be astounded if the future didn't have an arrangement in place before they revived any frozen children.

It seems like the stream of consciousness of a person is greatly valued here on Less Wrong, for its own sake independent of relationships. Could you write something to help me relate to that?

I'll try, but I'm not sure exactly what you mean by "the stream of consciousness" or "independent of relationships". I value me (my software), I value you (your software), I prefer that these softwares be executed in pleasant environments rather than sitting around statically - but then, I'd probably cease to value my software in an awful hurry if it had no relationships with other software, and I'd respect a preference on your part to end your own software execution if that seemed to be your real and reasoned desire.

Why do I have these values? Well, people are just so darned special, that's all I can say.

Comment author: Eliezer_Yudkowsky 26 January 2010 05:44:50AM *  4 points [-]

My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory

No it's not. It's just scary.

generic needs for care and affection in a small child are so obvious

You really, really think that this, on the one hand, is "obvious", but on the other hand, a superintelligence is going to look inside your head and go, "Huh, I just can't figure that out."

YOU ARE A SMALL CHILD. We all are. I know that, why can't everyone see it?

Comment author: AdeleneDawner 26 January 2010 05:59:59AM *  3 points [-]

My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory

No it's not. It's just scary.

Am I parsing this correctly? You're intending to say that Alicorn isn't really experiencing what she's reporting that she is, but is instead just making it up to avoid acknowledging a fear of cryonics?

That's fairly obviously wrong: if Alicorn really were scared of cryonics, the easiest thing for her to do would be to ignore the discussions, not try to solve her stated problem.

It's also pretty offensive for you to keep suggesting that. Do you really think you're in a better position to know about her than she is to know about herself? You're implying a severe lack of insight on her part when you say things like that.

Comment author: Eliezer_Yudkowsky 26 January 2010 07:05:24AM *  5 points [-]

I am not suggesting that Alicorn is anything other than what she thinks she is.

But when she suggests that she has psychological problems a superintelligence can't solve, she is treading upon my territory. It is not minimizing her problem to suggest that, honestly, human brains and their emotions would just not be that hard for a superintelligence to understand, predict, or place in a situation where happiness is attainable.

There simply isn't anything Alicorn could feel, or any human brain could feel, which justifies the sequitur, "a superintelligence couldn't understand or handle my problems!" You get to say that to your friends, your sister, your mother, and certainly to me, but you don't get to shout it at a superintelligence because that is silly.

Human brains just don't have that kind of complicated in them.

I am not suggesting any lack of self-insight whatsoever. I am suggesting that Alicorn lacks insight into superintelligences.

Comment author: AdeleneDawner 26 January 2010 09:11:26AM *  3 points [-]

I see at least one plausible case where an AI couldn't solve the problem: all it takes is for none of Alicorn's friends to be cryopreserved and for her brain to require significantly more than 5 hours to naturally perform the neurological changes involved in going from considering someone a stranger to considering them a friend. (I'm assuming that she'd consider speeding up that process to be an unacceptable brain modification. ETA: And that being asked whether a particular solution would be acceptable is a significant part of making that solution acceptable, such that a solution would not be acceptable if it hadn't been suggested to her beforehand. (This is true for me, but may not be similarly true for Alicorn.))

Comment author: Alicorn 26 January 2010 07:12:27AM 3 points [-]

psychological problems

That's a... nasty way to describe one of my thousand shards of desire that I want to ensure gets satisfied.

Comment author: Eliezer_Yudkowsky 26 January 2010 07:20:30AM *  4 points [-]

Your desire isn't the problem. Maybe it was poorly phrased; "psychological challenge" or "psychological task for superintelligence to perform" or something like that. The problem is finding you a friend, not eliminating your desire for one. Sorry that this happened to match a common phrase with a different meaning.

Comment author: Kevin 26 January 2010 07:26:43AM *  1 point [-]

It's just a phrase. If someone isn't being intentionally hurtful, you should remind yourself that a lot of what we are doing here is linguistic games.

This argument might have already gone on too long, but I'm going to try stating what I see as your main objection, to see if I actually understand your true objection.

You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value. You think you'll probably be miserable in the future and you find it hard to believe that the FAI will find you a friend comparable to your current friends. You won't want to accept any type of brain modification or enhancement that would make you not miserable. If you're sufficiently miserable, it's likely that a FAI could change you without your consent, and you prefer death to the chance of that happening.

Comment author: Alicorn 26 January 2010 07:32:27AM 1 point [-]

You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value.

Insert "without my conscious, deliberate, informed consent, and ideally agency".

You think you'll probably be miserable in the future

Replace "you'll probably" with "you are reasonably likely to".

and you find it hard to believe that the FAI will find you a friend comparable to your current friends.

Add "with whom I could become sufficiently close within a brief and critical time period".

You won't want to accept any type of brain modification or enhancement that would make you not miserable.

See first adjustment. n.b.: without my already having been modified, the "informed" part would probably take longer than the brief, critical time period.

If you're sufficiently miserable, it's likely that a FAI could change you without your consent

Yes. Or, perhaps not change me, but prevent me from acting to end my misery in a non-brain-tinkery way.

and you prefer death to the chance of that happening.

For certain subvalues of "that", yes.

Comment author: Alicorn 26 January 2010 06:07:26AM *  2 points [-]

No it's not. It's just scary.

I'm going to outright ignore you on this one. I have been met with incredulity, not mere curiosity ("Can you tell us more about the experiences you've had that let you model this extreme need?"), let alone commiseration ("wow, me too! let's make friends and sign up together and solve each other's problems!") when I have described this need here. This tells me that what I have going on is really weird and nobody here has accurately modeled it. I do not think you can make predictions about this characteristic of mine when you are still so confused about it. A FAI probably could. You aren't one. And since I know more about the phenomenon than you, I'm going to trust my predictions over yours about what the FAI would say on inspecting my brain. I think it'd say "wow, she would not hold up well without any loved ones nearby for longer than a few hours, unless I messed with her in ways she would not approve of."

YOU ARE A SMALL CHILD. We all are. I know that, why can't everyone see it?

You're raving. Perhaps you are deficient in a vitamin or mineral.

Comment author: Eliezer_Yudkowsky 26 January 2010 07:36:48AM 5 points [-]

I am not incredulous that you want friends! I am incredulous that you think not even a superintelligence could get them for you! This has nothing to do with you and your needs and your private inner life and everything to do with superintelligence! It wouldn't even have to do anything creepy! Human beings are simply not that complicated!

Comment author: thomblake 27 January 2010 06:53:37PM 5 points [-]

Upvoted because: with that many exclamation points, how could you be wrong?

Comment author: LucasSloan 26 January 2010 06:20:49AM 1 point [-]

You think the best thing a FAI could do would be to throw up its hands and say, "welp, she's screwed"?

Comment author: Jordan 26 January 2010 06:29:09AM 1 point [-]

Why not? There are likely problems we think are impossible that a superintelligence will be able to solve. But there are also likely problems we think impossible which turn out to actually be impossible.

Comment author: LucasSloan 26 January 2010 06:33:42AM *  1 point [-]

I am very confident that an FAI could, if necessary, create a person to order who would be perfectly tuned to becoming someone's friend in a few hours. How often does this kind of thing happen by accident in kindergarten?

Impossibility should be reserved for things like FTL and reversal of entropy, not straightforward problems of human interaction.

Comment author: Alicorn 26 January 2010 06:35:13AM 1 point [-]

an FAI could, if necessary, create a person to order who would be perfectly tuned to becoming someone's friend in a few hours.

Dude, creeeeeeeeeeepy.

Comment author: LucasSloan 26 January 2010 06:38:39AM 1 point [-]

That's a worst-case scenario. Even if necessary, are you willing to die so as to avoid a little creeeeeeeeeeepiness? Honestly, don't you value your life? Why are you so willing to assume that a superintelligence can't think of any better solutions than you can?

Comment author: Alicorn 26 January 2010 06:41:12AM 1 point [-]

In principle, I'm willing to die to prevent the unethical creation of a person. (I might not act in accordance with this principle if I were presented with a very immediate threat to my survival, which I could avert by unethically creating a person; but the threats here are not immediate enough to cause me to so compromise my ethics.)

Comment author: JGWeissman 26 January 2010 06:39:57AM 1 point [-]

Would it be less creepy if the FAI found an existing person, out of the billions available, with whom you would be very likely to make friends in a few hours?

Comment author: Alicorn 26 January 2010 06:42:46AM 0 points [-]

That would be fine, and the possibility has already been covered (it was described, I think, as "super-Facebook"), but I wouldn't bet on it. Frankly, I'm not even sure I'm comfortable with the level of mind-reading the AI would have to do to implement any of these finer-tuned solutions. I like my mental privacy.

Comment author: Alicorn 26 January 2010 06:25:30AM 0 points [-]

Nope. The best thing it could do would be to retrieve my dead friends and family. But if we're talking about whether I should sign up for cryonics, I'm assuming that's the only way somebody gets to be not dead after having died a while ago. If we have an AI that's so brilliant that it can reconstruct people accurately just by looking at the causal history of the universe and extrapolating backwards, I'm safe whether I sign up or not! And if we have one that can't, I think I'm only safe if I am signed up with at least one loved one.

Comment author: Kaj_Sotala 26 January 2010 05:36:04PM 2 points [-]

The best thing it could do would be to retrieve my dead friends and family.

Out of curiosity - how accurate would the retrieval need to be? For instance, suppose the FAI accessed your memories and reconstructed your friends based on the information found there, extrapolating the bits you didn't know. Obviously they wouldn't be the same people, since the FAI had to make up a lot of stuff neither you nor it knew. But since the main model was a fit to your memories, they'd still seem just like your friends to you. Would you find that acceptable?

Comment author: Alicorn 26 January 2010 05:48:31PM 1 point [-]

No. That would not be okay with me, assuming I knew this about the process.

Comment author: ciphergoth 26 January 2010 05:40:52PM 1 point [-]

My initial reaction is that I would really hate this. It's one of the things that makes me really uneasy about extreme "neural archaeology"-style cryonics: I want an actual reconstruction, not just a plausible one.

Comment author: LucasSloan 26 January 2010 06:27:50AM 2 points [-]

You can think of no scenarios between those two that would entice you to sign up? Your arguments seem really specious to me.

Comment author: Alicorn 26 January 2010 06:30:34AM 0 points [-]

You can think of no scenarios between those two that would entice you to sign up?

Nope. You're welcome to try, though, if you value my life and don't want to try the "befriend me while signed up or on track to become so" route via which several wonderful people are helping.

Comment author: Vladimir_Nesov 27 January 2010 10:40:33AM 0 points [-]

I think the right context for Eliezer's comment is Expected Creative Surprises.

Comment author: byrnema 26 January 2010 05:56:45AM 0 points [-]

I like people too. :)

I agree with Eliezer that any benevolent reviver would be able to figure out how to create conditions that would make a child (and you) happy.

I definitely have in mind a non-benevolent reviver.

Comment author: Jordan 26 January 2010 06:21:44AM *  4 points [-]

Consider this hypothetical situation:

Medical-grade nanobots capable of rendering people immortal exist. They're a one-time injection that protects you from all disease forever. Do you and your family accept the treatment? If so, you're essentially guaranteeing your family will survive until the singularity, at which point a malevolent singleton might take over the universe and do all sorts of nasty things to you.

I agree that cryonics is scarier than the hypothetical, but the issue at hand isn't actually different.

Comment author: byrnema 26 January 2010 06:34:01AM *  -1 points [-]

Children are only helpless for about 10 years. If the singleton came, without warning, within 10 years of my child being born, it would be awful but not my fault. If I had any warning of it coming, and I still chose to have children who then came to harm, it would be my fault.

Comment author: Jordan 26 January 2010 07:06:59AM 3 points [-]

Why does fault matter?

Comment author: byrnema 26 January 2010 05:38:11PM *  1 point [-]

Good question. The reason is that this has recently become an ethical problem for me rather than an optimization problem. Perhaps that is why I think of it in far mode, if that is what I'm doing. But I do know that in ethical mode, it can be the case that you're no longer allowed to base a decision on the computed "average value" ... even small risks or compromises might be unacceptable. If I allow my child to come to harm, and I'm not allowed to do that, then it doesn't matter what advantage I'm gambling for. I perceive that at a certain age they can make their own decision, and then with relief I may sign them up for cryonics at their request.