Alicorn comments on Normal Cryonics - Less Wrong

Post author: Eliezer_Yudkowsky 19 January 2010 07:08PM


Comment author: Eliezer_Yudkowsky 26 January 2010 05:44:50AM 4 points

My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory

No it's not. It's just scary.

generic needs for care and affection in a small child are so obvious

You really, really think that this, on the one hand, is "obvious", but that, on the other hand, a superintelligence is going to look inside your head and go, "Huh, I just can't figure that out."

YOU ARE A SMALL CHILD. We all are. I know that, why can't everyone see it?

Comment author: Alicorn 26 January 2010 06:07:26AM 2 points

No it's not. It's just scary.

I'm going to outright ignore you on this one. I have been met with incredulity, not mere curiosity ("Can you tell us more about the experiences you've had that let you model this extreme need?"), let alone commiseration ("wow, me too! let's make friends and sign up together and solve each other's problems!") when I have described this need here. This tells me that what I have going on is really weird and nobody here has accurately modeled it. I do not think you can make predictions about this characteristic of mine when you are still so confused about it. An FAI probably could. You aren't one. And since I know more about the phenomenon than you, I'm going to trust my predictions over yours about what the FAI would say on inspecting my brain. I think it'd say "wow, she would not hold up well without any loved ones nearby for longer than a few hours, unless I messed with her in ways she would not approve of."

YOU ARE A SMALL CHILD. We all are. I know that, why can't everyone see it?

You're raving. Perhaps you are deficient in a vitamin or mineral.

Comment author: Eliezer_Yudkowsky 26 January 2010 07:36:48AM 5 points

I am not incredulous that you want friends! I am incredulous that you think not even a superintelligence could get them for you! This has nothing to do with you and your needs and your private inner life and everything to do with superintelligence! It wouldn't even have to do anything creepy! Human beings are simply not that complicated!

Comment author: thomblake 27 January 2010 06:53:37PM 5 points

Upvoted because: with that many exclamation points, how could you be wrong?

Comment author: LucasSloan 26 January 2010 06:20:49AM 1 point

You think the best thing an FAI could do would be to throw up its hands and say, "welp, she's screwed"?

Comment author: Jordan 26 January 2010 06:29:09AM 1 point

Why not? There are likely problems we think are impossible that a superintelligence will be able to solve. But there are also likely problems we think are impossible which turn out to actually be impossible.

Comment author: LucasSloan 26 January 2010 06:33:42AM 1 point

I am very confident that an FAI could, if necessary, create a person to order who would be perfectly tuned to become someone's friend in a few hours. How often does this kind of thing happen by accident in kindergarten?

Impossibility should be reserved for things like FTL and reversal of entropy, not straightforward problems of human interaction.

Comment author: Alicorn 26 January 2010 06:35:13AM 1 point

an FAI could, if necessary, create a person to order who would be perfectly tuned to become someone's friend in a few hours.

Dude, creeeeeeeeeeepy.

Comment author: LucasSloan 26 January 2010 06:38:39AM 1 point

That's a worst-case scenario. Even if it were necessary, are you willing to die so as to avoid a little creeeeeeeeeeepiness? Honestly, don't you value your life? Why are you so willing to assume that a superintelligence can't think of any better solutions than you can?

Comment author: Alicorn 26 January 2010 06:41:12AM 1 point

In principle, I'm willing to die to prevent the unethical creation of a person. (I might not act in accordance with this principle if I were presented with a very immediate threat to my survival, which I could avert by unethically creating a person; but the threats here are not immediate enough to cause me to so compromise my ethics.)

Comment author: LucasSloan 26 January 2010 06:45:06AM 1 point

Why would the creation of such a person be unethical? Eir life would be worth living, and ey would make you happy as well. Human instincts around creepiness are not good metrics when discussing morality.

Comment author: Alicorn 26 January 2010 06:50:59AM 0 points

I think that people should be created by other persons who are motivated, at least in part, by the expectation that they will intrinsically value the person so created. If an FAI created a person for the express purpose of being my friend, it would presumably expect to value the person intrinsically, but that wouldn't be its motivation in creating the person; its motivation in creating the person would have to do with valuing me. And if it modified its motivations to avoid annoying me in this way before it created the person, that would probably have other consequences for its actions that I wouldn't care for, like motivating it to go around creating lots of persons left and right because people are just so darned intrinsically valuable and more are needed.

Comment author: JGWeissman 26 January 2010 06:39:57AM 1 point

Would it be less creepy if the FAI found an existing person, out of the billions available, with whom you would be very likely to make friends in a few hours?

Comment author: Alicorn 26 January 2010 06:42:46AM 0 points

That would be fine, and the possibility has already been covered (it was described, I think, as "super-Facebook"), but I wouldn't bet on it. Frankly, I'm not even sure I'm comfortable with the level of mind-reading the AI would have to do to implement any of these finer-tuned solutions. I like my mental privacy.

Comment author: Jordan 26 January 2010 07:35:42AM 2 points

I'm not sure mind reading would be necessary. I hear Netflix does a pretty good job of guessing which movies people would like.

Comment author: JGWeissman 26 January 2010 06:50:03AM 1 point

I like my mental privacy too, but I am OK with the idea of a non-sentient FAI reading my mind to better predict what it can do for me.

Comment author: Alicorn 26 January 2010 06:56:16AM 0 points

I don't have much expectation of non-sentience in a sufficiently smart AI.

Comment author: LucasSloan 26 January 2010 06:47:39AM 1 point

You like your mental privacy vis-à-vis an (effectively) omnipotent, perfectly moral being more than you value your life?

Comment author: Alicorn 26 January 2010 06:55:48AM 1 point

*thinks*

I value the ability to consciously control which of my preferences are acted on that much. Mental privacy qua mental privacy, perhaps not.

Comment author: MichaelGR 26 January 2010 06:47:08AM -1 points

A "user-friendly" way to do this would be for the FAI to send an avatar/proxy to act as a guide when you wake up. Explain how things work, introduce you to others who you might enjoy the company off, answer any question you might have, help you get set up in a way that works for you, help you locate people who you know that might be alive, etc.

An FAI would know better than we do what we find creepy/uncomfortable/etc., and would probably avoid it as much as possible.

Comment author: Alicorn 26 January 2010 06:25:30AM 0 points

Nope. The best thing it could do would be to retrieve my dead friends and family. But if we're talking about whether I should sign up for cryonics, I'm assuming that's the only way somebody gets to be not dead after having died a while ago. If we have an AI that's so brilliant that it can reconstruct people accurately just by looking at the causal history of the universe and extrapolating backwards, I'm safe whether I sign up or not! And if we have one that can't, I think I'm only safe if I am signed up with at least one loved one.

Comment author: Kaj_Sotala 26 January 2010 05:36:04PM 2 points

The best thing it could do would be to retrieve my dead friends and family.

Out of curiosity - how accurate would the retrieval need to be? For instance, suppose the FAI accessed your memories and reconstructed your friends based on the information found there, extrapolating the bits you didn't know. Obviously they wouldn't be the same people, since the FAI had to make up a lot of stuff neither you nor it knew. But since the main model was a fit to your memories, they'd still seem just like your friends to you. Would you find that acceptable?

Comment author: Alicorn 26 January 2010 05:48:31PM 1 point

No. That would not be okay with me, assuming I knew this about the process.

Comment author: ciphergoth 26 January 2010 05:40:52PM 1 point

My initial reaction is that I would really hate this. It's one of the things that makes me really uneasy about extreme "neural archaeology"-style cryonics: I want an actual reconstruction, not just a plausible one.

Comment author: LucasSloan 26 January 2010 06:27:50AM 2 points

You can think of no scenarios between those two that would entice you to sign up? Your arguments seem really specious to me.

Comment author: Alicorn 26 January 2010 06:30:34AM 0 points

You can think of no scenarios between those two that would entice you to sign up?

Nope. You're welcome to try, though, if you value my life and don't want to take the "befriend me while signed up or on track to become so" route, by which several wonderful people are already helping.

Comment author: Vladimir_Nesov 27 January 2010 10:40:33AM 0 points

I think the right context for Eliezer's comment is Expected Creative Surprises.