Eliezer_Yudkowsky comments on Normal Cryonics - Less Wrong
I realize this is probably weird coming from me, considering my own cryonics hangup, but we're already assuming they won't revive anyone they can't render passably physically healthy - I think they'd make some effort to take the same precautions regarding psychological health. My psychological need is weird and might be very hard to satisfy, or even to predict what would satisfy it; generic needs for care and affection in a small child are so obvious I would be astounded if the future didn't have an arrangement in place before they revived any frozen children.
I'll try, but I'm not sure exactly what you mean by "the stream of consciousness" or "independent of relationships". I value me (my software), I value you (your software), I prefer that these softwares be executed in pleasant environments rather than sitting around statically - but then, I'd probably cease to value my software in an awful hurry if it had no relationships with other software, and I'd respect a preference on your part to end your own software execution if that seemed to be your real and reasoned desire.
Why do I have these values? Well, people are just so darned special, that's all I can say.
No it's not. It's just scary.
You really, really think that this, on the one hand, is "obvious", but on the other hand, a superintelligence is going to look inside your head and go, "Huh, I just can't figure that out."
YOU ARE A SMALL CHILD. We all are. I know that, why can't everyone see it?
Am I parsing this correctly? You're intending to say that Alicorn isn't really experiencing what she's reporting that she is, but is instead just making it up to avoid acknowledging a fear of cryonics?
That's fairly obviously wrong: If Alicorn really was scared of cryonics, the easiest thing for her to do would be to ignore the discussions, not try to solve her stated problem.
It's also pretty offensive for you to keep suggesting that. Do you really think you're in a better position to know about her than she's in to know about herself? You're implying a severe lack of insight on her part when you say things like that.
I am not suggesting that Alicorn is anything other than what she thinks she is.
But when she suggests that she has psychological problems a superintelligence can't solve, she is treading upon my territory. It is not minimizing her problem to suggest that, honestly, human brains and their emotions would just not be that hard for a superintelligence to understand, predict, or place in a situation where happiness is attainable.
There simply isn't anything Alicorn could feel, or any human brain could feel, which justifies the sequitur, "a superintelligence couldn't understand or handle my problems!" You get to say that to your friends, your sister, your mother, and certainly to me, but you don't get to shout it at a superintelligence because that is silly.
Human brains just don't have that kind of complicated in them.
I am not suggesting any lack of self-insight whatsoever. I am suggesting that Alicorn lacks insight into superintelligences.
I see at least one plausible case where an AI couldn't solve the problem: All it takes is for none of Alicorn's friends to be cryopreserved and for it to require significantly more than 5 hours for her brain to naturally perform the neurological changes involved in going from considering someone a stranger to considering them a friend. (I'm assuming that she'd consider speeding up that process to be an unacceptable brain modification. ETA: And that being asked if a particular solution would be acceptable is a significant part of making that solution acceptable, such that suggested solutions would not be acceptable if they hadn't already been suggested. (This is true for me, but may not be similarly true for Alicorn.))
That's a... nasty way to describe one of my thousand shards of desire that I want to ensure gets satisfied.
Your desire isn't the problem. Maybe it was poorly phrased; "psychological challenge" or "psychological task for superintelligence to perform" or something like that. The problem is finding you a friend, not eliminating your desire for one. Sorry that this happened to match a common phrase with a different meaning.
It's just a phrase. If someone isn't being intentionally hurtful, you should remind yourself that a lot of what we are doing here is linguistic games.
This argument might have already gone on too long, but I'm going to try stating what I see as your main objection, to see if I actually understand your true objection.
You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value. You think you'll probably be miserable in the future, and you find it hard to believe that the FAI will find you a friend comparable to your current friends. You won't want to accept any type of brain modification or enhancement that would make you not miserable. If you're sufficiently miserable, it's likely that a FAI could change you without your consent, and you prefer death to the chance of that happening.
Insert "without my conscious, deliberate, informed consent, and ideally agency".
Replace "you'll probably" with "you are reasonably likely to".
Add "with whom I could become sufficiently close within a brief and critical time period".
See first adjustment. n.b.: without my already having been modified, the "informed" part would probably take longer than the brief, critical time period.
Yes. Or, perhaps not change me, but prevent me from acting to end my misery in a non-brain-tinkery way.
For certain subvalues of "that", yes.
I'm going to outright ignore you on this one. I have been met with incredulity, not mere curiosity ("Can you tell us more about the experiences you've had that let you model this extreme need?"), let alone commiseration ("wow, me too! let's make friends and sign up together and solve each other's problems!") when I have described this need here. This tells me that what I have going on is really weird and nobody here has accurately modeled it. I do not think you can make predictions about this characteristic of mine when you are still so confused about it. A FAI probably could. You aren't one. And since I know more about the phenomenon than you, I'm going to trust my predictions about what the FAI would say on inspecting my brain over yours. I think it'd say "wow, she would not hold up well without any loved ones nearby for longer than a few hours, unless I messed with her in ways she would not approve."
You're raving. Perhaps you are deficient in a vitamin or mineral.
I am not incredulous that you want friends! I am incredulous that you think not even a superintelligence could get them for you! This has nothing to do with you and your needs and your private inner life and everything to do with superintelligence! It wouldn't even have to do anything creepy! Human beings are simply not that complicated!
Upvoted because: with that many exclamation points, how could you be wrong?
You think the best thing a FAI could do would be to throw up its hands and say, "welp, she's screwed"?
Why not? There are likely problems we think are impossible that a superintelligence will be able to solve. But there are also likely problems we think are impossible which turn out to actually be impossible.
I am very confident that an FAI could, if necessary, create a person to order, who would be perfectly tuned to becoming someone's friend in a few hours. How often does this kind of thing happen by accident in kindergarten?
Impossibility should be reserved for things like FTL and reversal of entropy, not straightforward problems of human interaction.
Dude, creeeeeeeeeeepy.
That's a worst-case scenario. Even if it were necessary, are you willing to die just to avoid a little creeeeeeeeeeepiness? Honestly, don't you value your life? Why are you so willing to assume that a superintelligence can't think of any better solutions than you can?
In principle, I'm willing to die to prevent the unethical creation of a person. (I might not act in accordance with this principle if I were presented with a very immediate threat to my survival, which I could avert by unethically creating a person; but the threats here are not immediate enough to cause me to so compromise my ethics.)
Why would the creation of such a person be unethical? Eir life would be worth living, and ey would make you happy as well. Human instincts around creepiness are not good metrics when discussing morality.
Would it be less creepy if the FAI found an existing person, out of the billions available, with whom you would be very likely to make friends in a few hours?
That would be fine, and the possibility has already been covered (it was described, I think, as "super-Facebook") but I wouldn't bet on it. Frankly, I'm not even sure I'm comfortable with the level of mind-reading the AI would have to do to implement any of these finer-tuned solutions. I like my mental privacy.
I'm not sure mind reading would be necessary. I hear Netflix does a pretty good job of guessing which movies people would like.
I like my mental privacy too, but I am OK with the idea of a non-sentient FAI reading my mind to better predict what it can do for me.
You like your mental privacy vis-a-vis an (effectively) omnipotent, perfectly moral being, more than you value your life?
A "user-friendly" way to do this would be for the FAI to send an avatar/proxy to act as a guide when you wake up. Explain how things work, introduce you to others who you might enjoy the company off, answer any question you might have, help you get set up in a way that works for you, help you locate people who you know that might be alive, etc.
A FAI would know better than we do what we find creepy/uncomfortable/etc, and would probably avoid it as much as possible.
Nope. The best thing it could do would be retrieve my dead friends and family. But if we're talking about whether I should sign up for cryonics, I'm assuming that's the only way somebody gets to be not dead after having died a while ago. If we have an AI that's so brilliant that it can reconstruct people accurately just by looking at the causal history of the universe and extrapolating backwards, I'm safe whether I sign up or not! And if we have one that can't, I think I'm only safe if I am signed up with at least one loved one.
Out of curiosity - how accurate would the retrieval need to be? For instance, suppose the FAI accessed your memories and reconstructed your friends based on the information found there, extrapolating the bits you didn't know. Obviously they wouldn't be the same people, since the FAI had to make up a lot of stuff neither you nor it knew. But since the main model was a fit to your memories, they'd still seem just like your friends to you. Would you find that acceptable?
No. That would not be okay with me, assuming I knew this about the process.
My initial reaction is that I would really hate this. It's one of the things that makes me really uneasy about extreme "neural archaeology"-style cryonics: I want an actual reconstruction, not just a plausible one.
You can think of no scenarios between those two that would entice you to sign up? Your arguments seem really specious to me.
Nope. You're welcome to try, though, if you value my life and don't want to try the "befriend me while signed up or on track to become so" route via which several wonderful people are helping.
I think the right context for Eliezer's comment is Expected Creative Surprises.