Eliezer_Yudkowsky comments on Normal Cryonics - Less Wrong

Post author: Eliezer_Yudkowsky 19 January 2010 07:08PM




Comment author: AdeleneDawner 26 January 2010 05:59:59AM 3 points

> > My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory
>
> No it's not. It's just scary.

Am I parsing this correctly? You're intending to say that Alicorn isn't really experiencing what she's reporting that she is, but is instead just making it up to avoid acknowledging a fear of cryonics?

That's fairly obviously wrong: if Alicorn really were scared of cryonics, the easiest thing for her to do would be to ignore the discussions, not try to solve her stated problem.

It's also pretty offensive for you to keep suggesting that. Do you really think you're in a better position to know about her than she's in to know about herself? You're implying a severe lack of insight on her part when you say things like that.

Comment author: Eliezer_Yudkowsky 26 January 2010 07:05:24AM 5 points

I am not suggesting that Alicorn is anything other than what she thinks she is.

But when she suggests that she has psychological problems a superintelligence can't solve, she is treading upon my territory. It is not minimizing her problem to suggest that, honestly, human brains and their emotions would just not be that hard for a superintelligence to understand, predict, or place in a situation where happiness is attainable.

There simply isn't anything Alicorn could feel, or any human brain could feel, which justifies the sequitur, "a superintelligence couldn't understand or handle my problems!" You get to say that to your friends, your sister, your mother, and certainly to me, but you don't get to shout it at a superintelligence because that is silly.

Human brains just don't have that kind of complicated in them.

I am not suggesting any lack of self-insight whatsoever. I am suggesting that Alicorn lacks insight into superintelligences.

Comment author: AdeleneDawner 26 January 2010 09:11:26AM 3 points

I see at least one plausible case where an AI couldn't solve the problem: All it takes is for none of Alicorn's friends to be cryopreserved and for it to require significantly more than 5 hours for her brain to naturally perform the neurological changes involved in going from considering someone a stranger to considering them a friend. (I'm assuming that she'd consider speeding up that process to be an unacceptable brain modification. ETA: And that being asked if a particular solution would be acceptable is a significant part of making that solution acceptable, such that suggested solutions would not be acceptable if they hadn't already been suggested. (This is true for me, but may not be similarly true for Alicorn.))

Comment author: Alicorn 26 January 2010 07:12:27AM 3 points

> psychological problems

That's a... nasty way to describe one of my thousand shards of desire that I want to ensure gets satisfied.

Comment author: Eliezer_Yudkowsky 26 January 2010 07:20:30AM 4 points

Your desire isn't the problem. Maybe it was poorly phrased; "psychological challenge" or "psychological task for superintelligence to perform" or something like that. The problem is finding you a friend, not eliminating your desire for one. Sorry that this happened to match a common phrase with a different meaning.

Comment author: Kevin 26 January 2010 07:26:43AM 1 point

It's just a phrase. If someone isn't being intentionally hurtful, you should remind yourself that a lot of what we are doing here is linguistic games.

This argument might have already gone on too long, but I'm going to try stating what I see as your main objection, to check whether I actually understand your true objection.

You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value. You think you'll probably be miserable in the future, and you find it hard to believe that the FAI will find you a friend comparable to your current friends. You won't want to accept any type of brain modification or enhancement that would make you not miserable. If you're sufficiently miserable, it's likely that an FAI could change you without your consent, and you prefer death to the chance of that happening.

Comment author: Alicorn 26 January 2010 07:32:27AM 1 point

> You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value.

Insert "without my conscious, deliberate, informed consent, and ideally agency".

> You think you'll probably be miserable in the future

Replace "you'll probably" with "you are reasonably likely to".

> and you find it hard to believe that the FAI will find you a friend comparable to your current friends.

Add "with whom I could become sufficiently close within a brief and critical time period".

> You won't want to accept any type of brain modification or enhancement that would make you not miserable.

See first adjustment. n.b.: without my already having been modified, the "informed" part would probably take longer than the brief, critical time period.

> If you're sufficiently miserable, it's likely that an FAI could change you without your consent

Yes. Or, perhaps not change me, but prevent me from acting to end my misery in a non-brain-tinkery way.

> and you prefer death to the chance of that happening.

For certain subvalues of "that", yes.