Alicorn comments on Normal Cryonics - Less Wrong

58 Post author: Eliezer_Yudkowsky 19 January 2010 07:08PM


Comment author: Eliezer_Yudkowsky 26 January 2010 07:05:24AM *  5 points [-]

I am not suggesting that Alicorn is anything other than what she thinks she is.

But when she suggests that she has psychological problems a superintelligence can't solve, she is treading upon my territory. It is not minimizing her problem to suggest that, honestly, human brains and their emotions would just not be that hard for a superintelligence to understand, predict, or place in a situation where happiness is attainable.

There simply isn't anything Alicorn could feel, or any human brain could feel, which justifies the sequitur, "a superintelligence couldn't understand or handle my problems!" You get to say that to your friends, your sister, your mother, and certainly to me, but you don't get to shout it at a superintelligence because that is silly.

Human brains just don't have that kind of complicated in them.

I am not suggesting any lack of self-insight whatsoever. I am suggesting that Alicorn lacks insight into superintelligences.

Comment author: Alicorn 26 January 2010 07:12:27AM 3 points [-]

psychological problems

That's a... nasty way to describe one of my thousand shards of desire that I want to ensure gets satisfied.

Comment author: Eliezer_Yudkowsky 26 January 2010 07:20:30AM *  4 points [-]

Your desire isn't the problem. Maybe it was poorly phrased; "psychological challenge" or "psychological task for a superintelligence to perform" might have been better. The problem is finding you a friend, not eliminating your desire for one. Sorry that this happened to match a common phrase with a different meaning.

Comment author: Kevin 26 January 2010 07:26:43AM *  1 point [-]

It's just a phrase. If someone isn't being intentionally hurtful, you should remind yourself that a lot of what we are doing here is linguistic games.

This argument might have already gone on too long, but I'm going to try stating what I see as your main objection, to check whether I actually understand your true objection.

You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value. You think you'll probably be miserable in the future, and you find it hard to believe that the FAI will find you a friend comparable to your current friends. You won't want to accept any type of brain modification or enhancement that would make you not miserable. If you're sufficiently miserable, it's likely that an FAI could change you without your consent, and you prefer death to the chance of that happening.

Comment author: Alicorn 26 January 2010 07:32:27AM 1 point [-]

You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value.

Insert "without my conscious, deliberate, informed consent, and ideally agency".

You think you'll probably be miserable in the future

Replace "you'll probably" with "you are reasonably likely to".

and you find it hard to believe that the FAI will find you a friend comparable to your current friends.

Add "with whom I could become sufficiently close within a brief and critical time period".

You won't want to accept any type of brain modification or enhancement that would make you not miserable.

See first adjustment. n.b.: without my already having been modified, the "informed" part would probably take longer than the brief, critical time period.

If you're sufficiently miserable, it's likely that an FAI could change you without your consent

Yes. Or, perhaps not change me, but prevent me from acting to end my misery in a non-brain-tinkery way.

and you prefer death to the chance of that happening.

For certain subvalues of "that", yes.