dripgrind comments on Abnormal Cryonics - Less Wrong

56 Post author: Will_Newsome 26 May 2010 07:43AM




Comment author: dripgrind 27 May 2010 08:41:58AM 2 points

Here's another possible objection to cryonics:

If an Unfriendly AI Singularity happens while you are vitrified, it's not just that you will fail to be revived - perhaps the AI will scan and upload you and abuse you in some way.

"There is life eternal within the eater of souls. Nobody is ever forgotten or allowed to rest in peace. They populate the simulation spaces of its mind, exploring all the possible alternative endings to their life." OK, that's generalising from fictional evidence, but consider the following scenario:

Suppose the Singularity develops from an AI that was initially based on a human upload. When it becomes clear that there is a real possibility of uploading and gaining immortality in some sense, many people will compete for upload slots. The winners will likely be the rich and powerful. Billionaires tend not to be known for their public-spirited natures - in general, they lobby to reorder society for their benefit and to the detriment of the rest of us. So, the core of the AI is likely to be someone ruthless and maybe even frankly sociopathic.

Imagine being revived into a world controlled by a massively overclocked Dick Cheney or Vladimir Putin or Marquis De Sade. You might well envy the dead.

Unless you are certain that no Singularity will occur before cryonics patients can be revived, or that Friendly AI will be developed and enforced before the Singularity, cryonics might be a ticket to Hell.

Comment author: humpolec 27 May 2010 10:23:42AM 3 points

What you're describing is an evil AI, not just an unFriendly one - unFriendly AI doesn't care about your values. Wouldn't an evil AI be even harder to achieve than a Friendly one?

Comment author: dripgrind 27 May 2010 11:01:26AM 2 points

An unFriendly AI doesn't necessarily care about human values - but if it were based on human neural architecture, I can't see why it wouldn't exhibit good old-fashioned human values like empathy - or sadism.

I'm not saying that AI would have to be based on human uploads, but it seems like a credible path to superhuman AI.

Why do you think that an evil AI would be harder to achieve than a Friendly one?

Comment author: humpolec 27 May 2010 05:30:59PM 4 points

Agreed, an AI based on a human upload gives no guarantee about its values... actually, right now I have no idea how the Friendliness of such an AI could be ensured.

> Why do you think that an evil AI would be harder to achieve than a Friendly one?

Maybe not harder, but less probable - 'paperclipping' seems to be a more likely failure of friendliness than AI wanting to torture humans forever.

I have to admit I haven't thought much about this, though.

Comment author: Baughn 28 May 2010 12:20:31PM 6 points

Paperclipping is a relatively simple failure. The difference between paperclipping and evil is mainly just that - a matter of complexity. Evil is complex, turning the universe into tuna is decidedly not.

On the scale of friendliness, I ironically see an "evil" failure (meaning, among other things, that we're still in some sense around to notice it being evil) becoming more likely as friendliness increases. As we try to implement our own values, failures become more complex, and less likely to be total - thus letting us stick around to see them.

Comment author: wedrifid 02 June 2012 03:21:40AM 1 point

> What you're describing is an evil AI, not just an unFriendly one - unFriendly AI doesn't care about your values. Wouldn't an evil AI be even harder to achieve than a Friendly one?

"Where in this code do I need to put this '-ve' sign again?"

The two are approximately equal in difficulty, assuming equivalent flexibility in how 'Evil' or 'Friendly' the AI would have to be to qualify for either definition.