wedrifid comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

3 Post author: XiXiDu 14 November 2011 11:40AM




Comment author: wedrifid 16 November 2011 08:04:21PM 1 point

Are you trying to make sure a bad Singularity happens?

If Logos is seeking it, then I assume it is not something he considers bad. Presumably because he thinks intelligence is just that cool. Pursuing the goal necessarily results in human extinction and tiling the universe with computronium; I call that Bad, but he should still answer "No". (This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)

Comment author: Logos01 17 November 2011 07:11:55AM 0 points

Pursuing the goal necessarily results in human extinction and tiling the universe with computronium; I call that Bad, but he should still answer "No".

As I've asked elsewhere, with resoundingly negative results:

Why is it automatically "bad" to create an AGI that causes human extinction? If I value ever-increasingly-capable sentience... why must I be anthropocentric about it? If I were to view a recursively improving AGI that is sentient as a 'child' or 'inheritor' of "the human will" -- then why should it be so awful if humanity were to be rendered obsolete or even extinct by it?

I do not, furthermore, define "humanity" in so strict terms as to require that it be flesh-and-blood to be "human". If our FOOMing AGI were a "human" one -- I personally would find it in the range of acceptable outcomes if it converted the available carbon and silicon of the earth into computronium.

Sure, it would suck for me -- but those of us currently alive will die anyway, and over a long enough timeline the survival rate even for the clinically immortal drops to zero.

I ask this question because I feel that it is relevant. Why is "inheritor" non-Friendly AGI "bad"?

(This applies even if I go all cognitive-realism on him and say he is objectively wrong and that he is, in fact, trying to make sure a Bad Singularity happens.)

Caveat: It is possible for me to discuss my own motivations using someone else's valuative framework. So context matters. The mere fact that I would say "No" does not mean that I could never say "Yes -- as you see it."

Comment author: wedrifid 17 November 2011 07:20:04AM 1 point

Why is it automatically "bad" to create an AGI that causes human extinction?

It isn't automatically bad. I just don't want it. This is why I said your answer is legitimately "No".

Comment author: Logos01 17 November 2011 07:26:14AM 1 point

Fair enough.

Honest question: If our flesh were dissolved overnight and we were instead instantiated inside a simulated environment -- without our permission -- would you consider this a Friendly outcome?

Comment author: wedrifid 17 November 2011 07:58:23AM 1 point

Potentially, depending on the simulated environment.

Comment author: Logos01 17 November 2011 08:34:56AM 0 points

Assume Earth-like or video-game-like (in the latter case including 'respawns').

Comment author: wedrifid 17 November 2011 08:48:43AM 0 points

Video game upgrades! Sounds good.

Comment author: Logos01 17 November 2011 09:12:46AM 0 points

I believe you mean, "I'm here to kick ass and chew bubblegum... and I'm all outta gum!"