wedrifid comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

Post author: XiXiDu 14 November 2011 11:40AM

Comment author: wedrifid 17 November 2011 07:20:04AM 1 point

Why is it automatically "bad" to create an AGI that causes human extinction?

It isn't automatically bad. I just don't want it. This is why I said your answer is legitimately "No".

Comment author: Logos01 17 November 2011 07:26:14AM 1 point

Fair enough.

Honest question: If our flesh were dissolved overnight and we instead were instantiated inside a simulated environment -- without our permission -- would you consider this a Friendly outcome?

Comment author: wedrifid 17 November 2011 07:58:23AM 1 point

Potentially, depending on the simulated environment.

Comment author: Logos01 17 November 2011 08:34:56AM 0 points

Assume Earth-like or video-game-like (in the latter case including 'respawns').

Comment author: wedrifid 17 November 2011 08:48:43AM 0 points

Video game upgrades! Sounds good.

Comment author: Logos01 17 November 2011 09:12:46AM 0 points

I believe you mean, "I'm here to kick ass and chew bubblegum... and I'm all outta gum!"