JGWeissman comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM



Comment author: magfrump 03 February 2010 01:35:55AM 1 point [-]

The altruistic choice is clear

If the AI created enough simulations, it could potentially be more altruistic not to press the reset button.

On the other hand, pressing "reset" or smashing the computer should stop the torture, which necessarily makes it the more altruistic choice if humanity lives forever, but not if ems are otherwise unobtainable and humanity is doomed.

Comment author: JGWeissman 03 February 2010 05:15:00AM 1 point [-]

I was assuming a reasonable chance of humanity developing an FAI, given the containment of this rogue AI. That small chance, multiplied by all the good an FAI could do with the entire galaxy, let alone the universe, should outweigh the bad that can be done within Earth-bound computational processes.
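The expected-value comparison here can be sketched with a toy calculation. All of the numbers below are purely illustrative assumptions (the comment gives no figures); the point is only that a small probability times an astronomically large payoff can dominate a large but bounded cost:

```python
# Toy expected-value sketch of the argument above.
# All magnitudes are hypothetical, chosen only for illustration.
p_fai = 1e-6            # assumed small chance humanity builds an FAI if the rogue AI stays contained
galactic_good = 1e20    # assumed utility an FAI could realize across the galaxy
earth_bound_bad = 1e10  # assumed worst-case disutility from Earth-bound simulated torture

# Expected value of containing the rogue AI: we eat the Earth-bound bad,
# but keep the small chance at the galactic-scale good.
ev_contain = p_fai * galactic_good - earth_bound_bad

# Baseline of not containing it: no FAI chance, taken here as zero.
ev_release = 0.0

print(ev_contain > ev_release)  # True: the tiny chance at galactic good dominates
```

Under these made-up numbers, `1e-6 * 1e20 = 1e14` dwarfs the `1e10` cost, which is the shape of the trade-off the comment is gesturing at.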

I believe that a less convenient world constructed to counter this point would take the problem out of its interesting context.