timtyler comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

11 Post author: inklesspen 01 March 2010 02:32AM


Comments (244)


Comment author: Peter_de_Blanc 03 March 2010 01:01:47AM 2 points

I agree that, with the right precautions, running an unfriendly superintelligence for 1,000 ticks and then shutting it off is possible. But I can't think of many reasons why you would actually want to. You can't use diagnostics from the trial run to help you design the next generation of AIs, because diagnostics provide a channel for the AI to talk at you.

Comment author: timtyler 03 March 2010 09:23:01AM 1 point

The given reason is paranoia. If you are concerned that a runaway machine intelligence might accidentally obliterate all sentient life, then a machine that can shut itself down offers a positive safety feature.

In practice, I don't think we will have to build machines that regularly shut down. Nobody regularly shuts down Google. The point is that, if we seriously think there is good reason to be paranoid about this scenario, then there is a defense that is much easier to implement than building a machine intelligence which has assimilated all human values.

I think this dramatically reduces the probability of the "runaway machine accidentally kills all humans" scenario.