timtyler comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

11 Post author: inklesspen 01 March 2010 02:32AM




Comment author: timtyler 03 March 2010 09:23:01AM 1 point

The given reason is paranoia. If you are concerned that a runaway machine intelligence might accidentally obliterate all sentient life, then a machine that can shut itself down has a positive safety feature.

In practice, I don't think we will have to build machines that regularly shut down. Nobody regularly shuts down Google. The point is that, if we seriously think there is good reason to be paranoid about this scenario, there is a defense that is much easier to implement than building a machine intelligence that has assimilated all human values.

I think this dramatically reduces the probability of the "runaway machine accidentally kills all humans" scenario.