timtyler comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

11 Post author: inklesspen 01 March 2010 02:32AM


Comment author: timtyler 03 March 2010 08:44:56AM * 1 point

Well, I think I went into most of this already in my "stopping superintelligence" essay.

Stopping is one of the simplest possible desires - and you have a better chance of being able to program that in than practically anything else.

I gave several proposals for dealing with a possible issue with stopping at an unknown point: plans extending beyond that point might still be executed by minions or sub-contractors. These included scheduling shutdowns in advance, ensuring a period of quiescence before the shutdown, and not running for extended periods of time.
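The shutdown discipline described above (a shutdown scheduled in advance, a quiescent window before it, and a bounded total runtime) could be sketched roughly as follows. This is a toy illustration only; `run_agent`, `max_runtime`, and `quiescence` are invented names, not anything from the essay:

```python
import time

def run_agent(tasks, max_runtime=10.0, quiescence=2.0):
    """Run tasks until a shutdown scheduled in advance, refusing to
    dispatch new work during the final `quiescence` seconds so that
    nothing is left executing past the stop point."""
    start = time.monotonic()
    shutdown_at = start + max_runtime  # shutdown time fixed up front
    completed = []
    for task in tasks:
        if time.monotonic() >= shutdown_at - quiescence:
            break  # inside the quiescent window: start nothing new
        completed.append(task())  # each task finishes before the stop
    return completed
```

The point of the quiescent window is that any work still in flight can finish (or be wound down) before the scheduled stop, rather than being cut off mid-plan with sub-tasks still delegated elsewhere.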

Comment author: wedrifid 04 March 2010 12:33:47AM 0 points

Stopping is one of the simplest possible desires - and you have a better chance of being able to program that in than practically anything else.

It does seem to be a safety precaution that could reduce the consequences of some possible flaws in an AI design.