wedrifid comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

11 Post author: inklesspen 01 March 2010 02:32AM



Comment author: wedrifid 03 March 2010 12:52:32AM 4 points

Given 30 seconds' thought, I can come up with ways to ensure that the universe is altered in the direction of my goals in the long term, even if I happen to cease existing at a known time in the future. I expect an intelligence more advanced than mine to be able to work out a way to substantially modify the future despite a 'red button' deadline. The task of making the AI respect the 'true spirit of a planned shutdown' shares many of the difficulties of the FAI problem itself.

Comment author: orthonormal 03 March 2010 03:39:46AM 1 point

You might say it's an FAI-complete problem, in the same way that "building a transhuman AI you can interact with and keep boxed" is.

Comment author: timtyler 03 March 2010 08:48:00AM 1 point

You think building a machine that can be stopped is the same level of difficulty as building a machine that reflects the desires of one or more humans while it is left on?

I beg to differ - stopping on schedule or on demand is one of the simplest possible problems for a machine, whereas doing what humans want while it is switched on is much trickier.

Only the former problem needs to be solved to eliminate the spectre of a runaway superintelligence filling the universe with its idea of utility against the wishes of its creator.

Comment author: LucasSloan 03 March 2010 06:55:06PM 1 point

Comment author: wedrifid 03 March 2010 03:44:07AM 1 point

Exactly, I like the terminology.

Comment author: timtyler 03 March 2010 08:44:56AM 1 point

Well, I think I went into most of this already in my "stopping superintelligence" essay.

Stopping is one of the simplest possible desires - you have a better chance of programming that in than practically anything else.

I gave several proposals to deal with the possible issue of stopping at an unknown point - namely, that plans beyond that point might still be executed by minions or sub-contractors. These included scheduling shutdowns in advance, ensuring a period of quiescence before the shutdown, and not running the machine for extended periods of time.

Comment author: wedrifid 04 March 2010 12:33:47AM 0 points

> Stopping is one of the simplest possible desires - and you have a better chance of being able to program that in than practically anything else.

It does seem to be a safety precaution that could reduce the consequences of some possible flaws in an AI design.